Oct 13 17:24:13 crc systemd[1]: Starting Kubernetes Kubelet...
Oct 13 17:24:13 crc restorecon[4666]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 13 17:24:13 crc restorecon[4666]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 13 17:24:13 crc restorecon[4666]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 13 17:24:13 crc restorecon[4666]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 13 17:24:13 crc restorecon[4666]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 13 17:24:13 crc restorecon[4666]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 13 17:24:13 crc restorecon[4666]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 13 17:24:13 crc restorecon[4666]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 13 17:24:13 crc restorecon[4666]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 13 17:24:14 crc restorecon[4666]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 13 17:24:14 crc restorecon[4666]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 13 17:24:14 crc restorecon[4666]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 13 17:24:14 crc restorecon[4666]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Oct 13 17:24:14 crc kubenswrapper[4720]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 13 17:24:14 crc kubenswrapper[4720]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Oct 13 17:24:14 crc kubenswrapper[4720]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 13 17:24:14 crc kubenswrapper[4720]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 13 17:24:14 crc kubenswrapper[4720]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 13 17:24:14 crc kubenswrapper[4720]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.929511 4720 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.934989 4720 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.935008 4720 feature_gate.go:330] unrecognized feature gate: OVNObservability Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.935013 4720 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.935018 4720 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.935022 4720 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.935026 4720 feature_gate.go:330] unrecognized feature gate: SignatureStores Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.935031 4720 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.935035 4720 feature_gate.go:330] unrecognized feature gate: NewOLM Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.935039 4720 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.935043 4720 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.935049 4720 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.935055 4720 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.935060 4720 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.935065 4720 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.935071 4720 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.935075 4720 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.935087 4720 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.935093 4720 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.935097 4720 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.935101 4720 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.935105 4720 feature_gate.go:330] unrecognized feature gate: 
PlatformOperators Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.935109 4720 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.935113 4720 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.935117 4720 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.935121 4720 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.935124 4720 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.935130 4720 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.935136 4720 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.935142 4720 feature_gate.go:330] unrecognized feature gate: Example Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.935146 4720 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.935151 4720 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.935155 4720 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.935159 4720 feature_gate.go:330] unrecognized feature gate: GatewayAPI Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.935164 4720 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.935168 4720 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.935172 4720 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.935176 4720 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.935180 4720 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.935199 4720 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.935203 4720 feature_gate.go:330] unrecognized feature gate: InsightsConfig Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.935206 4720 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.935211 4720 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.935214 4720 feature_gate.go:330] unrecognized feature gate: PinnedImages Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.935219 4720 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.935223 4720 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.935228 4720 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode 
Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.935237 4720 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.935241 4720 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.935244 4720 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.935249 4720 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.935253 4720 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.935257 4720 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.935260 4720 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.935264 4720 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.935268 4720 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.935272 4720 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.935277 4720 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.935282 4720 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.935286 4720 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.935291 4720 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.935295 4720 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.935299 4720 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.935305 4720 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
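[Editor's note] The long run of "unrecognized feature gate" warnings above comes from the feature-gate parser being handed OpenShift-specific gate names (OVNObservability, NewOLM, and so on) that the upstream kubelet does not register; unknown names are warned about and skipped rather than failing startup. Below is a minimal, hypothetical sketch of that pattern. The gate names and the applyGates helper are illustrative, not the kubelet's actual registry or API.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// applyGates sets known gates from a comma-separated "Name=bool" spec.
// Unknown names only produce a warning, mirroring the log behavior above.
func applyGates(known map[string]bool, spec string) {
	for _, kv := range strings.Split(spec, ",") {
		name, val, ok := strings.Cut(kv, "=")
		if !ok {
			continue // malformed entry; real parsing is stricter
		}
		b, err := strconv.ParseBool(val)
		if err != nil {
			continue
		}
		if _, found := known[name]; !found {
			fmt.Printf("W unrecognized feature gate: %s\n", name)
			continue
		}
		known[name] = b
	}
}

func main() {
	// Trimmed stand-in for the kubelet's built-in gate registry.
	known := map[string]bool{"KMSv1": false, "CloudDualStackNodeIPs": false}
	applyGates(known, "KMSv1=true,OVNObservability=true")
	fmt.Println(known) // map[CloudDualStackNodeIPs:false KMSv1:true]
}
```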
Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.935310 4720 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.935314 4720 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.935318 4720 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.935322 4720 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.935326 4720 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.935331 4720 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.935335 4720 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.935339 4720 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935428 4720 flags.go:64] FLAG: --address="0.0.0.0" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935439 4720 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935447 4720 flags.go:64] FLAG: --anonymous-auth="true" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935453 4720 flags.go:64] FLAG: --application-metrics-count-limit="100" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935459 4720 flags.go:64] FLAG: --authentication-token-webhook="false" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935465 4720 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935471 4720 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935480 4720 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935486 4720 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935490 4720 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935495 4720 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935500 4720 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935505 4720 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935510 4720 flags.go:64] FLAG: --cgroup-root="" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935515 4720 flags.go:64] FLAG: --cgroups-per-qos="true" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935520 4720 flags.go:64] FLAG: --client-ca-file="" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935525 4720 flags.go:64] FLAG: --cloud-config="" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935530 4720 flags.go:64] FLAG: --cloud-provider="" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935536 4720 flags.go:64] FLAG: --cluster-dns="[]" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935541 4720 flags.go:64] FLAG: --cluster-domain="" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935547 4720 flags.go:64] 
FLAG: --config="/etc/kubernetes/kubelet.conf" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935552 4720 flags.go:64] FLAG: --config-dir="" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935556 4720 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935562 4720 flags.go:64] FLAG: --container-log-max-files="5" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935569 4720 flags.go:64] FLAG: --container-log-max-size="10Mi" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935574 4720 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935578 4720 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935585 4720 flags.go:64] FLAG: --containerd-namespace="k8s.io" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935590 4720 flags.go:64] FLAG: --contention-profiling="false" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935596 4720 flags.go:64] FLAG: --cpu-cfs-quota="true" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935600 4720 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935605 4720 flags.go:64] FLAG: --cpu-manager-policy="none" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935609 4720 flags.go:64] FLAG: --cpu-manager-policy-options="" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935615 4720 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935620 4720 flags.go:64] FLAG: --enable-controller-attach-detach="true" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935624 4720 flags.go:64] FLAG: --enable-debugging-handlers="true" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935629 4720 flags.go:64] FLAG: --enable-load-reader="false" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935634 4720 flags.go:64] FLAG: --enable-server="true" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935638 4720 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935648 4720 flags.go:64] FLAG: --event-burst="100" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935653 4720 flags.go:64] FLAG: --event-qps="50" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935657 4720 flags.go:64] FLAG: --event-storage-age-limit="default=0" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935661 4720 flags.go:64] FLAG: --event-storage-event-limit="default=0" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935666 4720 flags.go:64] FLAG: --eviction-hard="" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935671 4720 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935675 4720 flags.go:64] FLAG: --eviction-minimum-reclaim="" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935680 4720 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935684 4720 flags.go:64] FLAG: --eviction-soft="" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935688 4720 flags.go:64] FLAG: --eviction-soft-grace-period="" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935693 4720 flags.go:64] FLAG: --exit-on-lock-contention="false" Oct 13 17:24:14 crc kubenswrapper[4720]: 
I1013 17:24:14.935697 4720 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935702 4720 flags.go:64] FLAG: --experimental-mounter-path="" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935707 4720 flags.go:64] FLAG: --fail-cgroupv1="false" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935711 4720 flags.go:64] FLAG: --fail-swap-on="true" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935716 4720 flags.go:64] FLAG: --feature-gates="" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935723 4720 flags.go:64] FLAG: --file-check-frequency="20s" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935727 4720 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935732 4720 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935737 4720 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935741 4720 flags.go:64] FLAG: --healthz-port="10248" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935746 4720 flags.go:64] FLAG: --help="false" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935751 4720 flags.go:64] FLAG: --hostname-override="" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935755 4720 flags.go:64] FLAG: --housekeeping-interval="10s" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935760 4720 flags.go:64] FLAG: --http-check-frequency="20s" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935765 4720 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935770 4720 flags.go:64] FLAG: --image-credential-provider-config="" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935774 4720 flags.go:64] FLAG: --image-gc-high-threshold="85" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935779 4720 flags.go:64] FLAG: --image-gc-low-threshold="80" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935784 4720 flags.go:64] FLAG: --image-service-endpoint="" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935789 4720 flags.go:64] FLAG: --kernel-memcg-notification="false" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935794 4720 flags.go:64] FLAG: --kube-api-burst="100" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935801 4720 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935806 4720 flags.go:64] FLAG: --kube-api-qps="50" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935811 4720 flags.go:64] FLAG: --kube-reserved="" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935817 4720 flags.go:64] FLAG: --kube-reserved-cgroup="" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935822 4720 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935827 4720 flags.go:64] FLAG: --kubelet-cgroups="" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935832 4720 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935838 4720 flags.go:64] FLAG: --lock-file="" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935843 4720 flags.go:64] FLAG: --log-cadvisor-usage="false" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935848 4720 
flags.go:64] FLAG: --log-flush-frequency="5s" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935853 4720 flags.go:64] FLAG: --log-json-info-buffer-size="0" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935861 4720 flags.go:64] FLAG: --log-json-split-stream="false" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935874 4720 flags.go:64] FLAG: --log-text-info-buffer-size="0" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935880 4720 flags.go:64] FLAG: --log-text-split-stream="false" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935885 4720 flags.go:64] FLAG: --logging-format="text" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935891 4720 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935897 4720 flags.go:64] FLAG: --make-iptables-util-chains="true" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935901 4720 flags.go:64] FLAG: --manifest-url="" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935906 4720 flags.go:64] FLAG: --manifest-url-header="" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935913 4720 flags.go:64] FLAG: --max-housekeeping-interval="15s" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935918 4720 flags.go:64] FLAG: --max-open-files="1000000" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935924 4720 flags.go:64] FLAG: --max-pods="110" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935928 4720 flags.go:64] FLAG: --maximum-dead-containers="-1" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935933 4720 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935937 4720 flags.go:64] FLAG: --memory-manager-policy="None" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935941 4720 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935946 4720 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935950 4720 flags.go:64] FLAG: --node-ip="192.168.126.11" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935955 4720 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935965 4720 flags.go:64] FLAG: --node-status-max-images="50" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935970 4720 flags.go:64] FLAG: --node-status-update-frequency="10s" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935974 4720 flags.go:64] FLAG: --oom-score-adj="-999" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935981 4720 flags.go:64] FLAG: --pod-cidr="" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935985 4720 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935993 4720 flags.go:64] FLAG: --pod-manifest-path="" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.935998 4720 flags.go:64] FLAG: --pod-max-pids="-1" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.936002 4720 flags.go:64] FLAG: --pods-per-core="0" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.936007 4720 flags.go:64] FLAG: --port="10250" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 
17:24:14.936012 4720 flags.go:64] FLAG: --protect-kernel-defaults="false" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.936016 4720 flags.go:64] FLAG: --provider-id="" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.936021 4720 flags.go:64] FLAG: --qos-reserved="" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.936025 4720 flags.go:64] FLAG: --read-only-port="10255" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.936030 4720 flags.go:64] FLAG: --register-node="true" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.936034 4720 flags.go:64] FLAG: --register-schedulable="true" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.936039 4720 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.936046 4720 flags.go:64] FLAG: --registry-burst="10" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.936051 4720 flags.go:64] FLAG: --registry-qps="5" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.936056 4720 flags.go:64] FLAG: --reserved-cpus="" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.936060 4720 flags.go:64] FLAG: --reserved-memory="" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.936065 4720 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.936069 4720 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.936074 4720 flags.go:64] FLAG: --rotate-certificates="false" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.936079 4720 flags.go:64] FLAG: --rotate-server-certificates="false" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.936083 4720 flags.go:64] FLAG: --runonce="false" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.936088 4720 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.936093 4720 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.936098 4720 flags.go:64] FLAG: --seccomp-default="false" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.936103 4720 flags.go:64] FLAG: --serialize-image-pulls="true" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.936108 4720 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.936113 4720 flags.go:64] FLAG: --storage-driver-db="cadvisor" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.936118 4720 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.936122 4720 flags.go:64] FLAG: --storage-driver-password="root" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.936126 4720 flags.go:64] FLAG: --storage-driver-secure="false" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.936131 4720 flags.go:64] FLAG: --storage-driver-table="stats" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.936138 4720 flags.go:64] FLAG: --storage-driver-user="root" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.936142 4720 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.936146 4720 flags.go:64] FLAG: --sync-frequency="1m0s" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.936151 4720 flags.go:64] FLAG: --system-cgroups="" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.936155 4720 flags.go:64] FLAG: 
--system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.936162 4720 flags.go:64] FLAG: --system-reserved-cgroup="" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.936166 4720 flags.go:64] FLAG: --tls-cert-file="" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.936170 4720 flags.go:64] FLAG: --tls-cipher-suites="[]" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.936176 4720 flags.go:64] FLAG: --tls-min-version="" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.936180 4720 flags.go:64] FLAG: --tls-private-key-file="" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.936199 4720 flags.go:64] FLAG: --topology-manager-policy="none" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.936204 4720 flags.go:64] FLAG: --topology-manager-policy-options="" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.936208 4720 flags.go:64] FLAG: --topology-manager-scope="container" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.936213 4720 flags.go:64] FLAG: --v="2" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.936220 4720 flags.go:64] FLAG: --version="false" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.936226 4720 flags.go:64] FLAG: --vmodule="" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.936231 4720 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.936236 4720 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.936358 4720 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.936364 4720 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.936369 4720 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.936373 4720 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.936378 4720 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.936384 4720 feature_gate.go:330] unrecognized feature gate: NewOLM Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.936389 4720 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.936392 4720 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.936396 4720 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.936401 4720 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.936406 4720 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.936410 4720 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.936414 4720 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.936418 4720 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.936425 4720 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.936429 4720 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.936433 4720 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.936437 4720 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.936441 4720 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.936445 4720 feature_gate.go:330] unrecognized feature gate: OVNObservability Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.936450 4720 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.936454 4720 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.936458 4720 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.936462 4720 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.936466 4720 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.936469 4720 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.936473 4720 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.936478 4720 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.936489 4720 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.936494 4720 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.936498 4720 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.936502 4720 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.936506 4720 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.936510 4720 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.936514 4720 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.936517 4720 
feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.936521 4720 feature_gate.go:330] unrecognized feature gate: InsightsConfig Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.936525 4720 feature_gate.go:330] unrecognized feature gate: GatewayAPI Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.936529 4720 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.936533 4720 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.936537 4720 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.936542 4720 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.936546 4720 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.936552 4720 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.936557 4720 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.936560 4720 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.936566 4720 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.936570 4720 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.936575 4720 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.936580 4720 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.936585 4720 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.936589 4720 feature_gate.go:330] unrecognized feature gate: PlatformOperators Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.936594 4720 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.936599 4720 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.936604 4720 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.936609 4720 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.936614 4720 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.936618 4720 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.936623 4720 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.936627 4720 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.936633 4720 feature_gate.go:330] unrecognized feature gate: PinnedImages Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.936637 4720 feature_gate.go:330] unrecognized feature gate: SignatureStores Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.936642 4720 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.936646 4720 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.936651 4720 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.936655 4720 feature_gate.go:330] unrecognized feature gate: Example Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.936659 4720 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.936663 4720 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.936667 4720 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.936672 4720 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.936676 4720 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.936689 4720 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true 
VolumeAttributesClass:false]} Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.944222 4720 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.944254 4720 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944325 4720 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944332 4720 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944337 4720 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944341 4720 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944345 4720 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944350 4720 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944354 4720 feature_gate.go:330] unrecognized feature gate: GatewayAPI Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944358 4720 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944363 4720 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944370 4720 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944374 4720 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944378 4720 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944382 4720 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944387 4720 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944390 4720 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944394 4720 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944398 4720 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944402 4720 feature_gate.go:330] unrecognized feature gate: InsightsConfig Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944407 4720 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
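[Editor's note] The "feature gates: {map[...]}" line above is the effective state after explicit overrides (KMSv1=true, ValidatingAdmissionPolicy=true, CloudDualStackNodeIPs=true, DisableKubeletCloudCredentialProviders=true) are applied on top of built-in defaults; the same map is logged again below because the gates are re-applied per configuration source during startup. A trimmed sketch of that merge, with illustrative defaults only:

```go
package main

import "fmt"

func main() {
	// Stand-in defaults; the real registry is much larger.
	defaults := map[string]bool{
		"KMSv1":                     false,
		"ValidatingAdmissionPolicy": false,
		"NodeSwap":                  false,
	}
	overrides := map[string]bool{
		"KMSv1":                     true,
		"ValidatingAdmissionPolicy": true,
	}
	for k, v := range overrides {
		defaults[k] = v
	}
	// fmt prints maps in sorted key order, matching the log's formatting.
	fmt.Printf("feature gates: %v\n", defaults)
}
```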
Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944412 4720 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944416 4720 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944421 4720 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944425 4720 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944430 4720 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944434 4720 feature_gate.go:330] unrecognized feature gate: PinnedImages Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944438 4720 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944441 4720 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944445 4720 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944449 4720 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944453 4720 feature_gate.go:330] unrecognized feature gate: PlatformOperators Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944457 4720 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944462 4720 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944466 4720 feature_gate.go:330] unrecognized feature gate: OVNObservability Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944469 4720 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944474 4720 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944479 4720 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944483 4720 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944487 4720 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944491 4720 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944495 4720 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944499 4720 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944504 4720 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944508 4720 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944513 4720 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944517 4720 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944521 4720 feature_gate.go:330] unrecognized feature gate: NewOLM Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944525 4720 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944529 4720 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944533 4720 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944538 4720 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944542 4720 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944547 4720 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944552 4720 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944557 4720 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944561 4720 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944565 4720 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944569 4720 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944572 4720 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944576 4720 feature_gate.go:330] unrecognized feature gate: SignatureStores Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944580 4720 feature_gate.go:330] unrecognized feature gate: Example Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944584 4720 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944587 4720 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944591 4720 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944597 4720 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944601 4720 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944605 4720 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944609 4720 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 
17:24:14.944614 4720 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944618 4720 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944622 4720 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944627 4720 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.944634 4720 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944767 4720 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944775 4720 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944779 4720 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944784 4720 feature_gate.go:330] unrecognized feature gate: Example Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944788 4720 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944792 4720 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944796 4720 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944800 4720 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944805 4720 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944809 4720 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944813 4720 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944817 4720 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944821 4720 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944824 4720 feature_gate.go:330] unrecognized feature gate: OVNObservability Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944828 4720 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944832 4720 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944836 4720 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944840 4720 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944845 
4720 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944851 4720 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944856 4720 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944861 4720 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944866 4720 feature_gate.go:330] unrecognized feature gate: PlatformOperators Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944870 4720 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944874 4720 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944878 4720 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944882 4720 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944887 4720 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944891 4720 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944895 4720 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944899 4720 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944903 4720 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944907 4720 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944911 4720 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944915 4720 feature_gate.go:330] unrecognized feature gate: SignatureStores Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944919 4720 feature_gate.go:330] unrecognized feature gate: NewOLM Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944923 4720 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944927 4720 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944930 4720 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944934 4720 feature_gate.go:330] unrecognized feature gate: PinnedImages Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944938 4720 feature_gate.go:330] unrecognized feature gate: GatewayAPI Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944942 4720 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944945 4720 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944949 4720 feature_gate.go:330] 
unrecognized feature gate: SigstoreImageVerification Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944953 4720 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944958 4720 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944962 4720 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944967 4720 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944971 4720 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944975 4720 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944979 4720 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944983 4720 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944988 4720 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944991 4720 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944995 4720 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.944999 4720 feature_gate.go:330] unrecognized feature gate: InsightsConfig Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.945003 4720 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.945006 4720 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.945010 4720 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.945014 4720 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.945018 4720 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.945022 4720 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.945026 4720 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.945030 4720 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.945034 4720 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.945039 4720 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.945043 4720 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.945047 4720 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.945052 4720 
feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.945057 4720 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Oct 13 17:24:14 crc kubenswrapper[4720]: W1013 17:24:14.945062 4720 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.945069 4720 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.945292 4720 server.go:940] "Client rotation is on, will bootstrap in background" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.949627 4720 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.949701 4720 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.951367 4720 server.go:997] "Starting client certificate rotation" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.951388 4720 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.952326 4720 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-28 14:51:52.743646063 +0000 UTC Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.952491 4720 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 1821h27m37.791160723s for next certificate rotation Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.982970 4720 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Oct 13 17:24:14 crc kubenswrapper[4720]: I1013 17:24:14.984781 4720 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.004866 4720 log.go:25] "Validated CRI v1 runtime API" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.044867 4720 log.go:25] "Validated CRI v1 image API" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.046956 4720 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.053314 4720 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-10-13-17-19-03-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.053366 4720 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 
major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.070743 4720 manager.go:217] Machine: {Timestamp:2025-10-13 17:24:15.068361223 +0000 UTC m=+0.525611425 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654124544 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:00bb7c43-79d6-45b5-bd02-4b71a0ba6837 BootID:a530de0f-daad-4050-9522-69c64a451e75 Filesystems:[{Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:79:dc:6d Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:79:dc:6d Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:83:7c:b1 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:64:f1:2c Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:b0:58:d1 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:b8:b5:50 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:a2:b1:00:2b:c9:33 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:f6:45:cc:c4:87:8e Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654124544 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 
Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.071007 4720 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.071129 4720 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.072424 4720 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.072641 4720 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.072692 4720 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.073684 4720 topology_manager.go:138] "Creating topology manager with none policy" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.073707 4720 container_manager_linux.go:303] "Creating device plugin manager" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.074066 4720 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.074098 4720 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.076847 4720 state_mem.go:36] "Initialized new in-memory state store" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.076945 4720 server.go:1245] "Using root directory" path="/var/lib/kubelet" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.080371 4720 kubelet.go:418] "Attempting to sync node with API server" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.080414 4720 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" 
Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.080437 4720 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.080454 4720 kubelet.go:324] "Adding apiserver pod source" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.080467 4720 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.084921 4720 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.085906 4720 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Oct 13 17:24:15 crc kubenswrapper[4720]: W1013 17:24:15.089322 4720 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.17:6443: connect: connection refused Oct 13 17:24:15 crc kubenswrapper[4720]: E1013 17:24:15.089504 4720 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.17:6443: connect: connection refused" logger="UnhandledError" Oct 13 17:24:15 crc kubenswrapper[4720]: W1013 17:24:15.089326 4720 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.17:6443: connect: connection refused Oct 13 17:24:15 crc kubenswrapper[4720]: E1013 17:24:15.089577 4720 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.17:6443: connect: connection refused" logger="UnhandledError" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.089882 4720 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.092091 4720 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.092222 4720 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.092242 4720 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.092257 4720 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.092282 4720 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.092300 4720 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.092314 4720 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.092338 4720 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/downward-api" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.092356 4720 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.092374 4720 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.092432 4720 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.092446 4720 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.093408 4720 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.094155 4720 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.17:6443: connect: connection refused Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.095097 4720 server.go:1280] "Started kubelet" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.095322 4720 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.095316 4720 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.096117 4720 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 13 17:24:15 crc systemd[1]: Started Kubernetes Kubelet. Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.098794 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.098822 4720 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.099133 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 10:50:30.807849513 +0000 UTC Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.099235 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 1193h26m15.708619288s for next certificate rotation Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.099280 4720 volume_manager.go:287] "The desired_state_of_world populator starts" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.099295 4720 volume_manager.go:289] "Starting Kubelet Volume Manager" Oct 13 17:24:15 crc kubenswrapper[4720]: E1013 17:24:15.099311 4720 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.099440 4720 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Oct 13 17:24:15 crc kubenswrapper[4720]: W1013 17:24:15.100016 4720 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.17:6443: connect: connection refused Oct 13 17:24:15 crc kubenswrapper[4720]: E1013 17:24:15.100112 4720 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.17:6443: connect: connection refused" logger="UnhandledError" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.101245 4720 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.101277 4720 factory.go:55] Registering systemd factory Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.101291 4720 factory.go:221] Registration of the systemd container factory successfully Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.101635 4720 server.go:460] "Adding debug handlers to kubelet server" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.101969 4720 factory.go:153] Registering CRI-O factory Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.101988 4720 factory.go:221] Registration of the crio container factory successfully Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.102011 4720 factory.go:103] Registering Raw factory Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.102029 4720 manager.go:1196] Started watching for new ooms in manager Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.102624 4720 manager.go:319] Starting recovery of all containers Oct 13 17:24:15 crc kubenswrapper[4720]: E1013 17:24:15.102700 4720 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.17:6443: connect: connection refused" interval="200ms" Oct 13 17:24:15 crc kubenswrapper[4720]: E1013 17:24:15.106869 4720 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.17:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.186e1ce08bf8f0ee default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-10-13 17:24:15.095042286 +0000 UTC m=+0.552292448,LastTimestamp:2025-10-13 17:24:15.095042286 +0000 UTC m=+0.552292448,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.117373 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.117528 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.117548 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" 
volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.117569 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.117581 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.117597 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.117612 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.117628 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.117652 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.117666 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.117684 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.117697 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.117712 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.117726 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" 
volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.117748 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.117758 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.117818 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.117836 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.117852 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.117868 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.117884 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.117901 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.117914 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.117926 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.117942 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" 
volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.117964 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.117988 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.118009 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.118021 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.118035 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.118057 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.118295 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.118438 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.118485 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.118516 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.118574 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.118606 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.118631 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.118660 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.118695 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.118733 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.118759 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.118796 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.118836 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.118876 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.120718 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.120774 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" 
volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.120795 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.120837 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.120874 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.120895 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.120937 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.120991 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.121025 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.121060 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.121109 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.121141 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.121172 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.121244 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.121279 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.121308 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.121337 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.121365 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.121386 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.121406 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.121452 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.121474 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.121555 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.121586 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" 
volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.121605 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.121641 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.121677 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.121703 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.121737 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.121794 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.121836 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.121861 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.121884 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.121920 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.121950 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" 
volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.121984 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.122015 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.122043 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.122077 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.122108 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.122144 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.122231 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.122260 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.122288 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.122323 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.122355 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" 
volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.122406 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.122451 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.122532 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.122569 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.122598 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.122618 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.122641 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.122666 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.122708 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.122824 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.122856 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.122904 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.122923 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.123059 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.123079 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.123098 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.123117 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.123156 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.123212 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.123235 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.123305 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.123324 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.123386 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.123517 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.123566 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.123615 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.123639 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.123668 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.123714 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.123741 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.123825 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.123870 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.123959 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.124065 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.124105 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.124136 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.124654 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.125529 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.125569 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.125592 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.125613 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.125634 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.125656 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.125677 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.125697 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.125715 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.125736 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.125755 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.125796 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.125838 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.125860 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.125879 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.125898 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.125919 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.127555 4720 manager.go:324] Recovery completed Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.128078 4720 reconstruct.go:144] 
"Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.128132 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.128159 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.128263 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.128285 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.128305 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.128333 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.128355 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.128376 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.128398 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.128418 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" 
volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.128438 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.128458 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.128502 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.128524 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.128545 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.128566 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.128586 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.128607 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.128630 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.128649 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.128670 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" 
volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.128709 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.128740 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.128760 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.128780 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.128799 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.128821 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.128841 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.128862 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.128881 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.128904 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.128926 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" 
volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.128947 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.128967 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.128992 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.129011 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.129050 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.129069 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.129089 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.129108 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.129128 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.129150 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.129170 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.129212 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.129236 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.129256 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.129275 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.129296 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.129316 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.129336 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.129357 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.129376 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.129397 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.129418 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" 
volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.129436 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.129456 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.129475 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.129496 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.129515 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.129535 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.129554 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.129574 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.129594 4720 reconstruct.go:97] "Volume reconstruction finished" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.129608 4720 reconciler.go:26] "Reconciler: start to sync state" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.141097 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.143003 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.143285 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.143340 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.144003 4720 cpu_manager.go:225] "Starting CPU manager" policy="none" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.144023 4720 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.144048 4720 state_mem.go:36] "Initialized new in-memory state store" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.163159 4720 policy_none.go:49] "None policy: Start" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.163881 4720 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.164578 4720 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.164627 4720 state_mem.go:35] "Initializing new in-memory state store" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.166707 4720 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.166811 4720 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.166890 4720 kubelet.go:2335] "Starting kubelet main sync loop" Oct 13 17:24:15 crc kubenswrapper[4720]: E1013 17:24:15.167030 4720 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 13 17:24:15 crc kubenswrapper[4720]: W1013 17:24:15.167630 4720 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.17:6443: connect: connection refused Oct 13 17:24:15 crc kubenswrapper[4720]: E1013 17:24:15.167733 4720 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.17:6443: connect: connection refused" logger="UnhandledError" Oct 13 17:24:15 crc kubenswrapper[4720]: E1013 17:24:15.200060 4720 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.208087 4720 manager.go:334] "Starting Device Plugin manager" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.208170 4720 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.208217 4720 server.go:79] "Starting device plugin registration server" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.208877 4720 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.208908 4720 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.209063 4720 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.209168 4720 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.209177 
4720 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 13 17:24:15 crc kubenswrapper[4720]: E1013 17:24:15.220129 4720 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.267447 4720 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.267647 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.270409 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.270460 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.270476 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.270685 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.271048 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.271128 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.272055 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.272114 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.272132 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.272271 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.272304 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.272324 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.272405 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.272899 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.273013 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.274824 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.274881 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.274894 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.274905 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.274952 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.274965 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.275275 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.275769 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.276004 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.276530 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.276566 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.276577 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.276784 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.277473 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.277589 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.278164 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.278237 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.278249 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.278642 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.278727 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.278736 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.278790 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.278877 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.280409 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.280435 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.280475 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.280487 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.280437 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.280544 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:15 crc kubenswrapper[4720]: E1013 17:24:15.304241 4720 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.17:6443: connect: connection refused" interval="400ms" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.309274 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.310416 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.310451 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.310466 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.310492 4720 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 13 17:24:15 crc kubenswrapper[4720]: E1013 17:24:15.310937 4720 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.17:6443: connect: connection refused" node="crc" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.332324 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") 
" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.332386 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.332449 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.332500 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.332541 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.332564 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.332597 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.332719 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.332867 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.332967 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.333052 4720 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.333101 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.333234 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.333312 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.333364 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.434593 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.434652 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.434679 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.434705 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.434730 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.434751 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.434768 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.434786 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.434802 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.434808 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.434836 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.434863 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.434914 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.434936 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.434973 
4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.434959 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.434989 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.435008 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.435014 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.434960 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.435043 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.435001 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.435048 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.435069 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.435097 4720 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.435142 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.435167 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.435502 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.435532 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.435560 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.511552 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.513315 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.513380 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.513403 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.513446 4720 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 13 17:24:15 crc kubenswrapper[4720]: E1013 17:24:15.514163 4720 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.17:6443: connect: connection refused" node="crc" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.618166 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.629833 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.663139 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 13 17:24:15 crc kubenswrapper[4720]: W1013 17:24:15.665267 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-027863e3af4bd5b0c0df47c78b4330372eb7a360b3ec9d5caa795f1b5fc5a6c4 WatchSource:0}: Error finding container 027863e3af4bd5b0c0df47c78b4330372eb7a360b3ec9d5caa795f1b5fc5a6c4: Status 404 returned error can't find the container with id 027863e3af4bd5b0c0df47c78b4330372eb7a360b3ec9d5caa795f1b5fc5a6c4 Oct 13 17:24:15 crc kubenswrapper[4720]: W1013 17:24:15.672512 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-61dec44816bb969c61b5f1f83934a40963e50250da60ee976cd86bbdad837efd WatchSource:0}: Error finding container 61dec44816bb969c61b5f1f83934a40963e50250da60ee976cd86bbdad837efd: Status 404 returned error can't find the container with id 61dec44816bb969c61b5f1f83934a40963e50250da60ee976cd86bbdad837efd Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.672622 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.678773 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 13 17:24:15 crc kubenswrapper[4720]: E1013 17:24:15.704820 4720 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.17:6443: connect: connection refused" interval="800ms" Oct 13 17:24:15 crc kubenswrapper[4720]: W1013 17:24:15.711693 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-d0e70c2fae192d4918f25ecfd6e97480aa1e8a5f71c70b375107168cdf8cb792 WatchSource:0}: Error finding container d0e70c2fae192d4918f25ecfd6e97480aa1e8a5f71c70b375107168cdf8cb792: Status 404 returned error can't find the container with id d0e70c2fae192d4918f25ecfd6e97480aa1e8a5f71c70b375107168cdf8cb792 Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.914562 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.916594 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.916643 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.916653 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:15 crc kubenswrapper[4720]: I1013 17:24:15.916677 4720 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 13 17:24:15 crc kubenswrapper[4720]: E1013 17:24:15.917262 4720 kubelet_node_status.go:99] "Unable to register 
node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.17:6443: connect: connection refused" node="crc" Oct 13 17:24:16 crc kubenswrapper[4720]: I1013 17:24:16.094896 4720 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.17:6443: connect: connection refused Oct 13 17:24:16 crc kubenswrapper[4720]: W1013 17:24:16.150040 4720 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.17:6443: connect: connection refused Oct 13 17:24:16 crc kubenswrapper[4720]: E1013 17:24:16.150145 4720 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.17:6443: connect: connection refused" logger="UnhandledError" Oct 13 17:24:16 crc kubenswrapper[4720]: I1013 17:24:16.171296 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"027863e3af4bd5b0c0df47c78b4330372eb7a360b3ec9d5caa795f1b5fc5a6c4"} Oct 13 17:24:16 crc kubenswrapper[4720]: I1013 17:24:16.172278 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"d0e70c2fae192d4918f25ecfd6e97480aa1e8a5f71c70b375107168cdf8cb792"} Oct 13 17:24:16 crc kubenswrapper[4720]: I1013 17:24:16.173759 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"c058857ed29c976fcfd93a373f9cf9cf4aa1efbc8bd7e1be85efbca72230c19e"} Oct 13 17:24:16 crc kubenswrapper[4720]: I1013 17:24:16.174839 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"6f4f49eb6071533636b329abbf75ab9b34a8cf92ad052f400ab34c72a5248a60"} Oct 13 17:24:16 crc kubenswrapper[4720]: I1013 17:24:16.175865 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"61dec44816bb969c61b5f1f83934a40963e50250da60ee976cd86bbdad837efd"} Oct 13 17:24:16 crc kubenswrapper[4720]: E1013 17:24:16.266954 4720 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.17:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.186e1ce08bf8f0ee default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-10-13 17:24:15.095042286 +0000 UTC m=+0.552292448,LastTimestamp:2025-10-13 17:24:15.095042286 +0000 UTC m=+0.552292448,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 
+0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Oct 13 17:24:16 crc kubenswrapper[4720]: W1013 17:24:16.419973 4720 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.17:6443: connect: connection refused Oct 13 17:24:16 crc kubenswrapper[4720]: E1013 17:24:16.420290 4720 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.17:6443: connect: connection refused" logger="UnhandledError" Oct 13 17:24:16 crc kubenswrapper[4720]: W1013 17:24:16.426035 4720 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.17:6443: connect: connection refused Oct 13 17:24:16 crc kubenswrapper[4720]: E1013 17:24:16.426103 4720 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.17:6443: connect: connection refused" logger="UnhandledError" Oct 13 17:24:16 crc kubenswrapper[4720]: E1013 17:24:16.506166 4720 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.17:6443: connect: connection refused" interval="1.6s" Oct 13 17:24:16 crc kubenswrapper[4720]: W1013 17:24:16.536555 4720 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.17:6443: connect: connection refused Oct 13 17:24:16 crc kubenswrapper[4720]: E1013 17:24:16.536655 4720 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.17:6443: connect: connection refused" logger="UnhandledError" Oct 13 17:24:16 crc kubenswrapper[4720]: I1013 17:24:16.717606 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 13 17:24:16 crc kubenswrapper[4720]: I1013 17:24:16.719234 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:16 crc kubenswrapper[4720]: I1013 17:24:16.719293 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:16 crc kubenswrapper[4720]: I1013 17:24:16.719314 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:16 crc kubenswrapper[4720]: I1013 17:24:16.719357 4720 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 13 17:24:16 crc kubenswrapper[4720]: E1013 17:24:16.719962 4720 kubelet_node_status.go:99] "Unable to register node 
with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.17:6443: connect: connection refused" node="crc" Oct 13 17:24:17 crc kubenswrapper[4720]: I1013 17:24:17.095137 4720 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.17:6443: connect: connection refused Oct 13 17:24:17 crc kubenswrapper[4720]: I1013 17:24:17.182227 4720 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="00ad2086cbe25df22e52552acfb410e252eb17a610267b47eb7cfc5dced1c9ff" exitCode=0 Oct 13 17:24:17 crc kubenswrapper[4720]: I1013 17:24:17.182303 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"00ad2086cbe25df22e52552acfb410e252eb17a610267b47eb7cfc5dced1c9ff"} Oct 13 17:24:17 crc kubenswrapper[4720]: I1013 17:24:17.182572 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 13 17:24:17 crc kubenswrapper[4720]: I1013 17:24:17.184451 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:17 crc kubenswrapper[4720]: I1013 17:24:17.184505 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:17 crc kubenswrapper[4720]: I1013 17:24:17.184523 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:17 crc kubenswrapper[4720]: I1013 17:24:17.185771 4720 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="94a4c50dd473ea159477a3032070031c70d5a9217c054f826e3c0af6ae4df563" exitCode=0 Oct 13 17:24:17 crc kubenswrapper[4720]: I1013 17:24:17.185853 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"94a4c50dd473ea159477a3032070031c70d5a9217c054f826e3c0af6ae4df563"} Oct 13 17:24:17 crc kubenswrapper[4720]: I1013 17:24:17.185936 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 13 17:24:17 crc kubenswrapper[4720]: I1013 17:24:17.186744 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 13 17:24:17 crc kubenswrapper[4720]: I1013 17:24:17.187096 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:17 crc kubenswrapper[4720]: I1013 17:24:17.187137 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:17 crc kubenswrapper[4720]: I1013 17:24:17.187154 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:17 crc kubenswrapper[4720]: I1013 17:24:17.187519 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:17 crc kubenswrapper[4720]: I1013 17:24:17.187567 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:17 crc kubenswrapper[4720]: I1013 17:24:17.187585 4720 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:17 crc kubenswrapper[4720]: I1013 17:24:17.189571 4720 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="19c59ec68cb6306af37b90cd4583f5a1a266e93f77566d756223eecf7cdadb76" exitCode=0 Oct 13 17:24:17 crc kubenswrapper[4720]: I1013 17:24:17.189689 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 13 17:24:17 crc kubenswrapper[4720]: I1013 17:24:17.189702 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"19c59ec68cb6306af37b90cd4583f5a1a266e93f77566d756223eecf7cdadb76"} Oct 13 17:24:17 crc kubenswrapper[4720]: I1013 17:24:17.190660 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:17 crc kubenswrapper[4720]: I1013 17:24:17.190684 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:17 crc kubenswrapper[4720]: I1013 17:24:17.190693 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:17 crc kubenswrapper[4720]: I1013 17:24:17.191520 4720 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="18ddea8ad4714addb2f8431d0802d32a36f3d823bb123d4546bc6de1a3a017a9" exitCode=0 Oct 13 17:24:17 crc kubenswrapper[4720]: I1013 17:24:17.191578 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"18ddea8ad4714addb2f8431d0802d32a36f3d823bb123d4546bc6de1a3a017a9"} Oct 13 17:24:17 crc kubenswrapper[4720]: I1013 17:24:17.191660 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 13 17:24:17 crc kubenswrapper[4720]: I1013 17:24:17.192965 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:17 crc kubenswrapper[4720]: I1013 17:24:17.193011 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:17 crc kubenswrapper[4720]: I1013 17:24:17.193030 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:17 crc kubenswrapper[4720]: I1013 17:24:17.195260 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"32d8f079c2674e0c6bea1837755db23bc3205da6a01ac1896d6293226d9c6fb6"} Oct 13 17:24:17 crc kubenswrapper[4720]: I1013 17:24:17.195284 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"3ec64662f56c2609ebac1383a78840ba4e0d9fb1fe10e666169285d3852e6b40"} Oct 13 17:24:17 crc kubenswrapper[4720]: I1013 17:24:17.195295 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"435ba16189d6d19892aaab158512c1a84a72103779e3cc4b2c634de344583c89"} Oct 13 17:24:17 crc kubenswrapper[4720]: I1013 17:24:17.195304 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e566f3b66d404dd8da93bba1745b9e67cd63532330591f5b735f6826c07eae2c"} Oct 13 17:24:17 crc kubenswrapper[4720]: I1013 17:24:17.195372 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 13 17:24:17 crc kubenswrapper[4720]: I1013 17:24:17.195941 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:17 crc kubenswrapper[4720]: I1013 17:24:17.195964 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:17 crc kubenswrapper[4720]: I1013 17:24:17.195971 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:17 crc kubenswrapper[4720]: I1013 17:24:17.636079 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 13 17:24:18 crc kubenswrapper[4720]: I1013 17:24:18.095133 4720 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.17:6443: connect: connection refused Oct 13 17:24:18 crc kubenswrapper[4720]: E1013 17:24:18.107045 4720 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.17:6443: connect: connection refused" interval="3.2s" Oct 13 17:24:18 crc kubenswrapper[4720]: I1013 17:24:18.203050 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"591ebf9a13f38e2f458aa34584be8fabc9115335a04af437a2291e7980a903ec"} Oct 13 17:24:18 crc kubenswrapper[4720]: I1013 17:24:18.203128 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 13 17:24:18 crc kubenswrapper[4720]: I1013 17:24:18.204171 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:18 crc kubenswrapper[4720]: I1013 17:24:18.204225 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:18 crc kubenswrapper[4720]: I1013 17:24:18.204240 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:18 crc kubenswrapper[4720]: I1013 17:24:18.207512 4720 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="41221277cc046beb4578e4f6a0c7030df64342da1be667a920080a429836c657" exitCode=0 Oct 13 17:24:18 crc kubenswrapper[4720]: I1013 17:24:18.207633 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 13 17:24:18 crc kubenswrapper[4720]: I1013 17:24:18.206129 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"cd536f783e3f8f974f25772994c3d00a01c2ba66997a6e06236f1b83b890f8f4"} Oct 13 17:24:18 crc kubenswrapper[4720]: I1013 17:24:18.207764 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b7d875b0d7eb9f13bc697597a33f7196598ae679980b13842e92acf20477526f"} Oct 13 17:24:18 crc kubenswrapper[4720]: I1013 17:24:18.207782 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7e18d0e9df43d366a6e7088cc3d5bdc794882828f60f008eaf29e788e0141ac2"} Oct 13 17:24:18 crc kubenswrapper[4720]: I1013 17:24:18.207793 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"88b8a46e2e2c2342bd22a4a9b42cadd0188ec83f23815773f6ffb46be79fdf83"} Oct 13 17:24:18 crc kubenswrapper[4720]: I1013 17:24:18.207807 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"41221277cc046beb4578e4f6a0c7030df64342da1be667a920080a429836c657"} Oct 13 17:24:18 crc kubenswrapper[4720]: I1013 17:24:18.208877 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:18 crc kubenswrapper[4720]: I1013 17:24:18.208903 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:18 crc kubenswrapper[4720]: I1013 17:24:18.208912 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:18 crc kubenswrapper[4720]: I1013 17:24:18.212267 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"d586c8b48f5d2ca87e3a758dd265611f11678f6fa8dd1118a859923c331c4f67"} Oct 13 17:24:18 crc kubenswrapper[4720]: I1013 17:24:18.212322 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"3f9d9053346b85f12f6cf781f1699dbf8aea670b7e1cc5f4fdc1ffac2c969712"} Oct 13 17:24:18 crc kubenswrapper[4720]: I1013 17:24:18.212333 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"6e96fe80c0a88f19c6c5705c7fd945c3282a104b7767db5c217b6364c045d649"} Oct 13 17:24:18 crc kubenswrapper[4720]: I1013 17:24:18.212338 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 13 17:24:18 crc kubenswrapper[4720]: I1013 17:24:18.212390 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 13 17:24:18 crc kubenswrapper[4720]: I1013 17:24:18.213217 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:18 crc kubenswrapper[4720]: I1013 17:24:18.213236 4720 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:18 crc kubenswrapper[4720]: I1013 17:24:18.213247 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:18 crc kubenswrapper[4720]: I1013 17:24:18.213331 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:18 crc kubenswrapper[4720]: I1013 17:24:18.213365 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:18 crc kubenswrapper[4720]: I1013 17:24:18.213377 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:18 crc kubenswrapper[4720]: I1013 17:24:18.245173 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 13 17:24:18 crc kubenswrapper[4720]: I1013 17:24:18.320061 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 13 17:24:18 crc kubenswrapper[4720]: I1013 17:24:18.321439 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:18 crc kubenswrapper[4720]: I1013 17:24:18.321497 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:18 crc kubenswrapper[4720]: I1013 17:24:18.321508 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:18 crc kubenswrapper[4720]: I1013 17:24:18.321532 4720 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 13 17:24:18 crc kubenswrapper[4720]: E1013 17:24:18.323618 4720 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.17:6443: connect: connection refused" node="crc" Oct 13 17:24:18 crc kubenswrapper[4720]: W1013 17:24:18.493722 4720 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.17:6443: connect: connection refused Oct 13 17:24:18 crc kubenswrapper[4720]: E1013 17:24:18.493827 4720 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.17:6443: connect: connection refused" logger="UnhandledError" Oct 13 17:24:19 crc kubenswrapper[4720]: I1013 17:24:19.219508 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b7977e20d8558810fb9f8f827830d0378de7d15c71340d396a95b506902c0962"} Oct 13 17:24:19 crc kubenswrapper[4720]: I1013 17:24:19.220377 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 13 17:24:19 crc kubenswrapper[4720]: I1013 17:24:19.221755 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:19 crc kubenswrapper[4720]: I1013 17:24:19.221819 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 13 17:24:19 crc kubenswrapper[4720]: I1013 17:24:19.221838 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:19 crc kubenswrapper[4720]: I1013 17:24:19.223287 4720 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="7d443225a5db6a200176bd16ba94dd92c1e81eef85d61cff623cfe7959077ef6" exitCode=0 Oct 13 17:24:19 crc kubenswrapper[4720]: I1013 17:24:19.223344 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"7d443225a5db6a200176bd16ba94dd92c1e81eef85d61cff623cfe7959077ef6"} Oct 13 17:24:19 crc kubenswrapper[4720]: I1013 17:24:19.223478 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 13 17:24:19 crc kubenswrapper[4720]: I1013 17:24:19.223559 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 13 17:24:19 crc kubenswrapper[4720]: I1013 17:24:19.223745 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 13 17:24:19 crc kubenswrapper[4720]: I1013 17:24:19.224476 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 13 17:24:19 crc kubenswrapper[4720]: I1013 17:24:19.225338 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:19 crc kubenswrapper[4720]: I1013 17:24:19.225362 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:19 crc kubenswrapper[4720]: I1013 17:24:19.225372 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:19 crc kubenswrapper[4720]: I1013 17:24:19.225716 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:19 crc kubenswrapper[4720]: I1013 17:24:19.225741 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:19 crc kubenswrapper[4720]: I1013 17:24:19.225820 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:19 crc kubenswrapper[4720]: I1013 17:24:19.225936 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:19 crc kubenswrapper[4720]: I1013 17:24:19.225960 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:19 crc kubenswrapper[4720]: I1013 17:24:19.225864 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:19 crc kubenswrapper[4720]: I1013 17:24:19.226021 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:19 crc kubenswrapper[4720]: I1013 17:24:19.225894 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:19 crc kubenswrapper[4720]: I1013 17:24:19.226112 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:20 crc kubenswrapper[4720]: I1013 17:24:20.233970 4720 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"fc60672e282de4d3d6a2d8fd64306ec0fc0d032ade655f88afb0c0b9e0a8e589"} Oct 13 17:24:20 crc kubenswrapper[4720]: I1013 17:24:20.234044 4720 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 17:24:20 crc kubenswrapper[4720]: I1013 17:24:20.234112 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 13 17:24:20 crc kubenswrapper[4720]: I1013 17:24:20.234045 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"2507a30017fbd9947ca456b84812f0b26baabf572768870f5a34b3c7699443cc"} Oct 13 17:24:20 crc kubenswrapper[4720]: I1013 17:24:20.234262 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b452d113d48729b6d4ab59f80138b15e7cb16cc16eef4ec9916901a067986d07"} Oct 13 17:24:20 crc kubenswrapper[4720]: I1013 17:24:20.234183 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 13 17:24:20 crc kubenswrapper[4720]: I1013 17:24:20.235041 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:20 crc kubenswrapper[4720]: I1013 17:24:20.235075 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:20 crc kubenswrapper[4720]: I1013 17:24:20.235088 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:20 crc kubenswrapper[4720]: I1013 17:24:20.235492 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:20 crc kubenswrapper[4720]: I1013 17:24:20.235542 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:20 crc kubenswrapper[4720]: I1013 17:24:20.235555 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:21 crc kubenswrapper[4720]: I1013 17:24:21.244717 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"06eea6556daedf913d87b8b0527b337070528b07cc026de878b4df332598a2d7"} Oct 13 17:24:21 crc kubenswrapper[4720]: I1013 17:24:21.244800 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f9cf903094dae4e559bcef3e269240670b5d4693344d600779c394ce1e039b6a"} Oct 13 17:24:21 crc kubenswrapper[4720]: I1013 17:24:21.244945 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 13 17:24:21 crc kubenswrapper[4720]: I1013 17:24:21.246401 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:21 crc kubenswrapper[4720]: I1013 17:24:21.246463 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:21 crc kubenswrapper[4720]: I1013 17:24:21.246490 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Oct 13 17:24:21 crc kubenswrapper[4720]: I1013 17:24:21.401687 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 13 17:24:21 crc kubenswrapper[4720]: I1013 17:24:21.401910 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 13 17:24:21 crc kubenswrapper[4720]: I1013 17:24:21.403785 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:21 crc kubenswrapper[4720]: I1013 17:24:21.403819 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:21 crc kubenswrapper[4720]: I1013 17:24:21.403829 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:21 crc kubenswrapper[4720]: I1013 17:24:21.412713 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 13 17:24:21 crc kubenswrapper[4720]: I1013 17:24:21.524631 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 13 17:24:21 crc kubenswrapper[4720]: I1013 17:24:21.526520 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:21 crc kubenswrapper[4720]: I1013 17:24:21.526569 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:21 crc kubenswrapper[4720]: I1013 17:24:21.526588 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:21 crc kubenswrapper[4720]: I1013 17:24:21.526618 4720 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 13 17:24:21 crc kubenswrapper[4720]: I1013 17:24:21.847007 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 13 17:24:21 crc kubenswrapper[4720]: I1013 17:24:21.847295 4720 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 17:24:21 crc kubenswrapper[4720]: I1013 17:24:21.847354 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 13 17:24:21 crc kubenswrapper[4720]: I1013 17:24:21.848942 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:21 crc kubenswrapper[4720]: I1013 17:24:21.849002 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:21 crc kubenswrapper[4720]: I1013 17:24:21.849025 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:22 crc kubenswrapper[4720]: I1013 17:24:22.248048 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 13 17:24:22 crc kubenswrapper[4720]: I1013 17:24:22.248053 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 13 17:24:22 crc kubenswrapper[4720]: I1013 17:24:22.249511 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:22 crc kubenswrapper[4720]: I1013 17:24:22.249565 4720 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:22 crc kubenswrapper[4720]: I1013 17:24:22.249583 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:22 crc kubenswrapper[4720]: I1013 17:24:22.249618 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:22 crc kubenswrapper[4720]: I1013 17:24:22.249659 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:22 crc kubenswrapper[4720]: I1013 17:24:22.249681 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:22 crc kubenswrapper[4720]: I1013 17:24:22.370981 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 13 17:24:22 crc kubenswrapper[4720]: I1013 17:24:22.371287 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 13 17:24:22 crc kubenswrapper[4720]: I1013 17:24:22.372744 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:22 crc kubenswrapper[4720]: I1013 17:24:22.372822 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:22 crc kubenswrapper[4720]: I1013 17:24:22.372854 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:22 crc kubenswrapper[4720]: I1013 17:24:22.395731 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 13 17:24:23 crc kubenswrapper[4720]: I1013 17:24:23.090840 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Oct 13 17:24:23 crc kubenswrapper[4720]: I1013 17:24:23.250153 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 13 17:24:23 crc kubenswrapper[4720]: I1013 17:24:23.250271 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 13 17:24:23 crc kubenswrapper[4720]: I1013 17:24:23.251536 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:23 crc kubenswrapper[4720]: I1013 17:24:23.251571 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:23 crc kubenswrapper[4720]: I1013 17:24:23.251590 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:23 crc kubenswrapper[4720]: I1013 17:24:23.251891 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:23 crc kubenswrapper[4720]: I1013 17:24:23.251962 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:23 crc kubenswrapper[4720]: I1013 17:24:23.251987 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:23 crc kubenswrapper[4720]: I1013 17:24:23.279825 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 13 17:24:23 crc kubenswrapper[4720]: I1013 17:24:23.383896 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 13 17:24:23 crc kubenswrapper[4720]: I1013 17:24:23.384171 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 13 17:24:23 crc kubenswrapper[4720]: I1013 17:24:23.385911 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:23 crc kubenswrapper[4720]: I1013 17:24:23.385978 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:23 crc kubenswrapper[4720]: I1013 17:24:23.385992 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:24 crc kubenswrapper[4720]: I1013 17:24:24.252920 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 13 17:24:24 crc kubenswrapper[4720]: I1013 17:24:24.254381 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:24 crc kubenswrapper[4720]: I1013 17:24:24.254440 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:24 crc kubenswrapper[4720]: I1013 17:24:24.254466 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:25 crc kubenswrapper[4720]: E1013 17:24:25.220277 4720 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Oct 13 17:24:26 crc kubenswrapper[4720]: I1013 17:24:26.280502 4720 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Oct 13 17:24:26 crc kubenswrapper[4720]: I1013 17:24:26.280593 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Oct 13 17:24:27 crc kubenswrapper[4720]: I1013 17:24:27.918303 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Oct 13 17:24:27 crc kubenswrapper[4720]: I1013 17:24:27.918516 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 13 17:24:27 crc kubenswrapper[4720]: I1013 17:24:27.919975 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:27 crc kubenswrapper[4720]: I1013 17:24:27.920037 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:27 crc kubenswrapper[4720]: I1013 17:24:27.920055 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:28 crc 
kubenswrapper[4720]: W1013 17:24:28.837628 4720 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Oct 13 17:24:28 crc kubenswrapper[4720]: I1013 17:24:28.837744 4720 trace.go:236] Trace[425410118]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (13-Oct-2025 17:24:18.835) (total time: 10002ms): Oct 13 17:24:28 crc kubenswrapper[4720]: Trace[425410118]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (17:24:28.837) Oct 13 17:24:28 crc kubenswrapper[4720]: Trace[425410118]: [10.002592074s] [10.002592074s] END Oct 13 17:24:28 crc kubenswrapper[4720]: E1013 17:24:28.837770 4720 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Oct 13 17:24:29 crc kubenswrapper[4720]: I1013 17:24:29.096306 4720 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Oct 13 17:24:29 crc kubenswrapper[4720]: W1013 17:24:29.233616 4720 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Oct 13 17:24:29 crc kubenswrapper[4720]: I1013 17:24:29.233707 4720 trace.go:236] Trace[769281961]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (13-Oct-2025 17:24:19.232) (total time: 10001ms): Oct 13 17:24:29 crc kubenswrapper[4720]: Trace[769281961]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (17:24:29.233) Oct 13 17:24:29 crc kubenswrapper[4720]: Trace[769281961]: [10.001445633s] [10.001445633s] END Oct 13 17:24:29 crc kubenswrapper[4720]: E1013 17:24:29.233725 4720 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Oct 13 17:24:29 crc kubenswrapper[4720]: W1013 17:24:29.617719 4720 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout Oct 13 17:24:29 crc kubenswrapper[4720]: I1013 17:24:29.617853 4720 trace.go:236] Trace[1299253202]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (13-Oct-2025 17:24:19.614) (total time: 10003ms): Oct 13 17:24:29 crc kubenswrapper[4720]: Trace[1299253202]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (17:24:29.617) Oct 13 17:24:29 crc 
kubenswrapper[4720]: Trace[1299253202]: [10.003038165s] [10.003038165s] END Oct 13 17:24:29 crc kubenswrapper[4720]: E1013 17:24:29.617886 4720 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Oct 13 17:24:29 crc kubenswrapper[4720]: I1013 17:24:29.851420 4720 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found]","reason":"Forbidden","details":{},"code":403} Oct 13 17:24:29 crc kubenswrapper[4720]: I1013 17:24:29.851497 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Oct 13 17:24:29 crc kubenswrapper[4720]: I1013 17:24:29.863453 4720 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found]","reason":"Forbidden","details":{},"code":403} Oct 13 17:24:29 crc kubenswrapper[4720]: I1013 17:24:29.863523 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Oct 13 17:24:31 crc kubenswrapper[4720]: I1013 17:24:31.853304 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 13 17:24:31 crc kubenswrapper[4720]: I1013 17:24:31.853488 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 13 17:24:31 crc kubenswrapper[4720]: I1013 17:24:31.854738 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:31 crc kubenswrapper[4720]: I1013 17:24:31.854778 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:31 crc kubenswrapper[4720]: I1013 17:24:31.854790 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:31 crc kubenswrapper[4720]: I1013 17:24:31.859812 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 13 17:24:32 crc kubenswrapper[4720]: I1013 17:24:32.273915 4720 kubelet_node_status.go:401] "Setting node annotation to 
enable volume controller attach/detach" Oct 13 17:24:32 crc kubenswrapper[4720]: I1013 17:24:32.275001 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:32 crc kubenswrapper[4720]: I1013 17:24:32.275039 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:32 crc kubenswrapper[4720]: I1013 17:24:32.275052 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:32 crc kubenswrapper[4720]: I1013 17:24:32.400126 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 13 17:24:32 crc kubenswrapper[4720]: I1013 17:24:32.400385 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 13 17:24:32 crc kubenswrapper[4720]: I1013 17:24:32.401703 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:32 crc kubenswrapper[4720]: I1013 17:24:32.401753 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:32 crc kubenswrapper[4720]: I1013 17:24:32.401771 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:33 crc kubenswrapper[4720]: I1013 17:24:33.374462 4720 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Oct 13 17:24:33 crc kubenswrapper[4720]: I1013 17:24:33.832085 4720 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Oct 13 17:24:34 crc kubenswrapper[4720]: I1013 17:24:34.659154 4720 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Oct 13 17:24:34 crc kubenswrapper[4720]: E1013 17:24:34.841089 4720 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Oct 13 17:24:34 crc kubenswrapper[4720]: I1013 17:24:34.843598 4720 trace.go:236] Trace[1540678573]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (13-Oct-2025 17:24:22.853) (total time: 11990ms): Oct 13 17:24:34 crc kubenswrapper[4720]: Trace[1540678573]: ---"Objects listed" error: 11990ms (17:24:34.843) Oct 13 17:24:34 crc kubenswrapper[4720]: Trace[1540678573]: [11.990295287s] [11.990295287s] END Oct 13 17:24:34 crc kubenswrapper[4720]: I1013 17:24:34.843619 4720 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Oct 13 17:24:34 crc kubenswrapper[4720]: I1013 17:24:34.844321 4720 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Oct 13 17:24:34 crc kubenswrapper[4720]: E1013 17:24:34.846307 4720 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Oct 13 17:24:34 crc kubenswrapper[4720]: I1013 17:24:34.892644 4720 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 
192.168.126.11:53846->192.168.126.11:17697: read: connection reset by peer" start-of-body= Oct 13 17:24:34 crc kubenswrapper[4720]: I1013 17:24:34.892713 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:53846->192.168.126.11:17697: read: connection reset by peer" Oct 13 17:24:34 crc kubenswrapper[4720]: I1013 17:24:34.892738 4720 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:53854->192.168.126.11:17697: read: connection reset by peer" start-of-body= Oct 13 17:24:34 crc kubenswrapper[4720]: I1013 17:24:34.892829 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:53854->192.168.126.11:17697: read: connection reset by peer" Oct 13 17:24:34 crc kubenswrapper[4720]: I1013 17:24:34.893144 4720 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Oct 13 17:24:34 crc kubenswrapper[4720]: I1013 17:24:34.893224 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Oct 13 17:24:34 crc kubenswrapper[4720]: I1013 17:24:34.893621 4720 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Oct 13 17:24:34 crc kubenswrapper[4720]: I1013 17:24:34.893845 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.090508 4720 apiserver.go:52] "Watching apiserver" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.092786 4720 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.093037 4720 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.093486 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.093570 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.093585 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.093660 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.093743 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 13 17:24:35 crc kubenswrapper[4720]: E1013 17:24:35.093839 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.093847 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 17:24:35 crc kubenswrapper[4720]: E1013 17:24:35.093987 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 13 17:24:35 crc kubenswrapper[4720]: E1013 17:24:35.094255 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.098644 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.098711 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.098772 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.098840 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.101915 4720 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.104477 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.104502 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.104636 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.104919 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.105166 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.141812 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.146594 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.146641 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.146663 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.146686 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.146712 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.146760 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.146782 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.146805 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: 
\"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.146890 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.146912 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.146932 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.146954 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.146974 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.146996 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.147044 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.147069 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.147090 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.147140 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.147183 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.147223 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.147269 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.147292 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.147317 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.147383 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.147408 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.147430 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.147451 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.147474 4720 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.147495 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.147516 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.147542 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.147564 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.147586 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.147608 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.147633 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.147703 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.147725 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Oct 13 
17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.147746 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.147767 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.147790 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.147812 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.147792 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.147835 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.147920 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.147924 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.147954 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.147981 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.148008 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.148030 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.148055 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.148078 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.148101 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.148254 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.148331 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.148342 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.148374 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.148401 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.148423 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.148443 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.148444 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.148465 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.148515 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.148555 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.148566 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.148599 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.148613 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.148639 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.148677 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.148712 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.148742 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.148774 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.148746 4720 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.148806 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.148841 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.148870 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.148879 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.148937 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.148975 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.149011 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.149060 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.149096 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.149128 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.149160 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.149221 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.149254 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.149369 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.149402 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.149436 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.149472 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.149505 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.149540 4720 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.149574 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.149607 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.149639 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.149673 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.149710 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.149745 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.149780 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.149816 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.149849 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.149885 4720 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.149920 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.149058 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.149208 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.149221 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.149310 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.149413 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.149451 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.149491 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.149507 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.149768 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.149844 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.149843 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.149913 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.149922 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.150165 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.150337 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.150507 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.150580 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.150600 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.150604 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.150689 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.150820 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.150931 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). 
InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.151003 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.151107 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.151317 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.151631 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: E1013 17:24:35.151810 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 17:24:35.651789216 +0000 UTC m=+21.109039358 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.152637 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.153241 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.153290 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.153607 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.153873 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.149953 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.153930 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.153956 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.153981 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.154062 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod 
\"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.154067 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.154288 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.154316 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.154487 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.154512 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.154510 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.154539 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.154563 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.154588 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.154617 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.154639 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.154633 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.154661 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.154668 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.154687 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.154747 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.154759 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.154781 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.154826 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.154866 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.154968 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.155006 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.155039 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.155046 
4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.155079 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.155114 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.155146 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.155185 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.155263 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.155295 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.155326 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.155359 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.155389 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: 
\"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.155407 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.155424 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.155457 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.155488 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.155520 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.155553 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.155589 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.155620 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.155652 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.155681 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.155711 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.155746 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.155782 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.155818 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.155853 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.155884 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.155920 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.155951 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.155982 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.156014 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.156049 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.156084 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.156116 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.156153 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.156215 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.156252 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.156310 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.156343 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.156382 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.156417 4720 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.156451 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.156521 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.156557 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.156591 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.156623 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.156657 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.156693 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.156724 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.156746 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.156768 4720 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.156791 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.156813 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.156835 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.156858 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.156883 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.156907 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.156940 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.156975 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.157009 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.157031 4720 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.157055 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.157076 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.157098 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.157121 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.157144 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.157168 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.157236 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.157268 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.157292 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 13 
17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.157317 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.157344 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.157378 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.157401 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.157426 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.157457 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.157481 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.157503 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.157525 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.157551 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Oct 13 
17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.157574 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.157637 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.157720 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.157796 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.157848 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.157923 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.157978 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.158029 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.158065 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.158092 4720 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.158117 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.158144 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.158169 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.158242 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.158266 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.158291 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.158315 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.158341 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
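While these mounts for the network pods proceed, the failed hostpath-provisioner TearDown above will keep being retried until the driver re-registers with kubelet. One way to see what is actually registered is to read the node's CSINode object; a sketch assuming client-go, a kubeconfig at the default location, and the node name crc taken from the log:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes ~/.kube/config is reachable; adjust for in-cluster use.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Node name "crc" comes from the log lines above.
	csiNode, err := cs.StorageV1().CSINodes().Get(context.TODO(), "crc", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, d := range csiNode.Spec.Drivers {
		// kubevirt.io.hostpath-provisioner should appear here once its
		// node plugin has re-registered after the restart.
		fmt.Println("registered CSI driver:", d.Name)
	}
}
```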
\"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.158364 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.158459 4720 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.158475 4720 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.158491 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.158506 4720 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.158520 4720 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.158536 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.158549 4720 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.158562 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.158576 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.158590 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.158603 4720 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.158617 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.158631 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.158644 4720 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.158659 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.158673 4720 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.158686 4720 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.158699 4720 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.158714 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.158727 4720 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.158743 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.158757 4720 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.158772 4720 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.158785 4720 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" 
(UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.158800 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.158814 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.158827 4720 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.158840 4720 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.158855 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.158870 4720 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.158883 4720 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.158897 4720 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.158911 4720 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.158925 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.158941 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.158954 4720 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.158968 4720 reconciler_common.go:293] "Volume detached for volume 
\"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.158981 4720 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.158995 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.159012 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.159026 4720 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.159040 4720 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.159052 4720 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.159065 4720 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.159078 4720 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.159120 4720 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.159134 4720 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.159149 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.159164 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.159177 
4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.161987 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.162280 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.155418 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.155641 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.155702 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.156006 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.156097 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.165313 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.156160 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.156352 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.156394 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.157130 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.157391 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.157450 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.157741 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.157757 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.165453 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.157796 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). 
InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.157945 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.158038 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.158058 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.158445 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.158473 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.158493 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.158491 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.158944 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.159097 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.159244 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.159360 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.159635 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.159740 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.159965 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.160352 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.160385 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.160485 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.160563 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.160705 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.165649 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.160596 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.160824 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.160969 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.165759 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.160993 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.161008 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.161458 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.161498 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.161608 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.161643 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.161924 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.162022 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.162680 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.162843 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.162859 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.162905 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.166037 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.163023 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.163302 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.166073 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.163433 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.163486 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.163525 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.163877 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.164577 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.164955 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.165003 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.165131 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.162893 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.166264 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.166320 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.166361 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.166713 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.166782 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.167167 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.167180 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.167247 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.167330 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.166427 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.167379 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.167606 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.167660 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.167691 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.167717 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.167754 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.167788 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.167980 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.167992 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: E1013 17:24:35.168573 4720 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 13 17:24:35 crc kubenswrapper[4720]: E1013 17:24:35.168639 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-13 17:24:35.668619454 +0000 UTC m=+21.125869716 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 13 17:24:35 crc kubenswrapper[4720]: E1013 17:24:35.168710 4720 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 13 17:24:35 crc kubenswrapper[4720]: E1013 17:24:35.168738 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-13 17:24:35.668730957 +0000 UTC m=+21.125981219 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.169887 4720 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.170533 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.170602 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.170624 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). 
InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.170781 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.171271 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.171306 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.184980 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.185247 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.185349 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: E1013 17:24:35.186851 4720 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 13 17:24:35 crc kubenswrapper[4720]: E1013 17:24:35.186870 4720 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 13 17:24:35 crc kubenswrapper[4720]: E1013 17:24:35.186884 4720 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 13 17:24:35 crc kubenswrapper[4720]: E1013 17:24:35.186936 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-13 17:24:35.68691937 +0000 UTC m=+21.144169512 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.187554 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.187756 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.188227 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.188575 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.189861 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.191476 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.191691 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.191859 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.191881 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.192282 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.192834 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.192848 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.192867 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.193180 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.193383 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.193461 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.193646 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.193692 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.196325 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.196361 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.194722 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.196429 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.196380 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.194977 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.194830 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.195480 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.195671 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.196044 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.196174 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.196983 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.197341 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.197396 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: E1013 17:24:35.197776 4720 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 13 17:24:35 crc kubenswrapper[4720]: E1013 17:24:35.197841 4720 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 13 17:24:35 crc kubenswrapper[4720]: E1013 17:24:35.197857 4720 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 13 17:24:35 crc kubenswrapper[4720]: E1013 17:24:35.197943 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-13 17:24:35.697920326 +0000 UTC m=+21.155170458 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.198130 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.198298 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.198592 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.198703 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.200506 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.200714 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.201475 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.201739 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.201664 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.202221 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.203374 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.203551 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.205052 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.205106 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.205174 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.205392 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.205436 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.206021 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.205992 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.207077 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.207455 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.208569 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.208966 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.210821 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.211226 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.211410 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.211475 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.212459 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.212887 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.214651 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.215241 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.217397 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.217894 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.220140 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.220141 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.220285 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.221236 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.222111 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.223305 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.223613 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.224030 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.225536 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.225780 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.227140 4720 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.228722 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.231183 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.233957 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.235422 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.237177 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.237583 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.242139 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.242879 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.244111 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.246064 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.246240 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.246204 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.247154 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.248843 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.250179 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.251580 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.252624 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.253993 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.254679 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.256044 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.256326 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.256804 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.257279 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.259265 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.259536 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: 
\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.259617 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.259670 4720 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.259680 4720 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.259690 4720 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.259699 4720 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.259707 4720 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.259715 4720 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.259724 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.259732 4720 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.259741 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.259749 4720 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.259757 4720 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 
17:24:35.259766 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.259773 4720 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.259781 4720 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.259788 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.259797 4720 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.259805 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.259814 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.259823 4720 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.260049 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.260144 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.259832 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.260517 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.260530 4720 reconciler_common.go:293] 
"Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.260538 4720 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.260546 4720 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.260555 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.260564 4720 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.260572 4720 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.260581 4720 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.260589 4720 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.260597 4720 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.260604 4720 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.260612 4720 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.260620 4720 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.260628 4720 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.260638 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: 
\"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.260646 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.260655 4720 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.260662 4720 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.260671 4720 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.260679 4720 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.260688 4720 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.260696 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.260704 4720 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.260804 4720 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.260816 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.260827 4720 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.260836 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.260847 4720 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: 
\"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.260855 4720 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.260863 4720 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.260873 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.260882 4720 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.260890 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.260899 4720 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.260907 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.260915 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.260925 4720 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.260934 4720 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.260942 4720 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.260951 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.260959 4720 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.260967 4720 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.260976 4720 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.260984 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.260993 4720 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.261001 4720 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.261011 4720 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.261019 4720 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.261028 4720 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.261036 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.261056 4720 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.261065 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.261073 4720 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.261081 4720 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.261090 4720 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.261098 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.261105 4720 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.261113 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.261121 4720 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.261128 4720 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.261136 4720 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.261144 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.261152 4720 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.261160 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.261169 4720 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.261177 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 
17:24:35.261202 4720 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.261216 4720 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.261224 4720 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.261233 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.261241 4720 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.261248 4720 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.261256 4720 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.261263 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.261271 4720 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.261280 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.261288 4720 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.261295 4720 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.261304 4720 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.261314 4720 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.261322 4720 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.261335 4720 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.261344 4720 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.261353 4720 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.261360 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.261368 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.261376 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.261384 4720 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.261398 4720 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.261406 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.261415 4720 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.261424 4720 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.261432 4720 reconciler_common.go:293] "Volume detached 
for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.261439 4720 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.261447 4720 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.261455 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.261463 4720 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.261471 4720 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.261478 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.261486 4720 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.261494 4720 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.261502 4720 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.261509 4720 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.261517 4720 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.261524 4720 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.261550 4720 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.261560 4720 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.261567 4720 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.261575 4720 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.261584 4720 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.261593 4720 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.261602 4720 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.261610 4720 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.261623 4720 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.261631 4720 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.261639 4720 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.261647 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.261655 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.261663 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" 
(UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.261671 4720 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.261678 4720 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.261686 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.261694 4720 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.261702 4720 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.262751 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.263250 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.264144 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.264995 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.265638 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.266078 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.266595 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.274733 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.284281 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.285301 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.287118 4720 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b7977e20d8558810fb9f8f827830d0378de7d15c71340d396a95b506902c0962" exitCode=255 Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.287178 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"b7977e20d8558810fb9f8f827830d0378de7d15c71340d396a95b506902c0962"} Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.292296 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.298398 4720 scope.go:117] "RemoveContainer" containerID="b7977e20d8558810fb9f8f827830d0378de7d15c71340d396a95b506902c0962" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.298647 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.302332 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.312764 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.323519 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.332586 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.343233 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.353356 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.417016 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.429585 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.436557 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 13 17:24:35 crc kubenswrapper[4720]: W1013 17:24:35.439649 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-2f55a3a5379accdee8fb5aac7317e7ab1ae25028c83f5ea36d49981898fba756 WatchSource:0}: Error finding container 2f55a3a5379accdee8fb5aac7317e7ab1ae25028c83f5ea36d49981898fba756: Status 404 returned error can't find the container with id 2f55a3a5379accdee8fb5aac7317e7ab1ae25028c83f5ea36d49981898fba756 Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.462066 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 13 17:24:35 crc kubenswrapper[4720]: W1013 17:24:35.463402 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-58dc9e29907cf09ace499d5219675080bb575c2ced46e454e8def559ebd4e528 WatchSource:0}: Error finding container 58dc9e29907cf09ace499d5219675080bb575c2ced46e454e8def559ebd4e528: Status 404 returned error can't find the container with id 58dc9e29907cf09ace499d5219675080bb575c2ced46e454e8def559ebd4e528 Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.474019 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.475368 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.483124 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea323a1a-c607-40cf-b61c-b7ff03399c45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88b8a46e2e2c2342bd22a4a9b42cadd0188ec83f23815773f6ffb46be79fdf83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7d875b0d7eb9f13bc697597a33f7196598ae679980b13842e92acf20477526f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e18d0e9df43d366a6e7088cc3d5bdc794882828f60f008eaf29e788e0141ac2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7977e20d8558810fb9f8f827830d0378de7d15c71340d396a95b506902c0962\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7977e20d8558810fb9f8f827830d0378de7d15c71340d396a95b506902c0962\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-13T17:24:34Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1013 17:24:28.736714 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1013 17:24:28.738546 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445610647/tls.crt::/tmp/serving-cert-445610647/tls.key\\\\\\\"\\\\nI1013 17:24:34.860768 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1013 17:24:34.864611 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1013 17:24:34.864632 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1013 17:24:34.864654 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1013 17:24:34.864660 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1013 17:24:34.871230 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1013 17:24:34.871248 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1013 17:24:34.871273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 17:24:34.871290 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 17:24:34.871305 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1013 17:24:34.871314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1013 17:24:34.871324 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1013 17:24:34.871335 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1013 17:24:34.872239 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd536f783e3f8f974f25772994c3d00a01c2ba66997a6e06236f1b83b890f8f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00ad2086cbe25df22e52552acfb410e252eb17a610267b47eb7cfc5dced1c9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00ad2086cbe25df22e52552acfb410e252eb17a610267b47eb7cfc5dced1c9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.498476 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.508780 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.520590 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.531141 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.544988 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.563678 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.577971 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea323a1a-c607-40cf-b61c-b7ff03399c45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88b8a46e2e2c2342bd22a4a9b42cadd0188ec83f23815773f6ffb46be79fdf83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7d875b0d7eb9f13bc697597a33f7196598ae679980b13842e92acf20477526f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e18d0e9df43d366a6e7088cc3d5bdc794882828f60f008eaf29e788e0141ac2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7977e20d8558810fb9f8f827830d0378de7d15c71340d396a95b506902c0962\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7977e20d8558810fb9f8f827830d0378de7d15c71340d396a95b506902c0962\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-13
T17:24:34Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1013 17:24:28.736714 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1013 17:24:28.738546 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445610647/tls.crt::/tmp/serving-cert-445610647/tls.key\\\\\\\"\\\\nI1013 17:24:34.860768 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1013 17:24:34.864611 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1013 17:24:34.864632 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1013 17:24:34.864654 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1013 17:24:34.864660 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1013 17:24:34.871230 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1013 17:24:34.871248 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1013 17:24:34.871273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 17:24:34.871290 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 17:24:34.871305 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1013 17:24:34.871314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1013 17:24:34.871324 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1013 17:24:34.871335 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1013 17:24:34.872239 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd536f783e3f8f974f25772994c3d00a01c2ba66997a6e06236f1b83b890f8f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00ad2086cbe25df22e52552acfb410e252eb17a610267b47eb7cfc5dced1c9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00ad2086cbe25df22e52552acfb410e252eb17a610267b47eb7cfc5dced1c9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.591249 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7baf94d8-250c-4568-ab1c-a1429b8caf70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435ba16189d6d19892aaab158512c1a84a72103779e3cc4b2c634de344583c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e566f3b66d404dd8da93bba1745b9e67cd63532330591f5b735f6826c07eae2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ec64662f56c2609ebac1383a78840ba4e0d9fb1fe10e666169285d3852e6b40\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://32d8f079c2674e0c6bea1837755db23bc3205da6a01ac1896d6293226d9c6fb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.604673 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.615337 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.625692 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.636303 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.652830 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.664581 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.664682 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 17:24:35 crc kubenswrapper[4720]: E1013 17:24:35.664904 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 17:24:36.66487716 +0000 UTC m=+22.122127322 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.766242 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.766322 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 17:24:35 crc kubenswrapper[4720]: I1013 17:24:35.766362 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 17:24:35 crc kubenswrapper[4720]: 
I1013 17:24:35.766401 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 17:24:35 crc kubenswrapper[4720]: E1013 17:24:35.766447 4720 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 13 17:24:35 crc kubenswrapper[4720]: E1013 17:24:35.766540 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-13 17:24:36.766519334 +0000 UTC m=+22.223769556 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 13 17:24:35 crc kubenswrapper[4720]: E1013 17:24:35.766561 4720 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 13 17:24:35 crc kubenswrapper[4720]: E1013 17:24:35.766587 4720 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 13 17:24:35 crc kubenswrapper[4720]: E1013 17:24:35.766605 4720 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 13 17:24:35 crc kubenswrapper[4720]: E1013 17:24:35.766670 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-13 17:24:36.766649037 +0000 UTC m=+22.223899199 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 13 17:24:35 crc kubenswrapper[4720]: E1013 17:24:35.766743 4720 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 13 17:24:35 crc kubenswrapper[4720]: E1013 17:24:35.766759 4720 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 13 17:24:35 crc kubenswrapper[4720]: E1013 17:24:35.766776 4720 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 13 17:24:35 crc kubenswrapper[4720]: E1013 17:24:35.766816 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-13 17:24:36.766803011 +0000 UTC m=+22.224053183 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 13 17:24:35 crc kubenswrapper[4720]: E1013 17:24:35.766867 4720 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 13 17:24:35 crc kubenswrapper[4720]: E1013 17:24:35.766956 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-13 17:24:36.766934445 +0000 UTC m=+22.224184617 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 13 17:24:36 crc kubenswrapper[4720]: I1013 17:24:36.291844 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"705766b44d481300d0fd8cc654aa2dc9046733e5a71ad3a7ab789f85e737dcee"} Oct 13 17:24:36 crc kubenswrapper[4720]: I1013 17:24:36.291889 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"c8a69b8825692cbbb7cce0a52b1ccb7b137ce6c9f8a8d788fff95bb7228cb40e"} Oct 13 17:24:36 crc kubenswrapper[4720]: I1013 17:24:36.291899 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"2f55a3a5379accdee8fb5aac7317e7ab1ae25028c83f5ea36d49981898fba756"} Oct 13 17:24:36 crc kubenswrapper[4720]: I1013 17:24:36.294016 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Oct 13 17:24:36 crc kubenswrapper[4720]: I1013 17:24:36.296106 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ef2ba92cea782dd4fd31a511e1086777b58c00340014dff74ff6615b07bd10c5"} Oct 13 17:24:36 crc kubenswrapper[4720]: I1013 17:24:36.296507 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 13 17:24:36 crc kubenswrapper[4720]: I1013 17:24:36.297138 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"58dc9e29907cf09ace499d5219675080bb575c2ced46e454e8def559ebd4e528"} Oct 13 17:24:36 crc kubenswrapper[4720]: I1013 17:24:36.298294 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"f856022da70859465022db433585bd8cad12f8cd2aeae7943491d3f7e5049256"} Oct 13 17:24:36 crc kubenswrapper[4720]: I1013 17:24:36.298323 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"ada67d3bc09c6d91a07ca37499dd5152a0919a747f9c82bf5d97c0331ecb11c5"} Oct 13 17:24:36 crc kubenswrapper[4720]: I1013 17:24:36.312829 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea323a1a-c607-40cf-b61c-b7ff03399c45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88b8a46e2e2c2342bd22a4a9b42cadd0188ec83f23815773f6ffb46be79fdf83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7d875b0d7eb9f13bc697597a33f7196598ae679980b13842e92acf20477526f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e18d0e9df43d366a6e7088cc3d5bdc794882828f60f008eaf29e788e0141ac2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7977e20d8558810fb9f8f827830d0378de7d15c71340d396a95b506902c0962\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7977e20d8558810fb9f8f827830d0378de7d15c71340d396a95b506902c0962\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-13T17:24:34Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1013 17:24:28.736714 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1013 17:24:28.738546 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445610647/tls.crt::/tmp/serving-cert-445610647/tls.key\\\\\\\"\\\\nI1013 17:24:34.860768 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1013 17:24:34.864611 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1013 17:24:34.864632 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1013 17:24:34.864654 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1013 17:24:34.864660 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1013 17:24:34.871230 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1013 17:24:34.871248 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1013 17:24:34.871273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 17:24:34.871290 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 17:24:34.871305 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1013 17:24:34.871314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1013 17:24:34.871324 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1013 17:24:34.871335 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1013 17:24:34.872239 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd536f783e3f8f974f25772994c3d00a01c2ba66997a6e06236f1b83b890f8f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00ad2086cbe25df22e52552acfb410e252eb17a610267b47eb7cfc5dced1c9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00ad2086cbe25df22e52552acfb410e252eb17a610267b47eb7cfc5dced1c9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:36Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:36 crc kubenswrapper[4720]: I1013 17:24:36.336464 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:36Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:36 crc kubenswrapper[4720]: I1013 17:24:36.359010 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:36Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:36 crc kubenswrapper[4720]: I1013 17:24:36.377402 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://705766b44d481300d0fd8cc654aa2dc9046733e5a71ad3a7ab789f85e737dcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a69b8825692cbbb7cce0a52b1ccb7b137ce6c9f8a8d788fff95bb7228cb40e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:36Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:36 crc kubenswrapper[4720]: I1013 17:24:36.392584 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:36Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:36 crc kubenswrapper[4720]: I1013 17:24:36.414126 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7baf94d8-250c-4568-ab1c-a1429b8caf70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435ba16189d6d19892aaab158512c1a84a72103779e3cc4b2c634de344583c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e566f3b66d404dd8da93bba1745b9e67cd63532330591f5b735f6826c07eae2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ec64662f56c2609ebac1383a78840ba4e0d9fb1fe10e666169285d3852e6b40\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://32d8f079c2674e0c6bea1837755db23bc3205da6a01ac1896d6293226d9c6fb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:36Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:36 crc kubenswrapper[4720]: I1013 17:24:36.432976 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:36Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:36 crc kubenswrapper[4720]: I1013 17:24:36.447510 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:36Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:36 crc kubenswrapper[4720]: I1013 17:24:36.466367 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea323a1a-c607-40cf-b61c-b7ff03399c45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88b8a46e2e2c2342bd22a4a9b42cadd0188ec83f23815773f6ffb46be79fdf83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7d875b0d7eb9f13bc697597a33f7196598ae679980b13842e92acf20477526f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e18d0e9df43d366a6e7088cc3d5bdc794882828f60f008eaf29e788e0141ac2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef2ba92cea782dd4fd31a511e1086777b58c00340014dff74ff6615b07bd10c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7977e20d8558810fb9f8f827830d0378de7d15c71340d396a95b506902c0962\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-13T17:24:34Z\\\",\\\"message\\\":\\\"lling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1013 17:24:28.736714 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1013 17:24:28.738546 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445610647/tls.crt::/tmp/serving-cert-445610647/tls.key\\\\\\\"\\\\nI1013 17:24:34.860768 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1013 17:24:34.864611 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1013 17:24:34.864632 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1013 17:24:34.864654 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1013 17:24:34.864660 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1013 17:24:34.871230 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1013 17:24:34.871248 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1013 17:24:34.871273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 17:24:34.871290 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 17:24:34.871305 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1013 17:24:34.871314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1013 17:24:34.871324 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1013 17:24:34.871335 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1013 17:24:34.872239 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd536f783e3f8f974f25772994c3d00a01c2ba66997a6e06236f1b83b890f8f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00ad2086cbe25df22e52552acfb410e252eb17a610267b47eb7cfc5dced1c9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00ad2086cbe25df22e52552acfb410e252eb17a610267b47eb7cfc5dced1c9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:36Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:36 crc kubenswrapper[4720]: I1013 17:24:36.481930 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:36Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:36 crc kubenswrapper[4720]: I1013 17:24:36.504109 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:36Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:36 crc kubenswrapper[4720]: I1013 17:24:36.525723 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://705766b44d481300d0fd8cc654aa2dc9046733e5a71ad3a7ab789f85e737dcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a69b8825692cbbb7cce0a52b1ccb7b137ce6c9f8a8d788fff95bb7228cb40e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:36Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:36 crc kubenswrapper[4720]: I1013 17:24:36.540284 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:36Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:36 crc kubenswrapper[4720]: I1013 17:24:36.559119 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7baf94d8-250c-4568-ab1c-a1429b8caf70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435ba16189d6d19892aaab158512c1a84a72103779e3cc4b2c634de344583c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e566f3b66d404dd8da93bba1745b9e67cd63532330591f5b735f6826c07eae2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ec64662f56c2609ebac1383a78840ba4e0d9fb1fe10e666169285d3852e6b40\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://32d8f079c2674e0c6bea1837755db23bc3205da6a01ac1896d6293226d9c6fb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:36Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:36 crc kubenswrapper[4720]: I1013 17:24:36.577253 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f856022da70859465022db433585bd8cad12f8cd2aeae7943491d3f7e5049256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:36Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:36 crc kubenswrapper[4720]: I1013 17:24:36.595226 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:36Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:36 crc kubenswrapper[4720]: I1013 17:24:36.674650 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 17:24:36 crc kubenswrapper[4720]: E1013 17:24:36.674964 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 17:24:38.674944919 +0000 UTC m=+24.132195061 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 17:24:36 crc kubenswrapper[4720]: I1013 17:24:36.776020 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 17:24:36 crc kubenswrapper[4720]: I1013 17:24:36.776067 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 17:24:36 crc kubenswrapper[4720]: I1013 17:24:36.776089 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 17:24:36 crc kubenswrapper[4720]: I1013 17:24:36.776111 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 17:24:36 crc kubenswrapper[4720]: E1013 17:24:36.776232 4720 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 13 17:24:36 crc kubenswrapper[4720]: E1013 17:24:36.776261 4720 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 13 17:24:36 crc kubenswrapper[4720]: E1013 17:24:36.776301 4720 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 13 17:24:36 crc kubenswrapper[4720]: E1013 17:24:36.776304 4720 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 13 17:24:36 crc kubenswrapper[4720]: E1013 17:24:36.776321 4720 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 13 17:24:36 crc 
kubenswrapper[4720]: E1013 17:24:36.776382 4720 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 13 17:24:36 crc kubenswrapper[4720]: E1013 17:24:36.776418 4720 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 13 17:24:36 crc kubenswrapper[4720]: E1013 17:24:36.776439 4720 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 13 17:24:36 crc kubenswrapper[4720]: E1013 17:24:36.776280 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-13 17:24:38.776265474 +0000 UTC m=+24.233515606 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 13 17:24:36 crc kubenswrapper[4720]: E1013 17:24:36.776505 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-13 17:24:38.77648624 +0000 UTC m=+24.233736372 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 13 17:24:36 crc kubenswrapper[4720]: E1013 17:24:36.776521 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-13 17:24:38.776515641 +0000 UTC m=+24.233765773 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 13 17:24:36 crc kubenswrapper[4720]: E1013 17:24:36.776532 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-13 17:24:38.776527491 +0000 UTC m=+24.233777623 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 13 17:24:37 crc kubenswrapper[4720]: I1013 17:24:37.168082 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 17:24:37 crc kubenswrapper[4720]: I1013 17:24:37.168206 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 17:24:37 crc kubenswrapper[4720]: E1013 17:24:37.168289 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 13 17:24:37 crc kubenswrapper[4720]: E1013 17:24:37.168344 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 13 17:24:37 crc kubenswrapper[4720]: I1013 17:24:37.168219 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 17:24:37 crc kubenswrapper[4720]: E1013 17:24:37.168417 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 13 17:24:37 crc kubenswrapper[4720]: I1013 17:24:37.173987 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Oct 13 17:24:37 crc kubenswrapper[4720]: I1013 17:24:37.174902 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Oct 13 17:24:37 crc kubenswrapper[4720]: I1013 17:24:37.175675 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Oct 13 17:24:37 crc kubenswrapper[4720]: I1013 17:24:37.176297 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Oct 13 17:24:37 crc kubenswrapper[4720]: I1013 17:24:37.176898 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Oct 13 17:24:37 crc kubenswrapper[4720]: I1013 17:24:37.177540 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Oct 13 17:24:37 crc kubenswrapper[4720]: I1013 17:24:37.178076 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Oct 13 17:24:37 crc kubenswrapper[4720]: I1013 17:24:37.178643 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Oct 13 17:24:37 crc kubenswrapper[4720]: I1013 17:24:37.179385 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Oct 13 17:24:37 crc kubenswrapper[4720]: I1013 17:24:37.179870 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Oct 13 17:24:37 crc kubenswrapper[4720]: I1013 17:24:37.180398 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Oct 13 17:24:37 crc kubenswrapper[4720]: I1013 17:24:37.181078 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Oct 13 17:24:37 crc kubenswrapper[4720]: I1013 17:24:37.181511 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Oct 13 17:24:37 crc kubenswrapper[4720]: I1013 17:24:37.950495 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Oct 13 17:24:37 
crc kubenswrapper[4720]: I1013 17:24:37.967727 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Oct 13 17:24:37 crc kubenswrapper[4720]: I1013 17:24:37.971251 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea323a1a-c607-40cf-b61c-b7ff03399c45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88b8a46e2e2c2342bd22a4a9b42cadd0188ec83f23815773f6ffb46be79fdf83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7d875b0d7eb9f13bc697597a33f7196598ae679980b13842e92acf20477526f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e18d0e9df43d366a6e7088cc3d5bdc794882828f60f008eaf29e788e0141ac2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e63
55e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef2ba92cea782dd4fd31a511e1086777b58c00340014dff74ff6615b07bd10c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7977e20d8558810fb9f8f827830d0378de7d15c71340d396a95b506902c0962\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-13T17:24:34Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1013 17:24:28.736714 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1013 17:24:28.738546 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445610647/tls.crt::/tmp/serving-cert-445610647/tls.key\\\\\\\"\\\\nI1013 17:24:34.860768 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1013 17:24:34.864611 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1013 17:24:34.864632 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1013 17:24:34.864654 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1013 17:24:34.864660 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1013 17:24:34.871230 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1013 17:24:34.871248 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1013 17:24:34.871273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 17:24:34.871290 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 17:24:34.871305 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1013 17:24:34.871314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1013 17:24:34.871324 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1013 17:24:34.871335 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1013 17:24:34.872239 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd536f783e3f8f974f25772994c3d00a01c2ba66997a6e06236f1b83b890f8f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00ad2086cbe25df22e52552acfb410e252eb17a610267b47eb7cfc5dced1c9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00ad2086cbe25df22e52552acfb410e252eb17a610267b47eb7cfc5dced1c9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:37Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:37 crc kubenswrapper[4720]: I1013 17:24:37.971822 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Oct 13 17:24:37 crc kubenswrapper[4720]: I1013 17:24:37.985248 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:37Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:37 crc kubenswrapper[4720]: I1013 17:24:37.998801 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://705766b44d481300d0fd8cc654aa2dc9046733e5a71ad3a7ab789f85e737dcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a69b8825692cbbb7cce0a52b1ccb7b137ce6c9f8a8d788fff95bb7228cb40e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:37Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:38 crc kubenswrapper[4720]: I1013 17:24:38.016458 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:38Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:38 crc kubenswrapper[4720]: I1013 17:24:38.033754 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7baf94d8-250c-4568-ab1c-a1429b8caf70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435ba16189d6d19892aaab158512c1a84a72103779e3cc4b2c634de344583c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e566f3b66d404dd8da93bba1745b9e67cd63532330591f5b735f6826c07eae2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ec64662f56c2609ebac1383a78840ba4e0d9fb1fe10e666169285d3852e6b40\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://32d8f079c2674e0c6bea1837755db23bc3205da6a01ac1896d6293226d9c6fb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:38Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:38 crc kubenswrapper[4720]: I1013 17:24:38.049850 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:38Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:38 crc kubenswrapper[4720]: I1013 17:24:38.064161 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f856022da70859465022db433585bd8cad12f8cd2aeae7943491d3f7e5049256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:38Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:38 crc kubenswrapper[4720]: I1013 17:24:38.089579 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:38Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:38 crc kubenswrapper[4720]: I1013 17:24:38.117678 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:38Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:38 crc kubenswrapper[4720]: I1013 17:24:38.144360 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:38Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:38 crc kubenswrapper[4720]: I1013 17:24:38.162766 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://705766b44d481300d0fd8cc654aa2dc9046733e5a71ad3a7ab789f85e737dcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a69b8825692cbbb7cce0a52b1ccb7b137ce6c9f8a8d788fff95bb7228cb40e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:38Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:38 crc kubenswrapper[4720]: I1013 17:24:38.176167 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:38Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:38 crc kubenswrapper[4720]: I1013 17:24:38.197128 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a1f1696-ed7f-4482-a300-ca25b68e09e3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2507a30017fbd9947ca456b84812f0b26baabf572768870f5a34b3c7699443cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc60672e282de4d3d6a2d8fd64306ec0fc0d032ade655f88afb0c0b9e0a8e589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-m
etrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9cf903094dae4e559bcef3e269240670b5d4693344d600779c394ce1e039b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06eea6556daedf913d87b8b0527b337070528b07cc026de878b4df332598a2d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b452d113d48729b6d4ab59f80138b15e7cb16cc16eef4ec9916901a067986d07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94a4c50dd473ea159477a3032070031c70d5a9217c054f826e3c0af6ae4df563\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94a4c50dd473ea159477a3032070031c70d5a9217c054f826e3c0af6ae4df563\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41221277cc046beb4578e4f6a0c7030df64342da1be667a920080a429836c657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41221277cc046beb4578e4f6a0c7030df64342da1be667a920080a429836c657\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d443225a5db6a200176bd16ba94dd92c1e81eef85d61cff623cfe7959077ef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d443225a5db6a200176bd16ba94dd92c1e81eef85d61cff623cfe7959077ef6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:38Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:38 crc kubenswrapper[4720]: I1013 17:24:38.211656 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7baf94d8-250c-4568-ab1c-a1429b8caf70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435ba16189d6d19892aaab158512c1a84a72103779e3cc4b2c634de344583c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e566f3b66d404dd8da93bba1745b9e67cd63532330591f5b735f6826c07eae2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ec64662f56c2609ebac1383a78840ba4e0d9fb1fe10e666169285d3852e6b40\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://32d8f079c2674e0c6bea1837755db23bc3205da6a01ac1896d6293226d9c6fb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:38Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:38 crc kubenswrapper[4720]: I1013 17:24:38.228347 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f856022da70859465022db433585bd8cad12f8cd2aeae7943491d3f7e5049256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:38Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:38 crc kubenswrapper[4720]: I1013 17:24:38.242542 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:38Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:38 crc kubenswrapper[4720]: I1013 17:24:38.264429 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea323a1a-c607-40cf-b61c-b7ff03399c45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88b8a46e2e2c2342bd22a4a9b42cadd0188ec83f23815773f6ffb46be79fdf83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7d875b0d7eb9f13bc697597a33f7196598ae679980b13842e92acf20477526f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e18d0e9df43d366a6e7088cc3d5bdc794882828f60f008eaf29e788e0141ac2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef2ba92cea782dd4fd31a511e1086777b58c00340014dff74ff6615b07bd10c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://b7977e20d8558810fb9f8f827830d0378de7d15c71340d396a95b506902c0962\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-13T17:24:34Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1013 17:24:28.736714 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1013 17:24:28.738546 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445610647/tls.crt::/tmp/serving-cert-445610647/tls.key\\\\\\\"\\\\nI1013 17:24:34.860768 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1013 17:24:34.864611 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1013 17:24:34.864632 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1013 17:24:34.864654 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1013 17:24:34.864660 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1013 17:24:34.871230 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1013 17:24:34.871248 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1013 17:24:34.871273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 17:24:34.871290 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 17:24:34.871305 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1013 17:24:34.871314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1013 17:24:34.871324 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1013 17:24:34.871335 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1013 17:24:34.872239 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd536f783e3f8f974f25772994c3d00a01c2ba66997a6e06236f1b83b890f8f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00ad2086cbe25df22e52552acfb410e252eb17a610267b47eb7cfc5dced1c9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00ad2086cbe25df22e52552acfb410e252eb17a610267b47eb7cfc5dced1c9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:38Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:38 crc kubenswrapper[4720]: I1013 17:24:38.696084 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 17:24:38 crc kubenswrapper[4720]: E1013 17:24:38.696312 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 17:24:42.69628246 +0000 UTC m=+28.153532582 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 17:24:38 crc kubenswrapper[4720]: I1013 17:24:38.797649 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 17:24:38 crc kubenswrapper[4720]: I1013 17:24:38.797709 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 17:24:38 crc kubenswrapper[4720]: I1013 17:24:38.797737 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 17:24:38 crc kubenswrapper[4720]: I1013 17:24:38.797765 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 17:24:38 crc kubenswrapper[4720]: E1013 17:24:38.797889 4720 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 13 17:24:38 crc kubenswrapper[4720]: E1013 17:24:38.797890 4720 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 13 17:24:38 crc kubenswrapper[4720]: E1013 17:24:38.797959 4720 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 13 17:24:38 crc kubenswrapper[4720]: E1013 17:24:38.797910 4720 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 13 17:24:38 crc kubenswrapper[4720]: E1013 17:24:38.798013 4720 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 13 17:24:38 crc kubenswrapper[4720]: E1013 17:24:38.798021 4720 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod 
openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 13 17:24:38 crc kubenswrapper[4720]: E1013 17:24:38.798036 4720 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 13 17:24:38 crc kubenswrapper[4720]: E1013 17:24:38.797987 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-13 17:24:42.797963314 +0000 UTC m=+28.255213476 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 13 17:24:38 crc kubenswrapper[4720]: E1013 17:24:38.798133 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-13 17:24:42.798103328 +0000 UTC m=+28.255353490 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 13 17:24:38 crc kubenswrapper[4720]: E1013 17:24:38.798157 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-13 17:24:42.798145799 +0000 UTC m=+28.255395971 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 13 17:24:38 crc kubenswrapper[4720]: E1013 17:24:38.798248 4720 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 13 17:24:38 crc kubenswrapper[4720]: E1013 17:24:38.798316 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-13 17:24:42.798293423 +0000 UTC m=+28.255543685 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 13 17:24:39 crc kubenswrapper[4720]: I1013 17:24:39.167639 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 17:24:39 crc kubenswrapper[4720]: I1013 17:24:39.167676 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 17:24:39 crc kubenswrapper[4720]: E1013 17:24:39.167779 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 13 17:24:39 crc kubenswrapper[4720]: I1013 17:24:39.167639 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 17:24:39 crc kubenswrapper[4720]: E1013 17:24:39.167962 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 13 17:24:39 crc kubenswrapper[4720]: E1013 17:24:39.168054 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 13 17:24:39 crc kubenswrapper[4720]: I1013 17:24:39.308449 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"2f85e4aab30e7f49ef05e48cc22fc22e0291e32604a8f74ae1cc2ea57b5889e8"} Oct 13 17:24:39 crc kubenswrapper[4720]: I1013 17:24:39.341346 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://705766b44d481300d0fd8cc654aa2dc9046733e5a71ad3a7ab789f85e737dcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a69b8825692cbbb7cce0a52b1ccb7b137ce6c9f8a8d788fff95bb7228cb40e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:39Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:39 crc kubenswrapper[4720]: I1013 17:24:39.359966 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f85e4aab30e7f49ef05e48cc22fc22e0291e32604a8f74ae1cc2ea57b5889e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:39Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:39 crc kubenswrapper[4720]: I1013 17:24:39.392018 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a1f1696-ed7f-4482-a300-ca25b68e09e3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2507a30017fbd9947ca456b84812f0b26baabf572768870f5a34b3c7699443cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc60672e282de4d3d6a2d8fd64306ec0fc0d032ade655f88afb0c0b9e0a8e589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9cf903094dae4e559bcef3e269240670b5d4693344d600779c394ce1e039b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06eea6556daedf913d87b8b0527b337070528b0
7cc026de878b4df332598a2d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b452d113d48729b6d4ab59f80138b15e7cb16cc16eef4ec9916901a067986d07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94a4c50dd473ea159477a3032070031c70d5a9217c054f826e3c0af6ae4df563\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94a4c50dd473ea159477a3032070031c70d5a9217c054f826e3c0af6ae4df563\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41221277cc046beb4578e4f6a0c7030df64342da1be667a920080a429836c657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41221277cc046beb4578e4f6a0c7030df64342da1be667a920080a429836c657\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d443225a5db6a200176bd16ba94dd92c1e81eef85d61cff623cfe7959077ef6\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d443225a5db6a200176bd16ba94dd92c1e81eef85d61cff623cfe7959077ef6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:39Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:39 crc kubenswrapper[4720]: I1013 17:24:39.408730 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7baf94d8-250c-4568-ab1c-a1429b8caf70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435ba16189d6d19892aaab158512c1a84a72103779e3cc4b2c634de344583c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e566f3b66d404dd8da93bba1745b9e67cd63532330591f5b735f6826c07eae2c\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ec64662f56c2609ebac1383a78840ba4e0d9fb1fe10e666169285d3852e6b40\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://32d8f079c2674e0c6bea1837755db23bc3205da6a01ac1896d6293226d9c6fb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:39Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:39 crc kubenswrapper[4720]: I1013 17:24:39.426764 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:39Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:39 crc kubenswrapper[4720]: I1013 17:24:39.449662 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:39Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:39 crc kubenswrapper[4720]: I1013 17:24:39.470176 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f856022da70859465022db433585bd8cad12f8cd2aeae7943491d3f7e5049256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:39Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:39 crc kubenswrapper[4720]: I1013 17:24:39.481213 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:39Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:39 crc kubenswrapper[4720]: I1013 17:24:39.493986 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea323a1a-c607-40cf-b61c-b7ff03399c45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88b8a46e2e2c2342bd22a4a9b42cadd0188ec83f23815773f6ffb46be79fdf83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7d875b0d7eb9f13bc697597a33f7196598ae679980b13842e92acf20477526f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e18d0e9df43d366a6e7088cc3d5bdc794882828f60f008eaf29e788e0141ac2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef2ba92cea782dd4fd31a511e1086777b58c00340014dff74ff6615b07bd10c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://b7977e20d8558810fb9f8f827830d0378de7d15c71340d396a95b506902c0962\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-13T17:24:34Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1013 17:24:28.736714 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1013 17:24:28.738546 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445610647/tls.crt::/tmp/serving-cert-445610647/tls.key\\\\\\\"\\\\nI1013 17:24:34.860768 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1013 17:24:34.864611 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1013 17:24:34.864632 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1013 17:24:34.864654 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1013 17:24:34.864660 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1013 17:24:34.871230 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1013 17:24:34.871248 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1013 17:24:34.871273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 17:24:34.871290 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 17:24:34.871305 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1013 17:24:34.871314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1013 17:24:34.871324 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1013 17:24:34.871335 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1013 17:24:34.872239 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd536f783e3f8f974f25772994c3d00a01c2ba66997a6e06236f1b83b890f8f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00ad2086cbe25df22e52552acfb410e252eb17a610267b47eb7cfc5dced1c9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00ad2086cbe25df22e52552acfb410e252eb17a610267b47eb7cfc5dced1c9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:39Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:39 crc kubenswrapper[4720]: I1013 17:24:39.502668 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-pmjlm"] Oct 13 17:24:39 crc kubenswrapper[4720]: I1013 17:24:39.502942 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-pmjlm" Oct 13 17:24:39 crc kubenswrapper[4720]: I1013 17:24:39.504540 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Oct 13 17:24:39 crc kubenswrapper[4720]: I1013 17:24:39.505082 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Oct 13 17:24:39 crc kubenswrapper[4720]: I1013 17:24:39.505723 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Oct 13 17:24:39 crc kubenswrapper[4720]: I1013 17:24:39.514736 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7baf94d8-250c-4568-ab1c-a1429b8caf70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435ba16189d6d19892aaab158512c1a84a72103779e3cc4b2c634de344583c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e566f3b66d404dd8da93bba1745b9e67cd63532330591f5b735f6826c07eae2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ec64662f56c2609ebac1383a78840ba4e0d9fb1fe10e666169285d3852e6b40\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed0
8287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://32d8f079c2674e0c6bea1837755db23bc3205da6a01ac1896d6293226d9c6fb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:39Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:39 crc kubenswrapper[4720]: I1013 17:24:39.523696 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:39Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:39 crc kubenswrapper[4720]: I1013 17:24:39.535400 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:39Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:39 crc kubenswrapper[4720]: I1013 17:24:39.545460 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://705766b44d481300d0fd8cc654aa2dc9046733e5a71ad3a7ab789f85e737dcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a69b8825692cbbb7cce0a52b1ccb7b137ce6c9f8a8d788fff95bb7228cb40e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:39Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:39 crc kubenswrapper[4720]: I1013 17:24:39.555365 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f85e4aab30e7f49ef05e48cc22fc22e0291e32604a8f74ae1cc2ea57b5889e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:39Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:39 crc kubenswrapper[4720]: I1013 17:24:39.572111 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a1f1696-ed7f-4482-a300-ca25b68e09e3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2507a30017fbd9947ca456b84812f0b26baabf572768870f5a34b3c7699443cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc60672e282de4d3d6a2d8fd64306ec0fc0d032ade655f88afb0c0b9e0a8e589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9cf903094dae4e559bcef3e269240670b5d4693344d600779c394ce1e039b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06eea6556daedf913d87b8b0527b337070528b0
7cc026de878b4df332598a2d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b452d113d48729b6d4ab59f80138b15e7cb16cc16eef4ec9916901a067986d07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94a4c50dd473ea159477a3032070031c70d5a9217c054f826e3c0af6ae4df563\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94a4c50dd473ea159477a3032070031c70d5a9217c054f826e3c0af6ae4df563\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41221277cc046beb4578e4f6a0c7030df64342da1be667a920080a429836c657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41221277cc046beb4578e4f6a0c7030df64342da1be667a920080a429836c657\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d443225a5db6a200176bd16ba94dd92c1e81eef85d61cff623cfe7959077ef6\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d443225a5db6a200176bd16ba94dd92c1e81eef85d61cff623cfe7959077ef6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:39Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:39 crc kubenswrapper[4720]: I1013 17:24:39.583374 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:39Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:39 crc kubenswrapper[4720]: I1013 17:24:39.593018 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pmjlm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f85220d6-f940-4538-806a-2b26bacd7b09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7795z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:39Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pmjlm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:39Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:39 crc kubenswrapper[4720]: I1013 17:24:39.603220 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7795z\" (UniqueName: \"kubernetes.io/projected/f85220d6-f940-4538-806a-2b26bacd7b09-kube-api-access-7795z\") pod \"node-resolver-pmjlm\" (UID: \"f85220d6-f940-4538-806a-2b26bacd7b09\") " pod="openshift-dns/node-resolver-pmjlm" Oct 13 17:24:39 crc kubenswrapper[4720]: I1013 17:24:39.603353 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f85220d6-f940-4538-806a-2b26bacd7b09-hosts-file\") pod \"node-resolver-pmjlm\" (UID: \"f85220d6-f940-4538-806a-2b26bacd7b09\") " pod="openshift-dns/node-resolver-pmjlm" Oct 13 17:24:39 crc kubenswrapper[4720]: I1013 17:24:39.605405 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f856022da70859465022db433585bd8cad12f8cd2aeae7943491d3f7e5049256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:39Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:39 crc kubenswrapper[4720]: I1013 17:24:39.617037 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea323a1a-c607-40cf-b61c-b7ff03399c45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88b8a46e2e2c2342bd22a4a9b42cadd0188ec83f23815773f6ffb46be79fdf83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7d875b0d7eb9f13bc697597a33f7196598ae679980b13842e92acf20477526f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e18d0e9df43d366a6e7088cc3d5bdc794882828f60f008eaf29e788e0141ac2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef2ba92cea782dd4fd31a511e1086777b58c00340014dff74ff6615b07bd10c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7977e20d8558810fb9f8f827830d0378de7d15c71340d396a95b506902c0962\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-13T17:24:34Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1013 17:24:28.736714 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1013 17:24:28.738546 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445610647/tls.crt::/tmp/serving-cert-445610647/tls.key\\\\\\\"\\\\nI1013 17:24:34.860768 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1013 17:24:34.864611 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1013 17:24:34.864632 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1013 17:24:34.864654 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1013 17:24:34.864660 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1013 17:24:34.871230 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1013 17:24:34.871248 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1013 17:24:34.871273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 17:24:34.871290 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 17:24:34.871305 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1013 17:24:34.871314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1013 17:24:34.871324 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1013 17:24:34.871335 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1013 17:24:34.872239 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd536f783e3f8f974f25772994c3d00a01c2ba66997a6e06236f1b83b890f8f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00ad2086cbe25df22e52552acfb410e252eb17a610267b47eb7cfc5dced1c9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00ad2086cbe25df22e52552acfb410e252eb17a610267b47eb7cfc5dced1c9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:39Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:39 crc kubenswrapper[4720]: I1013 17:24:39.704549 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7795z\" (UniqueName: \"kubernetes.io/projected/f85220d6-f940-4538-806a-2b26bacd7b09-kube-api-access-7795z\") pod \"node-resolver-pmjlm\" (UID: \"f85220d6-f940-4538-806a-2b26bacd7b09\") " pod="openshift-dns/node-resolver-pmjlm" Oct 13 17:24:39 crc kubenswrapper[4720]: I1013 17:24:39.704593 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f85220d6-f940-4538-806a-2b26bacd7b09-hosts-file\") pod \"node-resolver-pmjlm\" (UID: \"f85220d6-f940-4538-806a-2b26bacd7b09\") " pod="openshift-dns/node-resolver-pmjlm" Oct 13 17:24:39 crc kubenswrapper[4720]: I1013 17:24:39.704669 4720 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f85220d6-f940-4538-806a-2b26bacd7b09-hosts-file\") pod \"node-resolver-pmjlm\" (UID: \"f85220d6-f940-4538-806a-2b26bacd7b09\") " pod="openshift-dns/node-resolver-pmjlm" Oct 13 17:24:39 crc kubenswrapper[4720]: I1013 17:24:39.720435 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7795z\" (UniqueName: \"kubernetes.io/projected/f85220d6-f940-4538-806a-2b26bacd7b09-kube-api-access-7795z\") pod \"node-resolver-pmjlm\" (UID: \"f85220d6-f940-4538-806a-2b26bacd7b09\") " pod="openshift-dns/node-resolver-pmjlm" Oct 13 17:24:39 crc kubenswrapper[4720]: I1013 17:24:39.813718 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-pmjlm" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.293640 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-htwnl"] Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.293994 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.294059 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-lxmjt"] Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.294390 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-lxmjt" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.295916 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.296064 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.296245 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.296350 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.296895 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-5rgfd"] Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.297049 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.297299 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.297324 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.297593 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-5rgfd" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.298150 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.298226 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.298256 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.301544 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.301673 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.301910 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-pn6lz"] Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.302817 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.306849 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.307198 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.307586 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.308251 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.308351 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.308593 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.311556 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-pmjlm" event={"ID":"f85220d6-f940-4538-806a-2b26bacd7b09","Type":"ContainerStarted","Data":"6315723f6cd375a392c8c427a737021b7e01415c902514b09ee43407749caa34"} Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.311701 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-pmjlm" event={"ID":"f85220d6-f940-4538-806a-2b26bacd7b09","Type":"ContainerStarted","Data":"0a6248198df52fbd45a464d60c8c4d01263d023166f73169e694244130a60ee4"} Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.312255 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.324659 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a1f1696-ed7f-4482-a300-ca25b68e09e3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2507a30017fbd9947ca456b84812f0b26baabf572768870f5a34b3c7699443cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc60672e282de4d3d6a2d8fd64306ec0fc0d032ade655f88afb0c0b9e0a8e589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9cf903094dae4e559bcef3e269240670b5d4693344d600779c394ce1e039b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06eea6556daedf913d87b8b0527b337070528b0
7cc026de878b4df332598a2d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b452d113d48729b6d4ab59f80138b15e7cb16cc16eef4ec9916901a067986d07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94a4c50dd473ea159477a3032070031c70d5a9217c054f826e3c0af6ae4df563\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94a4c50dd473ea159477a3032070031c70d5a9217c054f826e3c0af6ae4df563\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41221277cc046beb4578e4f6a0c7030df64342da1be667a920080a429836c657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41221277cc046beb4578e4f6a0c7030df64342da1be667a920080a429836c657\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d443225a5db6a200176bd16ba94dd92c1e81eef85d61cff623cfe7959077ef6\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d443225a5db6a200176bd16ba94dd92c1e81eef85d61cff623cfe7959077ef6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:40Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.336265 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:40Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.348393 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:40Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.361025 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea323a1a-c607-40cf-b61c-b7ff03399c45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88b8a46e2e2c2342bd22a4a9b42cadd0188ec83f23815773f6ffb46be79fdf83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7d875b0d7eb9f13bc697597a33f7196598ae679980b13842e92acf20477526f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e18d0e9df43d366a6e7088cc3d5bdc794882828f60f008eaf29e788e0141ac2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef2ba92cea782dd4fd31a511e1086777b58c00340014dff74ff6615b07bd10c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7977e20d8558810fb9f8f827830d0378de7d15c71340d396a95b506902c0962\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-13T17:24:34Z\\\",\\\"message\\\":\\\"lling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1013 17:24:28.736714 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1013 17:24:28.738546 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445610647/tls.crt::/tmp/serving-cert-445610647/tls.key\\\\\\\"\\\\nI1013 17:24:34.860768 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1013 17:24:34.864611 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1013 17:24:34.864632 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1013 17:24:34.864654 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1013 17:24:34.864660 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1013 17:24:34.871230 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1013 17:24:34.871248 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1013 17:24:34.871273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 17:24:34.871290 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 17:24:34.871305 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1013 17:24:34.871314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1013 17:24:34.871324 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1013 17:24:34.871335 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1013 17:24:34.872239 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd536f783e3f8f974f25772994c3d00a01c2ba66997a6e06236f1b83b890f8f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00ad2086cbe25df22e52552acfb410e252eb17a610267b47eb7cfc5dced1c9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00ad2086cbe25df22e52552acfb410e252eb17a610267b47eb7cfc5dced1c9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:40Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.376997 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lxmjt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b45ec2d-5bea-4007-a49f-224a866f93eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xsrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lxmjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:40Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.389455 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7baf94d8-250c-4568-ab1c-a1429b8caf70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435ba16189d6d19892aaab158512c1a84a72103779e3cc4b2c634de344583c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e566f3b66d404dd8da93bba1745b9e67cd63532330591f5b735f6826c07eae2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ec64662f56c2609ebac1383a78840ba4e0d9fb1fe10e666169285d3852e6b40\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://32d8f079c2674e0c6bea1837755db23bc3205da6a01ac1896d6293226d9c6fb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:40Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.404519 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:40Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.410614 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8064812e-b6aa-4f56-81c9-16154c00abad-etc-openvswitch\") pod \"ovnkube-node-pn6lz\" (UID: \"8064812e-b6aa-4f56-81c9-16154c00abad\") " pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.410864 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/7b45ec2d-5bea-4007-a49f-224a866f93eb-hostroot\") pod \"multus-lxmjt\" (UID: \"7b45ec2d-5bea-4007-a49f-224a866f93eb\") " pod="openshift-multus/multus-lxmjt" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.410951 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ce442c80-fcde-4b79-b6f9-f8f25771dfd4-mcd-auth-proxy-config\") pod \"machine-config-daemon-htwnl\" (UID: \"ce442c80-fcde-4b79-b6f9-f8f25771dfd4\") " pod="openshift-machine-config-operator/machine-config-daemon-htwnl" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.411040 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2d7408e5-b529-4396-92b3-2fed275c3250-tuning-conf-dir\") pod \"multus-additional-cni-plugins-5rgfd\" (UID: \"2d7408e5-b529-4396-92b3-2fed275c3250\") " pod="openshift-multus/multus-additional-cni-plugins-5rgfd" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.411138 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8064812e-b6aa-4f56-81c9-16154c00abad-host-slash\") pod \"ovnkube-node-pn6lz\" (UID: \"8064812e-b6aa-4f56-81c9-16154c00abad\") " pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.411230 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2d7408e5-b529-4396-92b3-2fed275c3250-system-cni-dir\") pod \"multus-additional-cni-plugins-5rgfd\" (UID: \"2d7408e5-b529-4396-92b3-2fed275c3250\") " pod="openshift-multus/multus-additional-cni-plugins-5rgfd" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.411312 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7b45ec2d-5bea-4007-a49f-224a866f93eb-os-release\") pod \"multus-lxmjt\" (UID: \"7b45ec2d-5bea-4007-a49f-224a866f93eb\") " pod="openshift-multus/multus-lxmjt" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.411388 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7b45ec2d-5bea-4007-a49f-224a866f93eb-cni-binary-copy\") pod \"multus-lxmjt\" (UID: \"7b45ec2d-5bea-4007-a49f-224a866f93eb\") " pod="openshift-multus/multus-lxmjt" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.411467 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7b45ec2d-5bea-4007-a49f-224a866f93eb-multus-conf-dir\") pod \"multus-lxmjt\" (UID: \"7b45ec2d-5bea-4007-a49f-224a866f93eb\") " pod="openshift-multus/multus-lxmjt" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.411541 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2d7408e5-b529-4396-92b3-2fed275c3250-os-release\") pod \"multus-additional-cni-plugins-5rgfd\" (UID: \"2d7408e5-b529-4396-92b3-2fed275c3250\") " pod="openshift-multus/multus-additional-cni-plugins-5rgfd" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.411610 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mrnw\" (UniqueName: \"kubernetes.io/projected/2d7408e5-b529-4396-92b3-2fed275c3250-kube-api-access-9mrnw\") pod \"multus-additional-cni-plugins-5rgfd\" (UID: \"2d7408e5-b529-4396-92b3-2fed275c3250\") " pod="openshift-multus/multus-additional-cni-plugins-5rgfd" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.411702 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8064812e-b6aa-4f56-81c9-16154c00abad-run-systemd\") pod \"ovnkube-node-pn6lz\" (UID: \"8064812e-b6aa-4f56-81c9-16154c00abad\") " pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.411781 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8064812e-b6aa-4f56-81c9-16154c00abad-env-overrides\") pod \"ovnkube-node-pn6lz\" (UID: \"8064812e-b6aa-4f56-81c9-16154c00abad\") " pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.411855 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7b45ec2d-5bea-4007-a49f-224a866f93eb-cnibin\") pod \"multus-lxmjt\" (UID: \"7b45ec2d-5bea-4007-a49f-224a866f93eb\") " pod="openshift-multus/multus-lxmjt" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.411931 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/7b45ec2d-5bea-4007-a49f-224a866f93eb-multus-socket-dir-parent\") pod \"multus-lxmjt\" (UID: \"7b45ec2d-5bea-4007-a49f-224a866f93eb\") " pod="openshift-multus/multus-lxmjt" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.412002 4720 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xsrv\" (UniqueName: \"kubernetes.io/projected/7b45ec2d-5bea-4007-a49f-224a866f93eb-kube-api-access-6xsrv\") pod \"multus-lxmjt\" (UID: \"7b45ec2d-5bea-4007-a49f-224a866f93eb\") " pod="openshift-multus/multus-lxmjt" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.412075 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8064812e-b6aa-4f56-81c9-16154c00abad-ovnkube-script-lib\") pod \"ovnkube-node-pn6lz\" (UID: \"8064812e-b6aa-4f56-81c9-16154c00abad\") " pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.412152 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7b45ec2d-5bea-4007-a49f-224a866f93eb-host-run-netns\") pod \"multus-lxmjt\" (UID: \"7b45ec2d-5bea-4007-a49f-224a866f93eb\") " pod="openshift-multus/multus-lxmjt" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.412251 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7b45ec2d-5bea-4007-a49f-224a866f93eb-etc-kubernetes\") pod \"multus-lxmjt\" (UID: \"7b45ec2d-5bea-4007-a49f-224a866f93eb\") " pod="openshift-multus/multus-lxmjt" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.412327 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8064812e-b6aa-4f56-81c9-16154c00abad-ovnkube-config\") pod \"ovnkube-node-pn6lz\" (UID: \"8064812e-b6aa-4f56-81c9-16154c00abad\") " pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.412409 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4cjp\" (UniqueName: \"kubernetes.io/projected/ce442c80-fcde-4b79-b6f9-f8f25771dfd4-kube-api-access-c4cjp\") pod \"machine-config-daemon-htwnl\" (UID: \"ce442c80-fcde-4b79-b6f9-f8f25771dfd4\") " pod="openshift-machine-config-operator/machine-config-daemon-htwnl" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.412511 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8064812e-b6aa-4f56-81c9-16154c00abad-run-openvswitch\") pod \"ovnkube-node-pn6lz\" (UID: \"8064812e-b6aa-4f56-81c9-16154c00abad\") " pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.412734 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rm5x2\" (UniqueName: \"kubernetes.io/projected/8064812e-b6aa-4f56-81c9-16154c00abad-kube-api-access-rm5x2\") pod \"ovnkube-node-pn6lz\" (UID: \"8064812e-b6aa-4f56-81c9-16154c00abad\") " pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.412808 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/7b45ec2d-5bea-4007-a49f-224a866f93eb-host-var-lib-cni-multus\") pod \"multus-lxmjt\" (UID: \"7b45ec2d-5bea-4007-a49f-224a866f93eb\") " pod="openshift-multus/multus-lxmjt" Oct 
13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.412900 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8064812e-b6aa-4f56-81c9-16154c00abad-run-ovn\") pod \"ovnkube-node-pn6lz\" (UID: \"8064812e-b6aa-4f56-81c9-16154c00abad\") " pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.412975 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8064812e-b6aa-4f56-81c9-16154c00abad-node-log\") pod \"ovnkube-node-pn6lz\" (UID: \"8064812e-b6aa-4f56-81c9-16154c00abad\") " pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.413043 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7b45ec2d-5bea-4007-a49f-224a866f93eb-host-var-lib-cni-bin\") pod \"multus-lxmjt\" (UID: \"7b45ec2d-5bea-4007-a49f-224a866f93eb\") " pod="openshift-multus/multus-lxmjt" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.413124 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/7b45ec2d-5bea-4007-a49f-224a866f93eb-host-var-lib-kubelet\") pod \"multus-lxmjt\" (UID: \"7b45ec2d-5bea-4007-a49f-224a866f93eb\") " pod="openshift-multus/multus-lxmjt" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.413219 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2d7408e5-b529-4396-92b3-2fed275c3250-cnibin\") pod \"multus-additional-cni-plugins-5rgfd\" (UID: \"2d7408e5-b529-4396-92b3-2fed275c3250\") " pod="openshift-multus/multus-additional-cni-plugins-5rgfd" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.413302 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8064812e-b6aa-4f56-81c9-16154c00abad-host-cni-netd\") pod \"ovnkube-node-pn6lz\" (UID: \"8064812e-b6aa-4f56-81c9-16154c00abad\") " pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.413387 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8064812e-b6aa-4f56-81c9-16154c00abad-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pn6lz\" (UID: \"8064812e-b6aa-4f56-81c9-16154c00abad\") " pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.413472 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/7b45ec2d-5bea-4007-a49f-224a866f93eb-multus-daemon-config\") pod \"multus-lxmjt\" (UID: \"7b45ec2d-5bea-4007-a49f-224a866f93eb\") " pod="openshift-multus/multus-lxmjt" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.413549 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2d7408e5-b529-4396-92b3-2fed275c3250-cni-binary-copy\") pod \"multus-additional-cni-plugins-5rgfd\" 
(UID: \"2d7408e5-b529-4396-92b3-2fed275c3250\") " pod="openshift-multus/multus-additional-cni-plugins-5rgfd" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.413637 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8064812e-b6aa-4f56-81c9-16154c00abad-host-run-ovn-kubernetes\") pod \"ovnkube-node-pn6lz\" (UID: \"8064812e-b6aa-4f56-81c9-16154c00abad\") " pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.413711 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8064812e-b6aa-4f56-81c9-16154c00abad-ovn-node-metrics-cert\") pod \"ovnkube-node-pn6lz\" (UID: \"8064812e-b6aa-4f56-81c9-16154c00abad\") " pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.413892 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7b45ec2d-5bea-4007-a49f-224a866f93eb-system-cni-dir\") pod \"multus-lxmjt\" (UID: \"7b45ec2d-5bea-4007-a49f-224a866f93eb\") " pod="openshift-multus/multus-lxmjt" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.413969 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/7b45ec2d-5bea-4007-a49f-224a866f93eb-host-run-multus-certs\") pod \"multus-lxmjt\" (UID: \"7b45ec2d-5bea-4007-a49f-224a866f93eb\") " pod="openshift-multus/multus-lxmjt" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.414043 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/2d7408e5-b529-4396-92b3-2fed275c3250-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-5rgfd\" (UID: \"2d7408e5-b529-4396-92b3-2fed275c3250\") " pod="openshift-multus/multus-additional-cni-plugins-5rgfd" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.414134 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8064812e-b6aa-4f56-81c9-16154c00abad-host-kubelet\") pod \"ovnkube-node-pn6lz\" (UID: \"8064812e-b6aa-4f56-81c9-16154c00abad\") " pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.414225 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8064812e-b6aa-4f56-81c9-16154c00abad-systemd-units\") pod \"ovnkube-node-pn6lz\" (UID: \"8064812e-b6aa-4f56-81c9-16154c00abad\") " pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.414302 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/7b45ec2d-5bea-4007-a49f-224a866f93eb-host-run-k8s-cni-cncf-io\") pod \"multus-lxmjt\" (UID: \"7b45ec2d-5bea-4007-a49f-224a866f93eb\") " pod="openshift-multus/multus-lxmjt" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.414377 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" 
(UniqueName: \"kubernetes.io/secret/ce442c80-fcde-4b79-b6f9-f8f25771dfd4-proxy-tls\") pod \"machine-config-daemon-htwnl\" (UID: \"ce442c80-fcde-4b79-b6f9-f8f25771dfd4\") " pod="openshift-machine-config-operator/machine-config-daemon-htwnl" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.414454 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8064812e-b6aa-4f56-81c9-16154c00abad-log-socket\") pod \"ovnkube-node-pn6lz\" (UID: \"8064812e-b6aa-4f56-81c9-16154c00abad\") " pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.414552 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8064812e-b6aa-4f56-81c9-16154c00abad-host-run-netns\") pod \"ovnkube-node-pn6lz\" (UID: \"8064812e-b6aa-4f56-81c9-16154c00abad\") " pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.414649 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8064812e-b6aa-4f56-81c9-16154c00abad-var-lib-openvswitch\") pod \"ovnkube-node-pn6lz\" (UID: \"8064812e-b6aa-4f56-81c9-16154c00abad\") " pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.414727 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ce442c80-fcde-4b79-b6f9-f8f25771dfd4-rootfs\") pod \"machine-config-daemon-htwnl\" (UID: \"ce442c80-fcde-4b79-b6f9-f8f25771dfd4\") " pod="openshift-machine-config-operator/machine-config-daemon-htwnl" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.414805 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8064812e-b6aa-4f56-81c9-16154c00abad-host-cni-bin\") pod \"ovnkube-node-pn6lz\" (UID: \"8064812e-b6aa-4f56-81c9-16154c00abad\") " pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.414871 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7b45ec2d-5bea-4007-a49f-224a866f93eb-multus-cni-dir\") pod \"multus-lxmjt\" (UID: \"7b45ec2d-5bea-4007-a49f-224a866f93eb\") " pod="openshift-multus/multus-lxmjt" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.419783 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://705766b44d481300d0fd8cc654aa2dc9046733e5a71ad3a7ab789f85e737dcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a69b8825692cbbb7cce0a52b1ccb7b137ce6c9f8a8d788fff95bb7228cb40e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:40Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.440092 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f85e4aab30e7f49ef05e48cc22fc22e0291e32604a8f74ae1cc2ea57b5889e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:40Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.455309 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f856022da70859465022db433585bd8cad12f8cd2aeae7943491d3f7e5049256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:40Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.469855 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pmjlm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f85220d6-f940-4538-806a-2b26bacd7b09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7795z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:39Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pmjlm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:40Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.480728 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce442c80-fcde-4b79-b6f9-f8f25771dfd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4cjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4cjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-htwnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:40Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.506694 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7baf94d8-250c-4568-ab1c-a1429b8caf70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435ba16189d6d19892aaab158512c1a84a72103779e3cc4b2c634de344583c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e566f3b66d404dd8da93bba1745b9e67cd63532330591f5b735f6826c07eae2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ec64662f56c2609ebac1383a78840ba4e0d9fb1fe10e666169285d3852e6b40\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://32d8f079c2674e0c6bea1837755db23bc3205da6a01ac1896d6293226d9c6fb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:40Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.515851 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7b45ec2d-5bea-4007-a49f-224a866f93eb-multus-conf-dir\") pod \"multus-lxmjt\" (UID: \"7b45ec2d-5bea-4007-a49f-224a866f93eb\") " pod="openshift-multus/multus-lxmjt" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.515887 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2d7408e5-b529-4396-92b3-2fed275c3250-os-release\") pod \"multus-additional-cni-plugins-5rgfd\" (UID: \"2d7408e5-b529-4396-92b3-2fed275c3250\") " pod="openshift-multus/multus-additional-cni-plugins-5rgfd" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.515903 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mrnw\" (UniqueName: \"kubernetes.io/projected/2d7408e5-b529-4396-92b3-2fed275c3250-kube-api-access-9mrnw\") pod \"multus-additional-cni-plugins-5rgfd\" (UID: \"2d7408e5-b529-4396-92b3-2fed275c3250\") " pod="openshift-multus/multus-additional-cni-plugins-5rgfd" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.515944 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8064812e-b6aa-4f56-81c9-16154c00abad-run-systemd\") pod \"ovnkube-node-pn6lz\" (UID: \"8064812e-b6aa-4f56-81c9-16154c00abad\") " pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.515959 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8064812e-b6aa-4f56-81c9-16154c00abad-env-overrides\") pod \"ovnkube-node-pn6lz\" (UID: \"8064812e-b6aa-4f56-81c9-16154c00abad\") " pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.515975 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7b45ec2d-5bea-4007-a49f-224a866f93eb-cnibin\") pod 
\"multus-lxmjt\" (UID: \"7b45ec2d-5bea-4007-a49f-224a866f93eb\") " pod="openshift-multus/multus-lxmjt" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.515994 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/7b45ec2d-5bea-4007-a49f-224a866f93eb-multus-socket-dir-parent\") pod \"multus-lxmjt\" (UID: \"7b45ec2d-5bea-4007-a49f-224a866f93eb\") " pod="openshift-multus/multus-lxmjt" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.516009 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xsrv\" (UniqueName: \"kubernetes.io/projected/7b45ec2d-5bea-4007-a49f-224a866f93eb-kube-api-access-6xsrv\") pod \"multus-lxmjt\" (UID: \"7b45ec2d-5bea-4007-a49f-224a866f93eb\") " pod="openshift-multus/multus-lxmjt" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.516024 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8064812e-b6aa-4f56-81c9-16154c00abad-ovnkube-script-lib\") pod \"ovnkube-node-pn6lz\" (UID: \"8064812e-b6aa-4f56-81c9-16154c00abad\") " pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.516036 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7b45ec2d-5bea-4007-a49f-224a866f93eb-host-run-netns\") pod \"multus-lxmjt\" (UID: \"7b45ec2d-5bea-4007-a49f-224a866f93eb\") " pod="openshift-multus/multus-lxmjt" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.516049 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7b45ec2d-5bea-4007-a49f-224a866f93eb-etc-kubernetes\") pod \"multus-lxmjt\" (UID: \"7b45ec2d-5bea-4007-a49f-224a866f93eb\") " pod="openshift-multus/multus-lxmjt" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.516074 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8064812e-b6aa-4f56-81c9-16154c00abad-ovnkube-config\") pod \"ovnkube-node-pn6lz\" (UID: \"8064812e-b6aa-4f56-81c9-16154c00abad\") " pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.516092 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4cjp\" (UniqueName: \"kubernetes.io/projected/ce442c80-fcde-4b79-b6f9-f8f25771dfd4-kube-api-access-c4cjp\") pod \"machine-config-daemon-htwnl\" (UID: \"ce442c80-fcde-4b79-b6f9-f8f25771dfd4\") " pod="openshift-machine-config-operator/machine-config-daemon-htwnl" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.516106 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8064812e-b6aa-4f56-81c9-16154c00abad-run-openvswitch\") pod \"ovnkube-node-pn6lz\" (UID: \"8064812e-b6aa-4f56-81c9-16154c00abad\") " pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.516120 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rm5x2\" (UniqueName: \"kubernetes.io/projected/8064812e-b6aa-4f56-81c9-16154c00abad-kube-api-access-rm5x2\") pod \"ovnkube-node-pn6lz\" (UID: \"8064812e-b6aa-4f56-81c9-16154c00abad\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.516134 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/7b45ec2d-5bea-4007-a49f-224a866f93eb-host-var-lib-cni-multus\") pod \"multus-lxmjt\" (UID: \"7b45ec2d-5bea-4007-a49f-224a866f93eb\") " pod="openshift-multus/multus-lxmjt" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.516156 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8064812e-b6aa-4f56-81c9-16154c00abad-node-log\") pod \"ovnkube-node-pn6lz\" (UID: \"8064812e-b6aa-4f56-81c9-16154c00abad\") " pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.516171 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7b45ec2d-5bea-4007-a49f-224a866f93eb-host-var-lib-cni-bin\") pod \"multus-lxmjt\" (UID: \"7b45ec2d-5bea-4007-a49f-224a866f93eb\") " pod="openshift-multus/multus-lxmjt" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.516206 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/7b45ec2d-5bea-4007-a49f-224a866f93eb-host-var-lib-kubelet\") pod \"multus-lxmjt\" (UID: \"7b45ec2d-5bea-4007-a49f-224a866f93eb\") " pod="openshift-multus/multus-lxmjt" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.516225 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2d7408e5-b529-4396-92b3-2fed275c3250-cnibin\") pod \"multus-additional-cni-plugins-5rgfd\" (UID: \"2d7408e5-b529-4396-92b3-2fed275c3250\") " pod="openshift-multus/multus-additional-cni-plugins-5rgfd" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.516242 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8064812e-b6aa-4f56-81c9-16154c00abad-run-ovn\") pod \"ovnkube-node-pn6lz\" (UID: \"8064812e-b6aa-4f56-81c9-16154c00abad\") " pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.516263 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/7b45ec2d-5bea-4007-a49f-224a866f93eb-multus-daemon-config\") pod \"multus-lxmjt\" (UID: \"7b45ec2d-5bea-4007-a49f-224a866f93eb\") " pod="openshift-multus/multus-lxmjt" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.516276 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2d7408e5-b529-4396-92b3-2fed275c3250-cni-binary-copy\") pod \"multus-additional-cni-plugins-5rgfd\" (UID: \"2d7408e5-b529-4396-92b3-2fed275c3250\") " pod="openshift-multus/multus-additional-cni-plugins-5rgfd" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.516290 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8064812e-b6aa-4f56-81c9-16154c00abad-host-cni-netd\") pod \"ovnkube-node-pn6lz\" (UID: \"8064812e-b6aa-4f56-81c9-16154c00abad\") " pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.516304 
4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8064812e-b6aa-4f56-81c9-16154c00abad-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pn6lz\" (UID: \"8064812e-b6aa-4f56-81c9-16154c00abad\") " pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.516318 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8064812e-b6aa-4f56-81c9-16154c00abad-host-run-ovn-kubernetes\") pod \"ovnkube-node-pn6lz\" (UID: \"8064812e-b6aa-4f56-81c9-16154c00abad\") " pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.516333 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8064812e-b6aa-4f56-81c9-16154c00abad-ovn-node-metrics-cert\") pod \"ovnkube-node-pn6lz\" (UID: \"8064812e-b6aa-4f56-81c9-16154c00abad\") " pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.516346 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7b45ec2d-5bea-4007-a49f-224a866f93eb-system-cni-dir\") pod \"multus-lxmjt\" (UID: \"7b45ec2d-5bea-4007-a49f-224a866f93eb\") " pod="openshift-multus/multus-lxmjt" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.516359 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/7b45ec2d-5bea-4007-a49f-224a866f93eb-host-run-multus-certs\") pod \"multus-lxmjt\" (UID: \"7b45ec2d-5bea-4007-a49f-224a866f93eb\") " pod="openshift-multus/multus-lxmjt" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.516375 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/2d7408e5-b529-4396-92b3-2fed275c3250-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-5rgfd\" (UID: \"2d7408e5-b529-4396-92b3-2fed275c3250\") " pod="openshift-multus/multus-additional-cni-plugins-5rgfd" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.516396 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8064812e-b6aa-4f56-81c9-16154c00abad-host-kubelet\") pod \"ovnkube-node-pn6lz\" (UID: \"8064812e-b6aa-4f56-81c9-16154c00abad\") " pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.516409 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8064812e-b6aa-4f56-81c9-16154c00abad-systemd-units\") pod \"ovnkube-node-pn6lz\" (UID: \"8064812e-b6aa-4f56-81c9-16154c00abad\") " pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.516424 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/7b45ec2d-5bea-4007-a49f-224a866f93eb-host-run-k8s-cni-cncf-io\") pod \"multus-lxmjt\" (UID: \"7b45ec2d-5bea-4007-a49f-224a866f93eb\") " pod="openshift-multus/multus-lxmjt" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.516437 4720 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ce442c80-fcde-4b79-b6f9-f8f25771dfd4-proxy-tls\") pod \"machine-config-daemon-htwnl\" (UID: \"ce442c80-fcde-4b79-b6f9-f8f25771dfd4\") " pod="openshift-machine-config-operator/machine-config-daemon-htwnl" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.516452 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8064812e-b6aa-4f56-81c9-16154c00abad-log-socket\") pod \"ovnkube-node-pn6lz\" (UID: \"8064812e-b6aa-4f56-81c9-16154c00abad\") " pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.516470 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8064812e-b6aa-4f56-81c9-16154c00abad-host-run-netns\") pod \"ovnkube-node-pn6lz\" (UID: \"8064812e-b6aa-4f56-81c9-16154c00abad\") " pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.516483 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8064812e-b6aa-4f56-81c9-16154c00abad-var-lib-openvswitch\") pod \"ovnkube-node-pn6lz\" (UID: \"8064812e-b6aa-4f56-81c9-16154c00abad\") " pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.516496 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ce442c80-fcde-4b79-b6f9-f8f25771dfd4-rootfs\") pod \"machine-config-daemon-htwnl\" (UID: \"ce442c80-fcde-4b79-b6f9-f8f25771dfd4\") " pod="openshift-machine-config-operator/machine-config-daemon-htwnl" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.516510 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8064812e-b6aa-4f56-81c9-16154c00abad-host-cni-bin\") pod \"ovnkube-node-pn6lz\" (UID: \"8064812e-b6aa-4f56-81c9-16154c00abad\") " pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.516524 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7b45ec2d-5bea-4007-a49f-224a866f93eb-multus-cni-dir\") pod \"multus-lxmjt\" (UID: \"7b45ec2d-5bea-4007-a49f-224a866f93eb\") " pod="openshift-multus/multus-lxmjt" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.516538 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8064812e-b6aa-4f56-81c9-16154c00abad-etc-openvswitch\") pod \"ovnkube-node-pn6lz\" (UID: \"8064812e-b6aa-4f56-81c9-16154c00abad\") " pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.516551 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/7b45ec2d-5bea-4007-a49f-224a866f93eb-hostroot\") pod \"multus-lxmjt\" (UID: \"7b45ec2d-5bea-4007-a49f-224a866f93eb\") " pod="openshift-multus/multus-lxmjt" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.516565 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/ce442c80-fcde-4b79-b6f9-f8f25771dfd4-mcd-auth-proxy-config\") pod \"machine-config-daemon-htwnl\" (UID: \"ce442c80-fcde-4b79-b6f9-f8f25771dfd4\") " pod="openshift-machine-config-operator/machine-config-daemon-htwnl" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.516578 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2d7408e5-b529-4396-92b3-2fed275c3250-tuning-conf-dir\") pod \"multus-additional-cni-plugins-5rgfd\" (UID: \"2d7408e5-b529-4396-92b3-2fed275c3250\") " pod="openshift-multus/multus-additional-cni-plugins-5rgfd" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.516591 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2d7408e5-b529-4396-92b3-2fed275c3250-system-cni-dir\") pod \"multus-additional-cni-plugins-5rgfd\" (UID: \"2d7408e5-b529-4396-92b3-2fed275c3250\") " pod="openshift-multus/multus-additional-cni-plugins-5rgfd" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.516612 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8064812e-b6aa-4f56-81c9-16154c00abad-host-slash\") pod \"ovnkube-node-pn6lz\" (UID: \"8064812e-b6aa-4f56-81c9-16154c00abad\") " pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.516626 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7b45ec2d-5bea-4007-a49f-224a866f93eb-os-release\") pod \"multus-lxmjt\" (UID: \"7b45ec2d-5bea-4007-a49f-224a866f93eb\") " pod="openshift-multus/multus-lxmjt" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.516640 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7b45ec2d-5bea-4007-a49f-224a866f93eb-cni-binary-copy\") pod \"multus-lxmjt\" (UID: \"7b45ec2d-5bea-4007-a49f-224a866f93eb\") " pod="openshift-multus/multus-lxmjt" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.517202 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7b45ec2d-5bea-4007-a49f-224a866f93eb-cni-binary-copy\") pod \"multus-lxmjt\" (UID: \"7b45ec2d-5bea-4007-a49f-224a866f93eb\") " pod="openshift-multus/multus-lxmjt" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.517246 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7b45ec2d-5bea-4007-a49f-224a866f93eb-multus-conf-dir\") pod \"multus-lxmjt\" (UID: \"7b45ec2d-5bea-4007-a49f-224a866f93eb\") " pod="openshift-multus/multus-lxmjt" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.517668 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8064812e-b6aa-4f56-81c9-16154c00abad-host-run-ovn-kubernetes\") pod \"ovnkube-node-pn6lz\" (UID: \"8064812e-b6aa-4f56-81c9-16154c00abad\") " pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.517670 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/8064812e-b6aa-4f56-81c9-16154c00abad-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pn6lz\" (UID: \"8064812e-b6aa-4f56-81c9-16154c00abad\") " pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.517889 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8064812e-b6aa-4f56-81c9-16154c00abad-run-systemd\") pod \"ovnkube-node-pn6lz\" (UID: \"8064812e-b6aa-4f56-81c9-16154c00abad\") " pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.518005 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8064812e-b6aa-4f56-81c9-16154c00abad-run-openvswitch\") pod \"ovnkube-node-pn6lz\" (UID: \"8064812e-b6aa-4f56-81c9-16154c00abad\") " pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.518043 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7b45ec2d-5bea-4007-a49f-224a866f93eb-host-run-netns\") pod \"multus-lxmjt\" (UID: \"7b45ec2d-5bea-4007-a49f-224a866f93eb\") " pod="openshift-multus/multus-lxmjt" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.518068 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2d7408e5-b529-4396-92b3-2fed275c3250-cnibin\") pod \"multus-additional-cni-plugins-5rgfd\" (UID: \"2d7408e5-b529-4396-92b3-2fed275c3250\") " pod="openshift-multus/multus-additional-cni-plugins-5rgfd" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.518097 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8064812e-b6aa-4f56-81c9-16154c00abad-run-ovn\") pod \"ovnkube-node-pn6lz\" (UID: \"8064812e-b6aa-4f56-81c9-16154c00abad\") " pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.518139 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8064812e-b6aa-4f56-81c9-16154c00abad-var-lib-openvswitch\") pod \"ovnkube-node-pn6lz\" (UID: \"8064812e-b6aa-4f56-81c9-16154c00abad\") " pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.518137 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/7b45ec2d-5bea-4007-a49f-224a866f93eb-host-var-lib-kubelet\") pod \"multus-lxmjt\" (UID: \"7b45ec2d-5bea-4007-a49f-224a866f93eb\") " pod="openshift-multus/multus-lxmjt" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.518226 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7b45ec2d-5bea-4007-a49f-224a866f93eb-etc-kubernetes\") pod \"multus-lxmjt\" (UID: \"7b45ec2d-5bea-4007-a49f-224a866f93eb\") " pod="openshift-multus/multus-lxmjt" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.518284 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8064812e-b6aa-4f56-81c9-16154c00abad-node-log\") pod \"ovnkube-node-pn6lz\" (UID: \"8064812e-b6aa-4f56-81c9-16154c00abad\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.518325 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/7b45ec2d-5bea-4007-a49f-224a866f93eb-host-var-lib-cni-multus\") pod \"multus-lxmjt\" (UID: \"7b45ec2d-5bea-4007-a49f-224a866f93eb\") " pod="openshift-multus/multus-lxmjt" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.518355 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8064812e-b6aa-4f56-81c9-16154c00abad-host-cni-netd\") pod \"ovnkube-node-pn6lz\" (UID: \"8064812e-b6aa-4f56-81c9-16154c00abad\") " pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.518383 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8064812e-b6aa-4f56-81c9-16154c00abad-systemd-units\") pod \"ovnkube-node-pn6lz\" (UID: \"8064812e-b6aa-4f56-81c9-16154c00abad\") " pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.518406 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/7b45ec2d-5bea-4007-a49f-224a866f93eb-host-run-multus-certs\") pod \"multus-lxmjt\" (UID: \"7b45ec2d-5bea-4007-a49f-224a866f93eb\") " pod="openshift-multus/multus-lxmjt" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.518428 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8064812e-b6aa-4f56-81c9-16154c00abad-host-kubelet\") pod \"ovnkube-node-pn6lz\" (UID: \"8064812e-b6aa-4f56-81c9-16154c00abad\") " pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.518453 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/7b45ec2d-5bea-4007-a49f-224a866f93eb-hostroot\") pod \"multus-lxmjt\" (UID: \"7b45ec2d-5bea-4007-a49f-224a866f93eb\") " pod="openshift-multus/multus-lxmjt" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.518418 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/7b45ec2d-5bea-4007-a49f-224a866f93eb-multus-socket-dir-parent\") pod \"multus-lxmjt\" (UID: \"7b45ec2d-5bea-4007-a49f-224a866f93eb\") " pod="openshift-multus/multus-lxmjt" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.518492 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ce442c80-fcde-4b79-b6f9-f8f25771dfd4-rootfs\") pod \"machine-config-daemon-htwnl\" (UID: \"ce442c80-fcde-4b79-b6f9-f8f25771dfd4\") " pod="openshift-machine-config-operator/machine-config-daemon-htwnl" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.518516 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8064812e-b6aa-4f56-81c9-16154c00abad-host-cni-bin\") pod \"ovnkube-node-pn6lz\" (UID: \"8064812e-b6aa-4f56-81c9-16154c00abad\") " pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.518525 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" 
(UniqueName: \"kubernetes.io/configmap/8064812e-b6aa-4f56-81c9-16154c00abad-ovnkube-script-lib\") pod \"ovnkube-node-pn6lz\" (UID: \"8064812e-b6aa-4f56-81c9-16154c00abad\") " pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.518541 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7b45ec2d-5bea-4007-a49f-224a866f93eb-host-var-lib-cni-bin\") pod \"multus-lxmjt\" (UID: \"7b45ec2d-5bea-4007-a49f-224a866f93eb\") " pod="openshift-multus/multus-lxmjt" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.518587 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/7b45ec2d-5bea-4007-a49f-224a866f93eb-multus-daemon-config\") pod \"multus-lxmjt\" (UID: \"7b45ec2d-5bea-4007-a49f-224a866f93eb\") " pod="openshift-multus/multus-lxmjt" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.518610 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/7b45ec2d-5bea-4007-a49f-224a866f93eb-host-run-k8s-cni-cncf-io\") pod \"multus-lxmjt\" (UID: \"7b45ec2d-5bea-4007-a49f-224a866f93eb\") " pod="openshift-multus/multus-lxmjt" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.518626 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8064812e-b6aa-4f56-81c9-16154c00abad-etc-openvswitch\") pod \"ovnkube-node-pn6lz\" (UID: \"8064812e-b6aa-4f56-81c9-16154c00abad\") " pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.518641 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8064812e-b6aa-4f56-81c9-16154c00abad-log-socket\") pod \"ovnkube-node-pn6lz\" (UID: \"8064812e-b6aa-4f56-81c9-16154c00abad\") " pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.518659 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8064812e-b6aa-4f56-81c9-16154c00abad-host-run-netns\") pod \"ovnkube-node-pn6lz\" (UID: \"8064812e-b6aa-4f56-81c9-16154c00abad\") " pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.518674 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8064812e-b6aa-4f56-81c9-16154c00abad-host-slash\") pod \"ovnkube-node-pn6lz\" (UID: \"8064812e-b6aa-4f56-81c9-16154c00abad\") " pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.518707 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7b45ec2d-5bea-4007-a49f-224a866f93eb-multus-cni-dir\") pod \"multus-lxmjt\" (UID: \"7b45ec2d-5bea-4007-a49f-224a866f93eb\") " pod="openshift-multus/multus-lxmjt" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.518780 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7b45ec2d-5bea-4007-a49f-224a866f93eb-os-release\") pod \"multus-lxmjt\" (UID: \"7b45ec2d-5bea-4007-a49f-224a866f93eb\") " pod="openshift-multus/multus-lxmjt" Oct 13 17:24:40 crc 
kubenswrapper[4720]: I1013 17:24:40.518789 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8064812e-b6aa-4f56-81c9-16154c00abad-ovnkube-config\") pod \"ovnkube-node-pn6lz\" (UID: \"8064812e-b6aa-4f56-81c9-16154c00abad\") " pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.518833 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7b45ec2d-5bea-4007-a49f-224a866f93eb-cnibin\") pod \"multus-lxmjt\" (UID: \"7b45ec2d-5bea-4007-a49f-224a866f93eb\") " pod="openshift-multus/multus-lxmjt" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.518857 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2d7408e5-b529-4396-92b3-2fed275c3250-cni-binary-copy\") pod \"multus-additional-cni-plugins-5rgfd\" (UID: \"2d7408e5-b529-4396-92b3-2fed275c3250\") " pod="openshift-multus/multus-additional-cni-plugins-5rgfd" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.518891 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7b45ec2d-5bea-4007-a49f-224a866f93eb-system-cni-dir\") pod \"multus-lxmjt\" (UID: \"7b45ec2d-5bea-4007-a49f-224a866f93eb\") " pod="openshift-multus/multus-lxmjt" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.519011 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8064812e-b6aa-4f56-81c9-16154c00abad-env-overrides\") pod \"ovnkube-node-pn6lz\" (UID: \"8064812e-b6aa-4f56-81c9-16154c00abad\") " pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.519024 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/2d7408e5-b529-4396-92b3-2fed275c3250-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-5rgfd\" (UID: \"2d7408e5-b529-4396-92b3-2fed275c3250\") " pod="openshift-multus/multus-additional-cni-plugins-5rgfd" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.519050 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2d7408e5-b529-4396-92b3-2fed275c3250-system-cni-dir\") pod \"multus-additional-cni-plugins-5rgfd\" (UID: \"2d7408e5-b529-4396-92b3-2fed275c3250\") " pod="openshift-multus/multus-additional-cni-plugins-5rgfd" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.519093 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ce442c80-fcde-4b79-b6f9-f8f25771dfd4-mcd-auth-proxy-config\") pod \"machine-config-daemon-htwnl\" (UID: \"ce442c80-fcde-4b79-b6f9-f8f25771dfd4\") " pod="openshift-machine-config-operator/machine-config-daemon-htwnl" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.519132 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2d7408e5-b529-4396-92b3-2fed275c3250-os-release\") pod \"multus-additional-cni-plugins-5rgfd\" (UID: \"2d7408e5-b529-4396-92b3-2fed275c3250\") " pod="openshift-multus/multus-additional-cni-plugins-5rgfd" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.521315 4720 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2d7408e5-b529-4396-92b3-2fed275c3250-tuning-conf-dir\") pod \"multus-additional-cni-plugins-5rgfd\" (UID: \"2d7408e5-b529-4396-92b3-2fed275c3250\") " pod="openshift-multus/multus-additional-cni-plugins-5rgfd" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.521542 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8064812e-b6aa-4f56-81c9-16154c00abad-ovn-node-metrics-cert\") pod \"ovnkube-node-pn6lz\" (UID: \"8064812e-b6aa-4f56-81c9-16154c00abad\") " pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.523083 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ce442c80-fcde-4b79-b6f9-f8f25771dfd4-proxy-tls\") pod \"machine-config-daemon-htwnl\" (UID: \"ce442c80-fcde-4b79-b6f9-f8f25771dfd4\") " pod="openshift-machine-config-operator/machine-config-daemon-htwnl" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.530674 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:40Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.533008 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rm5x2\" (UniqueName: \"kubernetes.io/projected/8064812e-b6aa-4f56-81c9-16154c00abad-kube-api-access-rm5x2\") pod \"ovnkube-node-pn6lz\" (UID: \"8064812e-b6aa-4f56-81c9-16154c00abad\") " pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.533503 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xsrv\" (UniqueName: \"kubernetes.io/projected/7b45ec2d-5bea-4007-a49f-224a866f93eb-kube-api-access-6xsrv\") pod \"multus-lxmjt\" (UID: \"7b45ec2d-5bea-4007-a49f-224a866f93eb\") " pod="openshift-multus/multus-lxmjt" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.535347 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mrnw\" (UniqueName: \"kubernetes.io/projected/2d7408e5-b529-4396-92b3-2fed275c3250-kube-api-access-9mrnw\") pod \"multus-additional-cni-plugins-5rgfd\" (UID: \"2d7408e5-b529-4396-92b3-2fed275c3250\") " pod="openshift-multus/multus-additional-cni-plugins-5rgfd" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.539705 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4cjp\" (UniqueName: \"kubernetes.io/projected/ce442c80-fcde-4b79-b6f9-f8f25771dfd4-kube-api-access-c4cjp\") pod \"machine-config-daemon-htwnl\" (UID: \"ce442c80-fcde-4b79-b6f9-f8f25771dfd4\") " pod="openshift-machine-config-operator/machine-config-daemon-htwnl" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.540373 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://705766b44d481300d0fd8cc654aa2dc9046733e5a71ad3a7ab789f85e737dcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a69b8825692cbbb7cce0a52b1ccb7b137ce6c9f8a8d788fff95bb7228cb40e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:40Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.549408 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f85e4aab30e7f49ef05e48cc22fc22e0291e32604a8f74ae1cc2ea57b5889e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:40Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.560470 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f856022da70859465022db433585bd8cad12f8cd2aeae7943491d3f7e5049256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:40Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.569438 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pmjlm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f85220d6-f940-4538-806a-2b26bacd7b09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6315723f6cd375a392c8c427a737021b7e01415c902514b09ee43407749caa34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7795z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:39Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pmjlm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:40Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.577829 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce442c80-fcde-4b79-b6f9-f8f25771dfd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4cjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4cjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-htwnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:40Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.589336 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5rgfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d7408e5-b529-4396-92b3-2fed275c3250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plu
gin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5rgfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-13T17:24:40Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.604659 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.606793 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a1f1696-ed7f-4482-a300-ca25b68e09e3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2507a30017fbd9947ca456b84812f0b26baabf572768870f5a34b3c7699443cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc60672e282de4d3d6a2d8fd64306ec0fc0d032ade655f88afb0c0b9e0a8e589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9cf903094dae4e559bcef3e269240670b5d4693344d600779c394ce1e039b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc
2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06eea6556daedf913d87b8b0527b337070528b07cc026de878b4df332598a2d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b452d113d48729b6d4ab59f80138b15e7cb16cc16eef4ec9916901a067986d07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94a4c50dd473ea159477a3032070031c70d5a9217c054f826e3c0af6ae4df563\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94a4c50dd473ea159477a3032070031c70d5a9217c054f826e3c0af6ae4df563\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41221277cc046beb4578e4f6a0c7030df64342da1be667a920080a429836c657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\
\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41221277cc046beb4578e4f6a0c7030df64342da1be667a920080a429836c657\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d443225a5db6a200176bd16ba94dd92c1e81eef85d61cff623cfe7959077ef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d443225a5db6a200176bd16ba94dd92c1e81eef85d61cff623cfe7959077ef6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:40Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.611707 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-lxmjt" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.618829 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:40Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.619269 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-5rgfd" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.624555 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.629791 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:40Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:40 crc kubenswrapper[4720]: W1013 17:24:40.632596 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7b45ec2d_5bea_4007_a49f_224a866f93eb.slice/crio-79aff25c4327759d0e329f6015857119c7117de60803dbcc3591933e4db98a21 WatchSource:0}: Error finding container 79aff25c4327759d0e329f6015857119c7117de60803dbcc3591933e4db98a21: Status 404 returned error can't find the container with id 79aff25c4327759d0e329f6015857119c7117de60803dbcc3591933e4db98a21 Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.644601 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea323a1a-c607-40cf-b61c-b7ff03399c45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88b8a46e2e2c2342bd22a4a9b42cadd0188ec83f23815773f6ffb46be79fdf83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7d875b0d7eb9f13bc697597a33f7196598ae679980b13842e92acf20477526f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e18d0e9df43d366a6e7088cc3d5bdc794882828f60f008eaf29e788e0141ac2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef2ba92cea782dd4fd31a511e1086777b58c00340014dff74ff6615b07bd10c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7977e20d8558810fb9f8f827830d0378de7d15c71340d396a95b506902c0962\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-13T17:24:34Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1013 17:24:28.736714 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1013 17:24:28.738546 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445610647/tls.crt::/tmp/serving-cert-445610647/tls.key\\\\\\\"\\\\nI1013 17:24:34.860768 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1013 17:24:34.864611 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1013 17:24:34.864632 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1013 17:24:34.864654 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1013 17:24:34.864660 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1013 17:24:34.871230 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1013 17:24:34.871248 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1013 17:24:34.871273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 17:24:34.871290 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 17:24:34.871305 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1013 17:24:34.871314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1013 17:24:34.871324 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1013 17:24:34.871335 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1013 17:24:34.872239 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd536f783e3f8f974f25772994c3d00a01c2ba66997a6e06236f1b83b890f8f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00ad2086cbe25df22e52552acfb410e252eb17a610267b47eb7cfc5dced1c9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00ad2086cbe25df22e52552acfb410e252eb17a610267b47eb7cfc5dced1c9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:40Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:40 crc kubenswrapper[4720]: W1013 17:24:40.646223 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d7408e5_b529_4396_92b3_2fed275c3250.slice/crio-e6da6e1f28572bf4984bb20824ccfde97b8d4940b13f03e9cebbe9998087a4ef WatchSource:0}: Error finding container e6da6e1f28572bf4984bb20824ccfde97b8d4940b13f03e9cebbe9998087a4ef: Status 404 returned error can't find the container with id e6da6e1f28572bf4984bb20824ccfde97b8d4940b13f03e9cebbe9998087a4ef Oct 13 17:24:40 crc kubenswrapper[4720]: W1013 17:24:40.648644 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8064812e_b6aa_4f56_81c9_16154c00abad.slice/crio-54e24a19ce5ce94b646652618294cdf42d51c0f94f5972c9d9f4274575ff8d6d 
WatchSource:0}: Error finding container 54e24a19ce5ce94b646652618294cdf42d51c0f94f5972c9d9f4274575ff8d6d: Status 404 returned error can't find the container with id 54e24a19ce5ce94b646652618294cdf42d51c0f94f5972c9d9f4274575ff8d6d Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.659980 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lxmjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b45ec2d-5bea-4007-a49f-224a866f93eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xsrv\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lxmjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:40Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:40 crc kubenswrapper[4720]: I1013 17:24:40.693825 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8064812e-b6aa-4f56-81c9-16154c00abad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:40Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.167792 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 17:24:41 crc kubenswrapper[4720]: E1013 17:24:41.168243 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.167908 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 17:24:41 crc kubenswrapper[4720]: E1013 17:24:41.168327 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.167779 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 17:24:41 crc kubenswrapper[4720]: E1013 17:24:41.168386 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.247097 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.248976 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.249022 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.249034 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.249161 4720 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.257539 4720 kubelet_node_status.go:115] "Node was previously registered" node="crc" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.257822 4720 kubelet_node_status.go:79] "Successfully registered node" node="crc" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.258841 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.258903 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.258920 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.258950 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.258976 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:41Z","lastTransitionTime":"2025-10-13T17:24:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:24:41 crc kubenswrapper[4720]: E1013 17:24:41.287778 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a530de0f-daad-4050-9522-69c64a451e75\\\",\\\"systemUUID\\\":\\\"00bb7c43-79d6-45b5-bd02-4b71a0ba6837\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:41Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.291691 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.291745 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.291754 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.291772 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.291782 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:41Z","lastTransitionTime":"2025-10-13T17:24:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:41 crc kubenswrapper[4720]: E1013 17:24:41.308288 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a530de0f-daad-4050-9522-69c64a451e75\\\",\\\"systemUUID\\\":\\\"00bb7c43-79d6-45b5-bd02-4b71a0ba6837\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:41Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.311911 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.311949 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.311958 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.311973 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.311983 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:41Z","lastTransitionTime":"2025-10-13T17:24:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.317325 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" event={"ID":"ce442c80-fcde-4b79-b6f9-f8f25771dfd4","Type":"ContainerStarted","Data":"10e7a946185d6502e37a1ab60ad1eca36cce991f230188d22b2d6e2ea554f920"} Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.317409 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" event={"ID":"ce442c80-fcde-4b79-b6f9-f8f25771dfd4","Type":"ContainerStarted","Data":"864e330267c8cec8487e1dcf4a50bbb2ba37d81d80361f0873e6bf69de1ed721"} Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.317429 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" event={"ID":"ce442c80-fcde-4b79-b6f9-f8f25771dfd4","Type":"ContainerStarted","Data":"55883733b900a59fbc2dc174aec122fe62e15f7a59a7bd0d728da5621366fcba"} Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.319141 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lxmjt" event={"ID":"7b45ec2d-5bea-4007-a49f-224a866f93eb","Type":"ContainerStarted","Data":"c00868f87f1a3020a1db7e50377bbe90203d6e93799c8ec9f3100dd52a6c0cf6"} Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.319186 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lxmjt" event={"ID":"7b45ec2d-5bea-4007-a49f-224a866f93eb","Type":"ContainerStarted","Data":"79aff25c4327759d0e329f6015857119c7117de60803dbcc3591933e4db98a21"} Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.321455 4720 generic.go:334] "Generic (PLEG): container finished" podID="8064812e-b6aa-4f56-81c9-16154c00abad" containerID="be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a" exitCode=0 Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.321508 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" event={"ID":"8064812e-b6aa-4f56-81c9-16154c00abad","Type":"ContainerDied","Data":"be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a"} Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.321599 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" event={"ID":"8064812e-b6aa-4f56-81c9-16154c00abad","Type":"ContainerStarted","Data":"54e24a19ce5ce94b646652618294cdf42d51c0f94f5972c9d9f4274575ff8d6d"} Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.323923 4720 generic.go:334] "Generic (PLEG): container finished" podID="2d7408e5-b529-4396-92b3-2fed275c3250" 
containerID="c62b3a707852ca08b9d89356832535447d4f46ec09ab4a94cb00a86b1c1c5b03" exitCode=0 Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.323990 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5rgfd" event={"ID":"2d7408e5-b529-4396-92b3-2fed275c3250","Type":"ContainerDied","Data":"c62b3a707852ca08b9d89356832535447d4f46ec09ab4a94cb00a86b1c1c5b03"} Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.324047 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5rgfd" event={"ID":"2d7408e5-b529-4396-92b3-2fed275c3250","Type":"ContainerStarted","Data":"e6da6e1f28572bf4984bb20824ccfde97b8d4940b13f03e9cebbe9998087a4ef"} Oct 13 17:24:41 crc kubenswrapper[4720]: E1013 17:24:41.334815 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a530de0f-daad-4050-9522-69c64a451e75\\\",\\\"systemUUID\\\":\\\"00bb7c43-79d6-45b5-bd02-4b71a0ba6837\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:41Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.340391 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.340433 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.340445 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.340462 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.340476 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:41Z","lastTransitionTime":"2025-10-13T17:24:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.347994 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8064812e-b6aa-4f56-81c9-16154c00abad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:41Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:41 crc kubenswrapper[4720]: E1013 17:24:41.356757 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae
669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a530de0f-daad-4050-9522-69c64a451e75\\\",\\\"systemUUID\\\":\\\"00bb7c43-79d6-45b5-bd02-4b71a0ba6837\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:41Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.366207 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.366242 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.366253 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.366270 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.366281 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:41Z","lastTransitionTime":"2025-10-13T17:24:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.368471 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea323a1a-c607-40cf-b61c-b7ff03399c45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88b8a46e2e2c2342bd22a4a9b42cadd0188ec83f23815773f6ffb46be79fdf83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7d875b0d7eb9f13bc697597a33f7196598ae679980b13842e92acf20477526f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e18d0e9df43d366a6e7088cc3d5bdc794882828f60f008eaf29e788e0141ac2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef2ba92cea782dd4fd31a511e1086777b58c00340014dff74ff6615b07bd10c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7977e20d8558810fb9f8f827830d0378de7d15c71340d396a95b506902c0962\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-13T17:24:34Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1013 17:24:28.736714 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1013 17:24:28.738546 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445610647/tls.crt::/tmp/serving-cert-445610647/tls.key\\\\\\\"\\\\nI1013 17:24:34.860768 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1013 17:24:34.864611 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1013 17:24:34.864632 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1013 17:24:34.864654 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1013 17:24:34.864660 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1013 17:24:34.871230 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1013 17:24:34.871248 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1013 17:24:34.871273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 17:24:34.871290 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 17:24:34.871305 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1013 17:24:34.871314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1013 17:24:34.871324 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1013 17:24:34.871335 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1013 17:24:34.872239 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd536f783e3f8f974f25772994c3d00a01c2ba66997a6e06236f1b83b890f8f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00ad2086cbe25df22e52552acfb410e252eb17a610267b47eb7cfc5dced1c9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00ad2086cbe25df22e52552acfb410e252eb17a610267b47eb7cfc5dced1c9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:41Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:41 crc kubenswrapper[4720]: E1013 17:24:41.381173 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a530de0f-daad-4050-9522-69c64a451e75\\\",\\\"systemUUID\\\":\\\"00bb7c43-79d6-45b5-bd02-4b71a0ba6837\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:41Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:41 crc kubenswrapper[4720]: E1013 17:24:41.381518 4720 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.382836 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.382886 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.382903 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.382927 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.382945 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:41Z","lastTransitionTime":"2025-10-13T17:24:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.384479 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lxmjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b45ec2d-5bea-4007-a49f-224a866f93eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xsrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\
\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lxmjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:41Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.399602 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:41Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.414998 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://705766b44d481300d0fd8cc654aa2dc9046733e5a71ad3a7ab789f85e737dcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a69b8825692cbbb7cce0a52b1ccb7b137ce6c9f8a8d788fff95bb7228cb40e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:41Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.431844 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f85e4aab30e7f49ef05e48cc22fc22e0291e32604a8f74ae1cc2ea57b5889e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:41Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.446485 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f856022da70859465022db433585bd8cad12f8cd2aeae7943491d3f7e5049256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:41Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.459013 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pmjlm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f85220d6-f940-4538-806a-2b26bacd7b09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6315723f6cd375a392c8c427a737021b7e01415c902514b09ee43407749caa34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7795z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:39Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pmjlm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:41Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.478335 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7baf94d8-250c-4568-ab1c-a1429b8caf70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435ba16189d6d19892aaab158512c1a84a72103779e3cc4b2c634de344583c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e566f3b66d404dd8da93bba1745b9e67cd63532330591f5b735f6826c07eae2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ec64662f56c2609ebac1383a78840ba4e0d9fb1fe10e666169285d3852e6b40\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://32d8f079c2674e0c6bea1837755db23bc3205da6a01ac1896d6293226d9c6fb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:41Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.485552 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.485584 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.485595 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.485620 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.485632 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:41Z","lastTransitionTime":"2025-10-13T17:24:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.491842 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce442c80-fcde-4b79-b6f9-f8f25771dfd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10e7a946185d6502e37a1ab60ad1eca36cce991f230188d22b2d6e2ea554f920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4cjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://864e330267c8cec8487e1dcf4a50bbb2ba37d81d80361f0873e6bf69de1ed721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4cjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-htwnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:41Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.506112 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5rgfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d7408e5-b529-4396-92b3-2fed275c3250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec
8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5rgfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:41Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.518369 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:41Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.530997 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:41Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.547941 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a1f1696-ed7f-4482-a300-ca25b68e09e3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2507a30017fbd9947ca456b84812f0b26baabf572768870f5a34b3c7699443cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc60672e282de4d3d6a2d8fd64306ec0fc0d032ade655f88afb0c0b9e0a8e589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9cf903094dae4e559bcef3e269240670b5d4693344d600779c394ce1e039b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06eea6556daedf913d87b8b0527b337070528b07cc026de878b4df332598a2d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b452d113d48729b6d4ab59f80138b15e7cb16cc16eef4ec9916901a067986d07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94a4c50dd473ea159477a3032070031c70d5a9217c054f826e3c0af6ae4df563\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\
\\":{\\\"containerID\\\":\\\"cri-o://94a4c50dd473ea159477a3032070031c70d5a9217c054f826e3c0af6ae4df563\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41221277cc046beb4578e4f6a0c7030df64342da1be667a920080a429836c657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41221277cc046beb4578e4f6a0c7030df64342da1be667a920080a429836c657\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d443225a5db6a200176bd16ba94dd92c1e81eef85d61cff623cfe7959077ef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d443225a5db6a200176bd16ba94dd92c1e81eef85d61cff623cfe7959077ef6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:41Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.558316 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pmjlm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f85220d6-f940-4538-806a-2b26bacd7b09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6315723f6cd375a392c8c427a737021b7e01415c902514b09ee43407749caa34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7795z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:39Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pmjlm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:41Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.570081 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7baf94d8-250c-4568-ab1c-a1429b8caf70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435ba16189d6d19892aaab158512c1a84a72103779e3cc4b2c634de344583c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e566f3b66d404dd8da93bba1745b9e67cd63532330591f5b735f6826c07eae2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ec64662f56c2609ebac1383a78840ba4e0d9fb1fe10e666169285d3852e6b40\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://32d8f079c2674e0c6bea1837755db23bc3205da6a01ac1896d6293226d9c6fb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:41Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.588093 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.588140 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.588149 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.588163 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.588175 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:41Z","lastTransitionTime":"2025-10-13T17:24:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.589384 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:41Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.600724 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://705766b44d481300d0fd8cc654aa2dc9046733e5a71ad3a7ab789f85e737dcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a69b8825692cbbb7cce0a52b1ccb7b137ce6c9f8a8d788fff95bb7228cb40e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:41Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.612657 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f85e4aab30e7f49ef05e48cc22fc22e0291e32604a8f74ae1cc2ea57b5889e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:41Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.626104 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f856022da70859465022db433585bd8cad12f8cd2aeae7943491d3f7e5049256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:41Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.637547 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce442c80-fcde-4b79-b6f9-f8f25771dfd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10e7a946185d6502e37a1ab60ad1eca36cce991f230188d22b2d6e2ea554f920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4cjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://864e330267c8cec8487e1dcf4a50bbb2ba37d81d80361f0873e6bf69de1ed721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4cjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-htwnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:41Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.655033 4720 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5rgfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d7408e5-b529-4396-92b3-2fed275c3250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c62b3a707852ca08b9d89356832535447d4f46ec09ab4a94cb00a86b1c1c5b03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c62b3a707852ca08b9d89356832535447d4f46ec09ab4a94cb00a86b1c1c5b03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-ap
i-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5rgfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:41Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.674123 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a1f1696-ed7f-4482-a300-ca25b68e09e3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2507a30017fbd9947ca456b84812f0b26baabf572768870f5a34b3c7699443cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc60672e282de4d3d6a2d8fd64306ec0fc0d032ade655f88afb0c0b9e0a8e589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6
a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9cf903094dae4e559bcef3e269240670b5d4693344d600779c394ce1e039b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06eea6556daedf913d87b8b0527b337070528b07cc026de878b4df332598a2d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b452d113d48729b6d4ab59f80138b15e7cb16cc16eef4ec9916901a067986d07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94a4c50dd473ea159477a3032070031c70d5a9217c054f826e3c0af6ae4df563\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageI
D\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94a4c50dd473ea159477a3032070031c70d5a9217c054f826e3c0af6ae4df563\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41221277cc046beb4578e4f6a0c7030df64342da1be667a920080a429836c657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41221277cc046beb4578e4f6a0c7030df64342da1be667a920080a429836c657\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d443225a5db6a200176bd16ba94dd92c1e81eef85d61cff623cfe7959077ef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d443225a5db6a200176bd16ba94dd92c1e81eef85d61cff623cfe7959077ef6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:41Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.688360 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:41Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.692597 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.692624 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.692637 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.692653 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.692662 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:41Z","lastTransitionTime":"2025-10-13T17:24:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.705900 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:41Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.735104 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea323a1a-c607-40cf-b61c-b7ff03399c45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88b8a46e2e2c2342bd22a4a9b42cadd0188ec83f23815773f6ffb46be79fdf83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7d875b0d7eb9f13bc697597a33f7196598ae679980b13842e92acf20477526f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e18d0e9df43d366a6e7088cc3d5bdc794882828f60f008eaf29e788e0141ac2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef2ba92cea782dd4fd31a511e1086777b58c00340014dff74ff6615b07bd10c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://b7977e20d8558810fb9f8f827830d0378de7d15c71340d396a95b506902c0962\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-13T17:24:34Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1013 17:24:28.736714 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1013 17:24:28.738546 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445610647/tls.crt::/tmp/serving-cert-445610647/tls.key\\\\\\\"\\\\nI1013 17:24:34.860768 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1013 17:24:34.864611 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1013 17:24:34.864632 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1013 17:24:34.864654 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1013 17:24:34.864660 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1013 17:24:34.871230 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1013 17:24:34.871248 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1013 17:24:34.871273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 17:24:34.871290 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 17:24:34.871305 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1013 17:24:34.871314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1013 17:24:34.871324 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1013 17:24:34.871335 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1013 17:24:34.872239 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd536f783e3f8f974f25772994c3d00a01c2ba66997a6e06236f1b83b890f8f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00ad2086cbe25df22e52552acfb410e252eb17a610267b47eb7cfc5dced1c9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00ad2086cbe25df22e52552acfb410e252eb17a610267b47eb7cfc5dced1c9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:41Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.747619 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lxmjt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b45ec2d-5bea-4007-a49f-224a866f93eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00868f87f1a3020a1db7e50377bbe90203d6e93799c8ec9f3100dd52a6c0cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xsrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lxmjt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:41Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.753595 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-bvtrx"] Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.753992 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-bvtrx" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.755466 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.755764 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.757133 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.757405 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.777746 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8064812e-b6aa-4f56-81c9-16154c00abad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:41Z 
is after 2025-08-24T17:21:41Z" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.792606 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:41Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.795373 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.795428 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.795447 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.795469 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.795486 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:41Z","lastTransitionTime":"2025-10-13T17:24:41Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.816077 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a1f1696-ed7f-4482-a300-ca25b68e09e3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2507a30017fbd9947ca456b84812f0b26baabf572768870f5a34b3c7699443cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc60672e282de4d3d6a2d8fd64306ec0fc0d032ade655f88afb0c0b9e0a8e589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9cf903094dae4e559bcef3e269240670b5d4693344d600779c394ce1e039b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06eea6556daedf913d87b8b0527b337070528b07cc026de878b4df332598a2d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b452d113d48729b6d4ab59f80138b15e7cb16cc16eef4ec9916901a067986d07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94a4c50dd473ea159477a3032070031c70d5a9217c054f826e3c0af6ae4df563\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94a4c50dd473ea159477a3032070031c70d5a9217c054f826e3c0af6ae4df563\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41221277cc046beb4578e4f6a0c7030df64342da1be667a920080a429836c657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"s
tate\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41221277cc046beb4578e4f6a0c7030df64342da1be667a920080a429836c657\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d443225a5db6a200176bd16ba94dd92c1e81eef85d61cff623cfe7959077ef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d443225a5db6a200176bd16ba94dd92c1e81eef85d61cff623cfe7959077ef6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:41Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.829168 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/b35c333b-0d4e-4d9a-a8fe-a1c6f5fbbf75-serviceca\") pod \"node-ca-bvtrx\" (UID: \"b35c333b-0d4e-4d9a-a8fe-a1c6f5fbbf75\") " pod="openshift-image-registry/node-ca-bvtrx" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.829292 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b35c333b-0d4e-4d9a-a8fe-a1c6f5fbbf75-host\") pod \"node-ca-bvtrx\" (UID: \"b35c333b-0d4e-4d9a-a8fe-a1c6f5fbbf75\") " pod="openshift-image-registry/node-ca-bvtrx" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.829321 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4g97k\" (UniqueName: \"kubernetes.io/projected/b35c333b-0d4e-4d9a-a8fe-a1c6f5fbbf75-kube-api-access-4g97k\") pod \"node-ca-bvtrx\" (UID: \"b35c333b-0d4e-4d9a-a8fe-a1c6f5fbbf75\") " pod="openshift-image-registry/node-ca-bvtrx" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.829392 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:41Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.840268 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bvtrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b35c333b-0d4e-4d9a-a8fe-a1c6f5fbbf75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g97k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bvtrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:41Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.854401 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea323a1a-c607-40cf-b61c-b7ff03399c45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88b8a46e2e2c2342bd22a4a9b42cadd0188ec83f23815773f6ffb46be79fdf83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7d875b0d7eb9f13bc697597a33f7196598ae679980b13842e92acf20477526f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e18d0e9df43d366a6e7088cc3d5bdc794882828f60f008eaf29e788e0141ac2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef2ba92cea782dd4fd31a511e1086777b58c00340014dff74ff6615b07bd10c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7977e20d8558810fb9f8f827830d0378de7d15c71340d396a95b506902c0962\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-13T17:24:34Z\\\",\\\"message\\\":\\\"lling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1013 17:24:28.736714 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1013 17:24:28.738546 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445610647/tls.crt::/tmp/serving-cert-445610647/tls.key\\\\\\\"\\\\nI1013 17:24:34.860768 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1013 17:24:34.864611 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1013 17:24:34.864632 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1013 17:24:34.864654 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1013 17:24:34.864660 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1013 17:24:34.871230 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1013 17:24:34.871248 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1013 17:24:34.871273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 17:24:34.871290 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 17:24:34.871305 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1013 17:24:34.871314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1013 17:24:34.871324 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1013 17:24:34.871335 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1013 17:24:34.872239 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd536f783e3f8f974f25772994c3d00a01c2ba66997a6e06236f1b83b890f8f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00ad2086cbe25df22e52552acfb410e252eb17a610267b47eb7cfc5dced1c9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00ad2086cbe25df22e52552acfb410e252eb17a610267b47eb7cfc5dced1c9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:41Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.868779 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lxmjt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b45ec2d-5bea-4007-a49f-224a866f93eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00868f87f1a3020a1db7e50377bbe90203d6e93799c8ec9f3100dd52a6c0cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xsrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lxmjt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:41Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.885875 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8064812e-b6aa-4f56-81c9-16154c00abad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\"
,\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:41Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.895965 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://705766b44d481300d0fd8cc654aa2dc9046733e5a71ad3a7ab789f85e737dcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a69b8825692cbbb7cce0a52b1ccb7b137ce6c9f8a8d788fff95bb7228cb40e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:41Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.898003 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.898029 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.898038 4720 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.898051 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.898061 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:41Z","lastTransitionTime":"2025-10-13T17:24:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.905850 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f85e4aab30e7f49ef05e48cc22fc22e0291e32604a8f74ae1cc2ea57b5889e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:41Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.918103 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f856022da70859465022db433585bd8cad12f8cd2aeae7943491d3f7e5049256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:41Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.927285 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pmjlm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f85220d6-f940-4538-806a-2b26bacd7b09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6315723f6cd375a392c8c427a737021b7e01415c902514b09ee43407749caa34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7795z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:39Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pmjlm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:41Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.929971 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/b35c333b-0d4e-4d9a-a8fe-a1c6f5fbbf75-serviceca\") pod \"node-ca-bvtrx\" (UID: \"b35c333b-0d4e-4d9a-a8fe-a1c6f5fbbf75\") " pod="openshift-image-registry/node-ca-bvtrx" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.930015 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b35c333b-0d4e-4d9a-a8fe-a1c6f5fbbf75-host\") pod \"node-ca-bvtrx\" (UID: \"b35c333b-0d4e-4d9a-a8fe-a1c6f5fbbf75\") " pod="openshift-image-registry/node-ca-bvtrx" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.930034 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4g97k\" (UniqueName: \"kubernetes.io/projected/b35c333b-0d4e-4d9a-a8fe-a1c6f5fbbf75-kube-api-access-4g97k\") pod \"node-ca-bvtrx\" (UID: \"b35c333b-0d4e-4d9a-a8fe-a1c6f5fbbf75\") " 
pod="openshift-image-registry/node-ca-bvtrx" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.930260 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b35c333b-0d4e-4d9a-a8fe-a1c6f5fbbf75-host\") pod \"node-ca-bvtrx\" (UID: \"b35c333b-0d4e-4d9a-a8fe-a1c6f5fbbf75\") " pod="openshift-image-registry/node-ca-bvtrx" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.930851 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/b35c333b-0d4e-4d9a-a8fe-a1c6f5fbbf75-serviceca\") pod \"node-ca-bvtrx\" (UID: \"b35c333b-0d4e-4d9a-a8fe-a1c6f5fbbf75\") " pod="openshift-image-registry/node-ca-bvtrx" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.938603 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7baf94d8-250c-4568-ab1c-a1429b8caf70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435ba16189d6d19892aaab158512c1a84a72103779e3cc4b2c634de344583c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e566f3b66d404dd8da93bba1745b9e67cd63532330591f5b735f6826c07eae2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ec64662f56c2609ebac1383a78840ba4e0d9fb1fe10e6
66169285d3852e6b40\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://32d8f079c2674e0c6bea1837755db23bc3205da6a01ac1896d6293226d9c6fb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:41Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.945178 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4g97k\" (UniqueName: \"kubernetes.io/projected/b35c333b-0d4e-4d9a-a8fe-a1c6f5fbbf75-kube-api-access-4g97k\") pod \"node-ca-bvtrx\" (UID: \"b35c333b-0d4e-4d9a-a8fe-a1c6f5fbbf75\") " pod="openshift-image-registry/node-ca-bvtrx" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.949873 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:41Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.959131 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce442c80-fcde-4b79-b6f9-f8f25771dfd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10e7a946185d6502e37a1ab60ad1eca36cce991f230188d22b2d6e2ea554f920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4cjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://864e330267c8cec8487e1dcf4a50bbb2ba37d81d80361f0873e6bf69de1ed721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4cjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-htwnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:41Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:41 crc kubenswrapper[4720]: I1013 17:24:41.971200 4720 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5rgfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d7408e5-b529-4396-92b3-2fed275c3250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c62b3a707852ca08b9d89356832535447d4f46ec09ab4a94cb00a86b1c1c5b03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c62b3a707852ca08b9d89356832535447d4f46ec09ab4a94cb00a86b1c1c5b03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-ap
i-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5rgfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:41Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:42 crc kubenswrapper[4720]: I1013 17:24:42.000099 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:42 crc kubenswrapper[4720]: I1013 17:24:42.000137 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:42 crc kubenswrapper[4720]: I1013 17:24:42.000146 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:42 crc kubenswrapper[4720]: I1013 17:24:42.000160 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:42 crc kubenswrapper[4720]: I1013 17:24:42.000169 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:42Z","lastTransitionTime":"2025-10-13T17:24:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
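
Note: the "Node became not ready" / KubeletNotReady records above will keep repeating until a CNI network config appears in /etc/kubernetes/cni/net.d/ — on this cluster ovnkube-node writes it once it is up, and its containers are only just starting a few records below. The condition kubelet reports here amounts to "no config file found in the CNI conf dir". A minimal Go sketch of that probe, for illustration only — the directory and the *.conf/*.conflist/*.json patterns are assumptions, not the real cri-o/libcni loader:

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    // cniConfPresent reports whether any CNI network config exists in dir.
    // Illustrative only: it mirrors the condition logged above
    // ("no CNI configuration file in /etc/kubernetes/cni/net.d/"),
    // not the actual cri-o/libcni implementation.
    func cniConfPresent(dir string) bool {
    	for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
    		if matches, _ := filepath.Glob(filepath.Join(dir, pat)); len(matches) > 0 {
    			return true
    		}
    	}
    	return false
    }

    func main() {
    	dir := "/etc/kubernetes/cni/net.d"
    	if !cniConfPresent(dir) {
    		fmt.Fprintf(os.Stderr, "network not ready: no CNI config in %s\n", dir)
    		os.Exit(1)
    	}
    	fmt.Println("CNI config present")
    }
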
Has your network provider started?"} Oct 13 17:24:42 crc kubenswrapper[4720]: I1013 17:24:42.102238 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:42 crc kubenswrapper[4720]: I1013 17:24:42.102301 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:42 crc kubenswrapper[4720]: I1013 17:24:42.102338 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:42 crc kubenswrapper[4720]: I1013 17:24:42.102357 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:42 crc kubenswrapper[4720]: I1013 17:24:42.102369 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:42Z","lastTransitionTime":"2025-10-13T17:24:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:42 crc kubenswrapper[4720]: I1013 17:24:42.205173 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:42 crc kubenswrapper[4720]: I1013 17:24:42.205464 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:42 crc kubenswrapper[4720]: I1013 17:24:42.205476 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:42 crc kubenswrapper[4720]: I1013 17:24:42.205496 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:42 crc kubenswrapper[4720]: I1013 17:24:42.205508 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:42Z","lastTransitionTime":"2025-10-13T17:24:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:42 crc kubenswrapper[4720]: I1013 17:24:42.260880 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-bvtrx" Oct 13 17:24:42 crc kubenswrapper[4720]: W1013 17:24:42.289051 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb35c333b_0d4e_4d9a_a8fe_a1c6f5fbbf75.slice/crio-48c4ded95e0ea4a2979c96187c7e631574d4d8bae93b46c2494f92a38342b376 WatchSource:0}: Error finding container 48c4ded95e0ea4a2979c96187c7e631574d4d8bae93b46c2494f92a38342b376: Status 404 returned error can't find the container with id 48c4ded95e0ea4a2979c96187c7e631574d4d8bae93b46c2494f92a38342b376 Oct 13 17:24:42 crc kubenswrapper[4720]: I1013 17:24:42.308393 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:42 crc kubenswrapper[4720]: I1013 17:24:42.308450 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:42 crc kubenswrapper[4720]: I1013 17:24:42.308471 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:42 crc kubenswrapper[4720]: I1013 17:24:42.308496 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:42 crc kubenswrapper[4720]: I1013 17:24:42.308516 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:42Z","lastTransitionTime":"2025-10-13T17:24:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:42 crc kubenswrapper[4720]: I1013 17:24:42.330807 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" event={"ID":"8064812e-b6aa-4f56-81c9-16154c00abad","Type":"ContainerStarted","Data":"7bbd41d9b0518c8b060b1d01c335b6c9923e7196624fbd49326133faaa2e36ba"} Oct 13 17:24:42 crc kubenswrapper[4720]: I1013 17:24:42.330872 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" event={"ID":"8064812e-b6aa-4f56-81c9-16154c00abad","Type":"ContainerStarted","Data":"5c28fde07efa17d20ddecd1c8a0da09a6fa09e98a90b14718565c00aef9c0d25"} Oct 13 17:24:42 crc kubenswrapper[4720]: I1013 17:24:42.330894 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" event={"ID":"8064812e-b6aa-4f56-81c9-16154c00abad","Type":"ContainerStarted","Data":"69d511a66a84fb0fa90f426bd4609c2b3ad915571802c6c024e0370d6e1683a3"} Oct 13 17:24:42 crc kubenswrapper[4720]: I1013 17:24:42.330914 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" event={"ID":"8064812e-b6aa-4f56-81c9-16154c00abad","Type":"ContainerStarted","Data":"1aefa46ee989416f0e4fd05f6f12031da637a354fadd7f6caef75ddc2c74ddf3"} Oct 13 17:24:42 crc kubenswrapper[4720]: I1013 17:24:42.334861 4720 generic.go:334] "Generic (PLEG): container finished" podID="2d7408e5-b529-4396-92b3-2fed275c3250" containerID="cb704e899e1b5a0193271209940163a17a1dcb93a2fc5d447beb8a081799897b" exitCode=0 Oct 13 17:24:42 crc kubenswrapper[4720]: I1013 17:24:42.334987 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5rgfd" 
event={"ID":"2d7408e5-b529-4396-92b3-2fed275c3250","Type":"ContainerDied","Data":"cb704e899e1b5a0193271209940163a17a1dcb93a2fc5d447beb8a081799897b"} Oct 13 17:24:42 crc kubenswrapper[4720]: I1013 17:24:42.336534 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-bvtrx" event={"ID":"b35c333b-0d4e-4d9a-a8fe-a1c6f5fbbf75","Type":"ContainerStarted","Data":"48c4ded95e0ea4a2979c96187c7e631574d4d8bae93b46c2494f92a38342b376"} Oct 13 17:24:42 crc kubenswrapper[4720]: I1013 17:24:42.359065 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7baf94d8-250c-4568-ab1c-a1429b8caf70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435ba16189d6d19892aaab158512c1a84a72103779e3cc4b2c634de344583c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e566f3b66d404dd8da93bba1745b9e67cd63532330591f5b735f6826c07eae2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ec64662f56c2609ebac1383a78840ba4e0d9fb1fe10e666169285d3852e6b40\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256
:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://32d8f079c2674e0c6bea1837755db23bc3205da6a01ac1896d6293226d9c6fb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:42Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:42 crc kubenswrapper[4720]: I1013 17:24:42.378654 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:42Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:42 crc kubenswrapper[4720]: I1013 17:24:42.398932 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://705766b44d481300d0fd8cc654aa2dc9046733e5a71ad3a7ab789f85e737dcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a69b8825692cbbb7cce0a52b1ccb7b137ce6c9f8a8d788fff95bb7228cb40e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:42Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:42 crc kubenswrapper[4720]: I1013 17:24:42.411248 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:42 crc kubenswrapper[4720]: I1013 17:24:42.411300 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:42 crc kubenswrapper[4720]: I1013 17:24:42.411313 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:42 crc kubenswrapper[4720]: I1013 17:24:42.411327 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:42 crc kubenswrapper[4720]: I1013 17:24:42.411338 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:42Z","lastTransitionTime":"2025-10-13T17:24:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:24:42 crc kubenswrapper[4720]: I1013 17:24:42.418525 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f85e4aab30e7f49ef05e48cc22fc22e0291e32604a8f74ae1cc2ea57b5889e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:42Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:42 crc kubenswrapper[4720]: I1013 17:24:42.435438 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f856022da70859465022db433585bd8cad12f8cd2aeae7943491d3f7e5049256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:42Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:42 crc kubenswrapper[4720]: I1013 17:24:42.450072 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pmjlm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f85220d6-f940-4538-806a-2b26bacd7b09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6315723f6cd375a392c8c427a737021b7e01415c902514b09ee43407749caa34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7795z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:39Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pmjlm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:42Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:42 crc kubenswrapper[4720]: I1013 17:24:42.469317 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce442c80-fcde-4b79-b6f9-f8f25771dfd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10e7a946185d6502e37a1ab60ad1eca36cce991f230188d22b2d6e2ea554f920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4cjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://864e330267c8cec8487e1dcf4a50bbb2ba37d81d80361f0873e6bf69de1ed721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4cjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-htwnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:42Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:42 crc kubenswrapper[4720]: I1013 17:24:42.502517 4720 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5rgfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d7408e5-b529-4396-92b3-2fed275c3250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c62b3a707852ca08b9d89356832535447d4f46ec09ab4a94cb00a86b1c1c5b03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c62b3a707852ca08b9d89356832535447d4f46ec09ab4a94cb00a86b1c1c5b03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"c
ontainerID\\\":\\\"cri-o://cb704e899e1b5a0193271209940163a17a1dcb93a2fc5d447beb8a081799897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb704e899e1b5a0193271209940163a17a1dcb93a2fc5d447beb8a081799897b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5rgfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:42Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:42 crc kubenswrapper[4720]: I1013 17:24:42.536846 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a1f1696-ed7f-4482-a300-ca25b68e09e3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2507a30017fbd9947ca456b84812f0b26baabf572768870f5a34b3c7699443cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc60672e282de4d3d6a2d8fd64306ec0fc0d032ade655f88afb0c0b9e0a8e589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9cf903094dae4e559bcef3e269240670b5d4693344d600779c394ce1e039b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06eea6556daedf913d87b8b0527b337070528b0
7cc026de878b4df332598a2d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b452d113d48729b6d4ab59f80138b15e7cb16cc16eef4ec9916901a067986d07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94a4c50dd473ea159477a3032070031c70d5a9217c054f826e3c0af6ae4df563\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94a4c50dd473ea159477a3032070031c70d5a9217c054f826e3c0af6ae4df563\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41221277cc046beb4578e4f6a0c7030df64342da1be667a920080a429836c657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41221277cc046beb4578e4f6a0c7030df64342da1be667a920080a429836c657\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d443225a5db6a200176bd16ba94dd92c1e81eef85d61cff623cfe7959077ef6\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d443225a5db6a200176bd16ba94dd92c1e81eef85d61cff623cfe7959077ef6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:42Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:42 crc kubenswrapper[4720]: I1013 17:24:42.540798 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:42 crc kubenswrapper[4720]: I1013 17:24:42.540823 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:42 crc kubenswrapper[4720]: I1013 17:24:42.540832 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:42 crc kubenswrapper[4720]: I1013 17:24:42.540843 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:42 crc kubenswrapper[4720]: I1013 17:24:42.540852 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:42Z","lastTransitionTime":"2025-10-13T17:24:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:24:42 crc kubenswrapper[4720]: I1013 17:24:42.556087 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:42Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:42 crc kubenswrapper[4720]: I1013 17:24:42.568456 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:42Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:42 crc kubenswrapper[4720]: I1013 17:24:42.583684 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea323a1a-c607-40cf-b61c-b7ff03399c45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88b8a46e2e2c2342bd22a4a9b42cadd0188ec83f23815773f6ffb46be79fdf83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7d875b0d7eb9f13bc697597a33f7196598ae679980b13842e92acf20477526f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e18d0e9df43d366a6e7088cc3d5bdc794882828f60f008eaf29e788e0141ac2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef2ba92cea782dd4fd31a511e1086777b58c00340014dff74ff6615b07bd10c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7977e20d8558810fb9f8f827830d0378de7d15c71340d396a95b506902c0962\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-13T17:24:34Z\\\",\\\"message\\\":\\\"lling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1013 17:24:28.736714 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1013 17:24:28.738546 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445610647/tls.crt::/tmp/serving-cert-445610647/tls.key\\\\\\\"\\\\nI1013 17:24:34.860768 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1013 17:24:34.864611 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1013 17:24:34.864632 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1013 17:24:34.864654 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1013 17:24:34.864660 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1013 17:24:34.871230 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1013 17:24:34.871248 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1013 17:24:34.871273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 17:24:34.871290 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 17:24:34.871305 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1013 17:24:34.871314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1013 17:24:34.871324 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1013 17:24:34.871335 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1013 17:24:34.872239 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd536f783e3f8f974f25772994c3d00a01c2ba66997a6e06236f1b83b890f8f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00ad2086cbe25df22e52552acfb410e252eb17a610267b47eb7cfc5dced1c9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00ad2086cbe25df22e52552acfb410e252eb17a610267b47eb7cfc5dced1c9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:42Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:42 crc kubenswrapper[4720]: I1013 17:24:42.596709 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lxmjt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b45ec2d-5bea-4007-a49f-224a866f93eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00868f87f1a3020a1db7e50377bbe90203d6e93799c8ec9f3100dd52a6c0cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xsrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lxmjt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:42Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:42 crc kubenswrapper[4720]: I1013 17:24:42.620430 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8064812e-b6aa-4f56-81c9-16154c00abad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\"
,\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:42Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:42 crc kubenswrapper[4720]: I1013 17:24:42.629917 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bvtrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b35c333b-0d4e-4d9a-a8fe-a1c6f5fbbf75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g97k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bvtrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:42Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:42 crc kubenswrapper[4720]: I1013 17:24:42.642712 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:42 crc kubenswrapper[4720]: I1013 17:24:42.642751 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:42 crc kubenswrapper[4720]: I1013 17:24:42.642762 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:42 crc kubenswrapper[4720]: I1013 17:24:42.642779 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:42 crc kubenswrapper[4720]: I1013 17:24:42.642791 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:42Z","lastTransitionTime":"2025-10-13T17:24:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:42 crc kubenswrapper[4720]: I1013 17:24:42.738244 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 17:24:42 crc kubenswrapper[4720]: E1013 17:24:42.738451 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 17:24:50.738399165 +0000 UTC m=+36.195649327 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 17:24:42 crc kubenswrapper[4720]: I1013 17:24:42.745854 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:42 crc kubenswrapper[4720]: I1013 17:24:42.745895 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:42 crc kubenswrapper[4720]: I1013 17:24:42.745908 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:42 crc kubenswrapper[4720]: I1013 17:24:42.745924 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:42 crc kubenswrapper[4720]: I1013 17:24:42.745935 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:42Z","lastTransitionTime":"2025-10-13T17:24:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:42 crc kubenswrapper[4720]: I1013 17:24:42.839847 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 17:24:42 crc kubenswrapper[4720]: I1013 17:24:42.839886 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 17:24:42 crc kubenswrapper[4720]: I1013 17:24:42.839914 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 17:24:42 crc kubenswrapper[4720]: I1013 17:24:42.839945 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 17:24:42 crc kubenswrapper[4720]: E1013 17:24:42.840017 4720 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 13 17:24:42 crc kubenswrapper[4720]: E1013 17:24:42.840045 4720 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 13 17:24:42 crc kubenswrapper[4720]: E1013 17:24:42.840060 4720 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 13 17:24:42 crc kubenswrapper[4720]: E1013 17:24:42.840058 4720 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 13 17:24:42 crc kubenswrapper[4720]: E1013 17:24:42.840077 4720 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 13 17:24:42 crc kubenswrapper[4720]: E1013 17:24:42.840092 4720 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 13 17:24:42 crc kubenswrapper[4720]: E1013 17:24:42.840317 4720 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 13 17:24:42 crc kubenswrapper[4720]: E1013 17:24:42.840332 4720 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 13 17:24:42 crc kubenswrapper[4720]: E1013 17:24:42.840137 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-13 17:24:50.84010913 +0000 UTC m=+36.297359262 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 13 17:24:42 crc kubenswrapper[4720]: E1013 17:24:42.840389 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-13 17:24:50.840372537 +0000 UTC m=+36.297622719 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 13 17:24:42 crc kubenswrapper[4720]: E1013 17:24:42.840414 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-13 17:24:50.840403488 +0000 UTC m=+36.297653740 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 13 17:24:42 crc kubenswrapper[4720]: E1013 17:24:42.840430 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-13 17:24:50.840423029 +0000 UTC m=+36.297673161 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 13 17:24:42 crc kubenswrapper[4720]: I1013 17:24:42.847902 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:42 crc kubenswrapper[4720]: I1013 17:24:42.847926 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:42 crc kubenswrapper[4720]: I1013 17:24:42.847934 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:42 crc kubenswrapper[4720]: I1013 17:24:42.847946 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:42 crc kubenswrapper[4720]: I1013 17:24:42.847955 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:42Z","lastTransitionTime":"2025-10-13T17:24:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:24:42 crc kubenswrapper[4720]: I1013 17:24:42.950567 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:42 crc kubenswrapper[4720]: I1013 17:24:42.950619 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:42 crc kubenswrapper[4720]: I1013 17:24:42.950638 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:42 crc kubenswrapper[4720]: I1013 17:24:42.950661 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:42 crc kubenswrapper[4720]: I1013 17:24:42.950687 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:42Z","lastTransitionTime":"2025-10-13T17:24:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:43 crc kubenswrapper[4720]: I1013 17:24:43.053707 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:43 crc kubenswrapper[4720]: I1013 17:24:43.053745 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:43 crc kubenswrapper[4720]: I1013 17:24:43.053755 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:43 crc kubenswrapper[4720]: I1013 17:24:43.053771 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:43 crc kubenswrapper[4720]: I1013 17:24:43.053781 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:43Z","lastTransitionTime":"2025-10-13T17:24:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:43 crc kubenswrapper[4720]: I1013 17:24:43.156305 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:43 crc kubenswrapper[4720]: I1013 17:24:43.156339 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:43 crc kubenswrapper[4720]: I1013 17:24:43.156348 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:43 crc kubenswrapper[4720]: I1013 17:24:43.156366 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:43 crc kubenswrapper[4720]: I1013 17:24:43.156378 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:43Z","lastTransitionTime":"2025-10-13T17:24:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:24:43 crc kubenswrapper[4720]: I1013 17:24:43.167433 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 17:24:43 crc kubenswrapper[4720]: I1013 17:24:43.167448 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 17:24:43 crc kubenswrapper[4720]: I1013 17:24:43.167464 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 17:24:43 crc kubenswrapper[4720]: E1013 17:24:43.167533 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 13 17:24:43 crc kubenswrapper[4720]: E1013 17:24:43.167679 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 13 17:24:43 crc kubenswrapper[4720]: E1013 17:24:43.167784 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 13 17:24:43 crc kubenswrapper[4720]: I1013 17:24:43.259119 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:43 crc kubenswrapper[4720]: I1013 17:24:43.259227 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:43 crc kubenswrapper[4720]: I1013 17:24:43.259253 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:43 crc kubenswrapper[4720]: I1013 17:24:43.259289 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:43 crc kubenswrapper[4720]: I1013 17:24:43.259313 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:43Z","lastTransitionTime":"2025-10-13T17:24:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:24:43 crc kubenswrapper[4720]: I1013 17:24:43.344374 4720 generic.go:334] "Generic (PLEG): container finished" podID="2d7408e5-b529-4396-92b3-2fed275c3250" containerID="f42ff97f0a5fb40b52d5445a00eb8f70f8028610b76baf33117b425b821b5ec2" exitCode=0 Oct 13 17:24:43 crc kubenswrapper[4720]: I1013 17:24:43.344479 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5rgfd" event={"ID":"2d7408e5-b529-4396-92b3-2fed275c3250","Type":"ContainerDied","Data":"f42ff97f0a5fb40b52d5445a00eb8f70f8028610b76baf33117b425b821b5ec2"} Oct 13 17:24:43 crc kubenswrapper[4720]: I1013 17:24:43.346934 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-bvtrx" event={"ID":"b35c333b-0d4e-4d9a-a8fe-a1c6f5fbbf75","Type":"ContainerStarted","Data":"03411ae3786161605ed0cb08a21661c7c7a4cc93f91320586547b8ffab3e90ce"} Oct 13 17:24:43 crc kubenswrapper[4720]: I1013 17:24:43.356074 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" event={"ID":"8064812e-b6aa-4f56-81c9-16154c00abad","Type":"ContainerStarted","Data":"fd8a4dfb390a1643edb4eb4bd01970eaffd8dab0f748effac9e7c691ef4a3ef8"} Oct 13 17:24:43 crc kubenswrapper[4720]: I1013 17:24:43.356131 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" event={"ID":"8064812e-b6aa-4f56-81c9-16154c00abad","Type":"ContainerStarted","Data":"013dfade47eccae4339fca0d379d914f63502613f87f1bc621113f7008634475"} Oct 13 17:24:43 crc kubenswrapper[4720]: I1013 17:24:43.361966 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:43 crc kubenswrapper[4720]: I1013 17:24:43.362016 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:43 crc kubenswrapper[4720]: I1013 17:24:43.362034 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:43 crc kubenswrapper[4720]: I1013 17:24:43.362057 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:43 crc kubenswrapper[4720]: I1013 17:24:43.362075 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:43Z","lastTransitionTime":"2025-10-13T17:24:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:24:43 crc kubenswrapper[4720]: I1013 17:24:43.364859 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:43Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:43 crc kubenswrapper[4720]: I1013 17:24:43.382374 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:43Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:43 crc kubenswrapper[4720]: I1013 17:24:43.422026 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a1f1696-ed7f-4482-a300-ca25b68e09e3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2507a30017fbd9947ca456b84812f0b26baabf572768870f5a34b3c7699443cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\
\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc60672e282de4d3d6a2d8fd64306ec0fc0d032ade655f88afb0c0b9e0a8e589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9cf903094dae4e559bcef3e269240670b5d4693344d600779c394ce1e039b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06eea6556daedf913d87b8b0527b337070528b07cc026de878b4df332598a2d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b452d113d48729b6d4ab59f80138b15e7cb16cc16eef4ec9916901a067986d07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cr
i-o://94a4c50dd473ea159477a3032070031c70d5a9217c054f826e3c0af6ae4df563\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94a4c50dd473ea159477a3032070031c70d5a9217c054f826e3c0af6ae4df563\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41221277cc046beb4578e4f6a0c7030df64342da1be667a920080a429836c657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41221277cc046beb4578e4f6a0c7030df64342da1be667a920080a429836c657\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d443225a5db6a200176bd16ba94dd92c1e81eef85d61cff623cfe7959077ef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d443225a5db6a200176bd16ba94dd92c1e81eef85d61cff623cfe7959077ef6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:43Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:43 crc kubenswrapper[4720]: I1013 17:24:43.454562 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8064812e-b6aa-4f56-81c9-16154c00abad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:43Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:43 crc kubenswrapper[4720]: I1013 17:24:43.465294 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:43 crc kubenswrapper[4720]: I1013 17:24:43.465368 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:43 crc kubenswrapper[4720]: I1013 17:24:43.465392 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:43 crc kubenswrapper[4720]: I1013 17:24:43.465422 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:43 crc kubenswrapper[4720]: I1013 17:24:43.465447 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:43Z","lastTransitionTime":"2025-10-13T17:24:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:24:43 crc kubenswrapper[4720]: I1013 17:24:43.471108 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bvtrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b35c333b-0d4e-4d9a-a8fe-a1c6f5fbbf75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g97k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bvtrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:43Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:43 crc kubenswrapper[4720]: I1013 17:24:43.497933 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea323a1a-c607-40cf-b61c-b7ff03399c45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88b8a46e2e2c2342bd22a4a9b42cadd0188ec83f23815773f6ffb46be79fdf83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7d875b0d7eb9f13bc697597a33f7196598ae679980b13842e92acf20477526f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e18d0e9df43d366a6e7088cc3d5bdc794882828f60f008eaf29e788e0141ac2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef2ba92cea782dd4fd31a511e1086777b58c00340014dff74ff6615b07bd10c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://b7977e20d8558810fb9f8f827830d0378de7d15c71340d396a95b506902c0962\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-13T17:24:34Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1013 17:24:28.736714 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1013 17:24:28.738546 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445610647/tls.crt::/tmp/serving-cert-445610647/tls.key\\\\\\\"\\\\nI1013 17:24:34.860768 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1013 17:24:34.864611 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1013 17:24:34.864632 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1013 17:24:34.864654 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1013 17:24:34.864660 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1013 17:24:34.871230 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1013 17:24:34.871248 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1013 17:24:34.871273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 17:24:34.871290 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 17:24:34.871305 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1013 17:24:34.871314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1013 17:24:34.871324 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1013 17:24:34.871335 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1013 17:24:34.872239 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd536f783e3f8f974f25772994c3d00a01c2ba66997a6e06236f1b83b890f8f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00ad2086cbe25df22e52552acfb410e252eb17a610267b47eb7cfc5dced1c9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00ad2086cbe25df22e52552acfb410e252eb17a610267b47eb7cfc5dced1c9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:43Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:43 crc kubenswrapper[4720]: I1013 17:24:43.518454 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lxmjt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b45ec2d-5bea-4007-a49f-224a866f93eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00868f87f1a3020a1db7e50377bbe90203d6e93799c8ec9f3100dd52a6c0cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xsrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lxmjt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:43Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:43 crc kubenswrapper[4720]: I1013 17:24:43.533612 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:43Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:43 crc kubenswrapper[4720]: I1013 17:24:43.552930 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://705766b44d481300d0fd8cc654aa2dc9046733e5a71ad3a7ab789f85e737dcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a69b8825692cbbb7cce0a52b1ccb7b137ce6c9f8a8d788fff95bb7228cb40e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:43Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:43 crc kubenswrapper[4720]: I1013 17:24:43.570117 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f85e4aab30e7f49ef05e48cc22fc22e0291e32604a8f74ae1cc2ea57b5889e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:43Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:43 crc kubenswrapper[4720]: I1013 17:24:43.571692 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:43 crc kubenswrapper[4720]: I1013 17:24:43.571749 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:43 crc kubenswrapper[4720]: I1013 17:24:43.571768 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:43 crc kubenswrapper[4720]: I1013 17:24:43.571793 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:43 crc kubenswrapper[4720]: I1013 17:24:43.571811 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:43Z","lastTransitionTime":"2025-10-13T17:24:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:24:43 crc kubenswrapper[4720]: I1013 17:24:43.589415 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f856022da70859465022db433585bd8cad12f8cd2aeae7943491d3f7e5049256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:43Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:43 crc kubenswrapper[4720]: I1013 17:24:43.602679 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pmjlm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f85220d6-f940-4538-806a-2b26bacd7b09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6315723f6cd375a392c8c427a737021b7e01415c902514b09ee43407749caa34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7795z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:39Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pmjlm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:43Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:43 crc kubenswrapper[4720]: I1013 17:24:43.618525 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7baf94d8-250c-4568-ab1c-a1429b8caf70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435ba16189d6d19892aaab158512c1a84a72103779e3cc4b2c634de344583c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e566f3b66d404dd8da93bba1745b9e67cd63532330591f5b735f6826c07eae2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ec64662f56c2609ebac1383a78840ba4e0d9fb1fe10e666169285d3852e6b40\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://32d8f079c2674e0c6bea1837755db23bc3205da6a01ac1896d6293226d9c6fb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:43Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:43 crc kubenswrapper[4720]: I1013 17:24:43.631681 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce442c80-fcde-4b79-b6f9-f8f25771dfd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10e7a946185d6502e37a1ab60ad1eca36cce991f230188d22b2d6e2ea554f920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4cjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://864e330267c8cec848
7e1dcf4a50bbb2ba37d81d80361f0873e6bf69de1ed721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4cjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-htwnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:43Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:43 crc kubenswrapper[4720]: I1013 17:24:43.648219 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5rgfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d7408e5-b529-4396-92b3-2fed275c3250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c62b3a707852ca08b9d89356832535447d4f46ec09ab4a94cb00a86b1c1c5b03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c62b3a707852ca08b9d89356832535447d4f46ec09ab4a94cb00a86b1c1c5b03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb704e899e1b5a0193271209940163a17a1dcb93a2fc5d447beb8a081799897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb704e899e1b5a0193271209940163a17a1dcb93a2fc5d447beb8a081799897b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f42ff97f0a5fb40b52d5445a00eb8f70f8028610b76baf33117b425b821b5ec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f42ff97f0a5fb40b52d5445a00eb8f70f8028610b76baf33117b425b821b5ec2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5rgfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:43Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:43 crc kubenswrapper[4720]: I1013 17:24:43.671604 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lxmjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b45ec2d-5bea-4007-a49f-224a866f93eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00868f87f1a3020a1db7e50377bbe90203d6e93799c8ec9f3100dd52a6c0cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"
/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xsrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lxmjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:43Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:43 crc kubenswrapper[4720]: I1013 17:24:43.674964 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:43 crc kubenswrapper[4720]: I1013 17:24:43.675013 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:43 crc kubenswrapper[4720]: I1013 17:24:43.675030 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:43 crc kubenswrapper[4720]: I1013 17:24:43.675053 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:43 crc kubenswrapper[4720]: I1013 17:24:43.675070 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:43Z","lastTransitionTime":"2025-10-13T17:24:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:24:43 crc kubenswrapper[4720]: I1013 17:24:43.704264 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8064812e-b6aa-4f56-81c9-16154c00abad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f647
67a134d0501a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:43Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:43 crc kubenswrapper[4720]: I1013 17:24:43.718155 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bvtrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b35c333b-0d4e-4d9a-a8fe-a1c6f5fbbf75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03411ae3786161605ed0cb08a21661c7c7a4cc93f91320586547b8ffab3e90ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g97k\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bvtrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:43Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:43 crc kubenswrapper[4720]: I1013 17:24:43.743719 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea323a1a-c607-40cf-b61c-b7ff03399c45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88b8a46e2e2c2342bd22a4a9b42cadd0188ec83f23815773f6ffb46be79fdf83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7d875b0d7eb9f13bc697597a33f7196598ae679980b13842e92acf20477526f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:2
4:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e18d0e9df43d366a6e7088cc3d5bdc794882828f60f008eaf29e788e0141ac2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef2ba92cea782dd4fd31a511e1086777b58c00340014dff74ff6615b07bd10c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7977e20d8558810fb9f8f827830d0378de7d15c71340d396a95b506902c0962\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-13T17:24:34Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1013 17:24:28.736714 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1013 17:24:28.738546 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445610647/tls.crt::/tmp/serving-cert-445610647/tls.key\\\\\\\"\\\\nI1013 17:24:34.860768 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1013 17:24:34.864611 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1013 17:24:34.864632 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1013 17:24:34.864654 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1013 17:24:34.864660 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1013 17:24:34.871230 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1013 17:24:34.871248 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1013 17:24:34.871273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 17:24:34.871290 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 17:24:34.871305 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1013 17:24:34.871314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1013 17:24:34.871324 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1013 
17:24:34.871335 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1013 17:24:34.872239 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd536f783e3f8f974f25772994c3d00a01c2ba66997a6e06236f1b83b890f8f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00ad2086cbe25df22e52552acfb410e252eb17a610267b47eb7cfc5dced1c9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00ad2086cbe25df22e52552acfb410e252eb17a610267b47eb7cfc5dced1c9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:43Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:43 crc kubenswrapper[4720]: I1013 17:24:43.763413 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:43Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:43 crc kubenswrapper[4720]: I1013 17:24:43.778677 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:43 crc kubenswrapper[4720]: I1013 17:24:43.778742 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:43 crc kubenswrapper[4720]: I1013 17:24:43.778764 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:43 crc kubenswrapper[4720]: I1013 17:24:43.778792 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:43 crc kubenswrapper[4720]: I1013 17:24:43.778814 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:43Z","lastTransitionTime":"2025-10-13T17:24:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:24:43 crc kubenswrapper[4720]: I1013 17:24:43.783138 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://705766b44d481300d0fd8cc654aa2dc9046733e5a71ad3a7ab789f85e737dcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a69b8825692cbbb7cce0a52b1ccb7b137ce6c9f8a8d788fff95bb7228cb40e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:43Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:43 crc kubenswrapper[4720]: I1013 17:24:43.800989 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f85e4aab30e7f49ef05e48cc22fc22e0291e32604a8f74ae1cc2ea57b5889e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:43Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:43 crc kubenswrapper[4720]: I1013 17:24:43.825223 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f856022da70859465022db433585bd8cad12f8cd2aeae7943491d3f7e5049256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:43Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:43 crc kubenswrapper[4720]: I1013 17:24:43.842905 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pmjlm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f85220d6-f940-4538-806a-2b26bacd7b09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6315723f6cd375a392c8c427a737021b7e01415c902514b09ee43407749caa34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7795z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:39Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pmjlm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:43Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:43 crc kubenswrapper[4720]: I1013 17:24:43.862754 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7baf94d8-250c-4568-ab1c-a1429b8caf70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435ba16189d6d19892aaab158512c1a84a72103779e3cc4b2c634de344583c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e566f3b66d404dd8da93bba1745b9e67cd63532330591f5b735f6826c07eae2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ec64662f56c2609ebac1383a78840ba4e0d9fb1fe10e666169285d3852e6b40\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://32d8f079c2674e0c6bea1837755db23bc3205da6a01ac1896d6293226d9c6fb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:43Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:43 crc kubenswrapper[4720]: I1013 17:24:43.881486 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:43 crc kubenswrapper[4720]: I1013 17:24:43.881520 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:43 crc kubenswrapper[4720]: I1013 17:24:43.881531 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:43 crc kubenswrapper[4720]: I1013 17:24:43.881544 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:43 crc kubenswrapper[4720]: I1013 17:24:43.881554 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:43Z","lastTransitionTime":"2025-10-13T17:24:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:24:43 crc kubenswrapper[4720]: I1013 17:24:43.885752 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5rgfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d7408e5-b529-4396-92b3-2fed275c3250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c62b3a707852ca08b9d89356832535447d4f46ec09ab4a94cb00a86b1c1c5b03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c62b3a707852ca08b9d89356832535447d4f46ec09ab4a94cb00a86b1c1c5b03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb704e899e1b5a0193271209940163a17a1dcb93a2fc5d447beb8a081799897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb704e899e1b5a0193271209940163a17a1dcb93a2fc5d447beb8a081799897b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f42ff97f0a5fb40b52d5445a00eb8f70f8028610b76baf33117b425b821b5ec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f42ff97f0a5fb40b52d5445a00eb8f70f8028610b76baf33117b425b821b5ec2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},
{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5rgfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:43Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:43 crc kubenswrapper[4720]: I1013 17:24:43.905336 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce442c80-fcde-4b79-b6f9-f8f25771dfd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10e7a946185d6502e37a1ab60ad1eca36cce991f230188d22b2d6e2ea554f920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4cjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://864e330267c8cec8487e1dcf4a50bbb2ba37d81d80361f0873e6bf69de1ed721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4cjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-htwnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:43Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:43 crc kubenswrapper[4720]: I1013 17:24:43.927952 4720 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:43Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:43 crc kubenswrapper[4720]: I1013 17:24:43.946093 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:43Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:43 crc kubenswrapper[4720]: I1013 17:24:43.975775 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a1f1696-ed7f-4482-a300-ca25b68e09e3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2507a30017fbd9947ca456b84812f0b26baabf572768870f5a34b3c7699443cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\
\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc60672e282de4d3d6a2d8fd64306ec0fc0d032ade655f88afb0c0b9e0a8e589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9cf903094dae4e559bcef3e269240670b5d4693344d600779c394ce1e039b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06eea6556daedf913d87b8b0527b337070528b07cc026de878b4df332598a2d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b452d113d48729b6d4ab59f80138b15e7cb16cc16eef4ec9916901a067986d07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cr
i-o://94a4c50dd473ea159477a3032070031c70d5a9217c054f826e3c0af6ae4df563\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94a4c50dd473ea159477a3032070031c70d5a9217c054f826e3c0af6ae4df563\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41221277cc046beb4578e4f6a0c7030df64342da1be667a920080a429836c657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41221277cc046beb4578e4f6a0c7030df64342da1be667a920080a429836c657\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d443225a5db6a200176bd16ba94dd92c1e81eef85d61cff623cfe7959077ef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d443225a5db6a200176bd16ba94dd92c1e81eef85d61cff623cfe7959077ef6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:43Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:43 crc kubenswrapper[4720]: I1013 17:24:43.984724 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:43 crc kubenswrapper[4720]: I1013 17:24:43.984795 4720 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:43 crc kubenswrapper[4720]: I1013 17:24:43.984816 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:43 crc kubenswrapper[4720]: I1013 17:24:43.984845 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:43 crc kubenswrapper[4720]: I1013 17:24:43.984868 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:43Z","lastTransitionTime":"2025-10-13T17:24:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:44 crc kubenswrapper[4720]: I1013 17:24:44.087966 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:44 crc kubenswrapper[4720]: I1013 17:24:44.088064 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:44 crc kubenswrapper[4720]: I1013 17:24:44.088090 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:44 crc kubenswrapper[4720]: I1013 17:24:44.088123 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:44 crc kubenswrapper[4720]: I1013 17:24:44.088146 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:44Z","lastTransitionTime":"2025-10-13T17:24:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:44 crc kubenswrapper[4720]: I1013 17:24:44.190863 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:44 crc kubenswrapper[4720]: I1013 17:24:44.190896 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:44 crc kubenswrapper[4720]: I1013 17:24:44.190903 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:44 crc kubenswrapper[4720]: I1013 17:24:44.190916 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:44 crc kubenswrapper[4720]: I1013 17:24:44.190926 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:44Z","lastTransitionTime":"2025-10-13T17:24:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:24:44 crc kubenswrapper[4720]: I1013 17:24:44.294569 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:44 crc kubenswrapper[4720]: I1013 17:24:44.294641 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:44 crc kubenswrapper[4720]: I1013 17:24:44.294667 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:44 crc kubenswrapper[4720]: I1013 17:24:44.294695 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:44 crc kubenswrapper[4720]: I1013 17:24:44.294716 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:44Z","lastTransitionTime":"2025-10-13T17:24:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:44 crc kubenswrapper[4720]: I1013 17:24:44.365375 4720 generic.go:334] "Generic (PLEG): container finished" podID="2d7408e5-b529-4396-92b3-2fed275c3250" containerID="471bb77042e72bcac4bb28dd26f6151a7154bea0b3ec92272e405e9b22bc0170" exitCode=0 Oct 13 17:24:44 crc kubenswrapper[4720]: I1013 17:24:44.365447 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5rgfd" event={"ID":"2d7408e5-b529-4396-92b3-2fed275c3250","Type":"ContainerDied","Data":"471bb77042e72bcac4bb28dd26f6151a7154bea0b3ec92272e405e9b22bc0170"} Oct 13 17:24:44 crc kubenswrapper[4720]: I1013 17:24:44.391689 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://705766b44d481300d0fd8cc654aa2dc9046733e5a71ad3a7ab789f85e737dcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a69b8825692cbbb7cce0a52b1ccb7b137ce6c9f8a8d788fff95bb7228cb40e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:44Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:44 crc kubenswrapper[4720]: I1013 17:24:44.397899 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:44 crc kubenswrapper[4720]: I1013 17:24:44.397971 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:44 crc kubenswrapper[4720]: I1013 17:24:44.397997 4720 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Oct 13 17:24:44 crc kubenswrapper[4720]: I1013 17:24:44.398026 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:44 crc kubenswrapper[4720]: I1013 17:24:44.398074 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:44Z","lastTransitionTime":"2025-10-13T17:24:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:44 crc kubenswrapper[4720]: I1013 17:24:44.410427 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f85e4aab30e7f49ef05e48cc22fc22e0291e32604a8f74ae1cc2ea57b5889e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:44Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:44 crc kubenswrapper[4720]: I1013 17:24:44.426915 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f856022da70859465022db433585bd8cad12f8cd2aeae7943491d3f7e5049256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:44Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:44 crc kubenswrapper[4720]: I1013 17:24:44.440611 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pmjlm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f85220d6-f940-4538-806a-2b26bacd7b09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6315723f6cd375a392c8c427a737021b7e01415c902514b09ee43407749caa34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7795z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:39Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pmjlm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:44Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:44 crc kubenswrapper[4720]: I1013 17:24:44.462025 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7baf94d8-250c-4568-ab1c-a1429b8caf70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435ba16189d6d19892aaab158512c1a84a72103779e3cc4b2c634de344583c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e566f3b66d404dd8da93bba1745b9e67cd63532330591f5b735f6826c07eae2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ec64662f56c2609ebac1383a78840ba4e0d9fb1fe10e666169285d3852e6b40\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://32d8f079c2674e0c6bea1837755db23bc3205da6a01ac1896d6293226d9c6fb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:44Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:44 crc kubenswrapper[4720]: I1013 17:24:44.481248 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:44Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:44 crc kubenswrapper[4720]: I1013 17:24:44.501786 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:44 crc kubenswrapper[4720]: I1013 17:24:44.501824 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:44 crc kubenswrapper[4720]: I1013 17:24:44.501836 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:44 crc kubenswrapper[4720]: I1013 17:24:44.501869 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:44 crc kubenswrapper[4720]: I1013 17:24:44.501882 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:44Z","lastTransitionTime":"2025-10-13T17:24:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:24:44 crc kubenswrapper[4720]: I1013 17:24:44.503985 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce442c80-fcde-4b79-b6f9-f8f25771dfd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10e7a946185d6502e37a1ab60ad1eca36cce991f230188d22b2d6e2ea554f920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4cjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://864e330267c8cec8487e1dcf4a50bbb2ba37d81d80361f0873e6bf69de1ed721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4cjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-htwnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:44Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:44 crc kubenswrapper[4720]: I1013 17:24:44.528638 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5rgfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d7408e5-b529-4396-92b3-2fed275c3250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c62b3a707852ca08b9d89356832535447d4f46ec09ab4a94cb00a86b1c1c5b03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c62b3a707852ca08b9d89356832535447d4f46ec09ab4a94cb00a86b1c1c5b03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-relea
se\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb704e899e1b5a0193271209940163a17a1dcb93a2fc5d447beb8a081799897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb704e899e1b5a0193271209940163a17a1dcb93a2fc5d447beb8a081799897b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f42ff97f0a5fb40b52d5445a00eb8f70f8028610b76baf33117b425b821b5ec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f42ff97f0a5fb40b52d5445a00eb8f70f8028610b76baf33117b425b821b5ec2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://471bb77042e72bcac4bb28dd26f6151a7154bea0b3ec92272e405e9b22bc0170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":
{\\\"containerID\\\":\\\"cri-o://471bb77042e72bcac4bb28dd26f6151a7154bea0b3ec92272e405e9b22bc0170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5rgfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:44Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:44 crc kubenswrapper[4720]: I1013 17:24:44.546732 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:44Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:44 crc kubenswrapper[4720]: I1013 17:24:44.576748 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a1f1696-ed7f-4482-a300-ca25b68e09e3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2507a30017fbd9947ca456b84812f0b26baabf572768870f5a34b3c7699443cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc60672e282de4d3d6a2d8fd64306ec0fc0d032ade655f88afb0c0b9e0a8e589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9cf903094dae4e559bcef3e269240670b5d4693344d600779c394ce1e039b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06eea6556daedf913d87b8b0527b337070528b0
7cc026de878b4df332598a2d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b452d113d48729b6d4ab59f80138b15e7cb16cc16eef4ec9916901a067986d07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94a4c50dd473ea159477a3032070031c70d5a9217c054f826e3c0af6ae4df563\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94a4c50dd473ea159477a3032070031c70d5a9217c054f826e3c0af6ae4df563\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41221277cc046beb4578e4f6a0c7030df64342da1be667a920080a429836c657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41221277cc046beb4578e4f6a0c7030df64342da1be667a920080a429836c657\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d443225a5db6a200176bd16ba94dd92c1e81eef85d61cff623cfe7959077ef6\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d443225a5db6a200176bd16ba94dd92c1e81eef85d61cff623cfe7959077ef6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:44Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:44 crc kubenswrapper[4720]: I1013 17:24:44.589832 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:44Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:44 crc kubenswrapper[4720]: I1013 17:24:44.600903 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bvtrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b35c333b-0d4e-4d9a-a8fe-a1c6f5fbbf75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03411ae3786161605ed0cb08a21661c7c7a4cc93f91320586547b8ffab3e90ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g97k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bvtrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:44Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:44 crc kubenswrapper[4720]: I1013 17:24:44.604171 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:44 crc kubenswrapper[4720]: I1013 17:24:44.604233 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:44 crc kubenswrapper[4720]: I1013 17:24:44.604249 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:44 crc kubenswrapper[4720]: I1013 17:24:44.604273 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:44 crc kubenswrapper[4720]: I1013 17:24:44.604287 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:44Z","lastTransitionTime":"2025-10-13T17:24:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:44 crc kubenswrapper[4720]: I1013 17:24:44.614351 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea323a1a-c607-40cf-b61c-b7ff03399c45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88b8a46e2e2c2342bd22a4a9b42cadd0188ec83f23815773f6ffb46be79fdf83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7d875b0d7eb9f13bc697597a33f7196598ae679980b13842e92acf20477526f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e18d0e9df43d366a6e7088cc3d5bdc794882828f60f008eaf29e788e0141ac2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef2ba92cea782dd4fd31a511e1086777b58c00340014dff74ff6615b07bd10c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7977e20d8558810fb9f8f827830d0378de7d15c71340d396a95b506902c0962\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-13T17:24:34Z\\\",\\\"message\\\":\\\"lling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1013 17:24:28.736714 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1013 17:24:28.738546 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445610647/tls.crt::/tmp/serving-cert-445610647/tls.key\\\\\\\"\\\\nI1013 17:24:34.860768 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1013 17:24:34.864611 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1013 17:24:34.864632 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1013 17:24:34.864654 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1013 17:24:34.864660 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1013 17:24:34.871230 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1013 17:24:34.871248 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1013 17:24:34.871273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 17:24:34.871290 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 17:24:34.871305 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1013 17:24:34.871314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1013 17:24:34.871324 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1013 17:24:34.871335 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1013 17:24:34.872239 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd536f783e3f8f974f25772994c3d00a01c2ba66997a6e06236f1b83b890f8f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00ad2086cbe25df22e52552acfb410e252eb17a610267b47eb7cfc5dced1c9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00ad2086cbe25df22e52552acfb410e252eb17a610267b47eb7cfc5dced1c9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:44Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:44 crc kubenswrapper[4720]: I1013 17:24:44.627987 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lxmjt" err="failed to patch status 
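
Alongside the webhook failures, the repeated "Node became not ready" entries above record NetworkReady=false because no CNI configuration file exists in /etc/kubernetes/cni/net.d/, and the ovnkube-node-pn6lz patch further below shows every OVN-Kubernetes container still in PodInitializing, so the config that ovnkube-controller would write is not there yet. A minimal check, assuming shell access on the node:

    import os

    # Confirm the kubelet's complaint: the CNI config directory should gain a file
    # once ovnkube-controller finishes initializing.
    cni_dir = "/etc/kubernetes/cni/net.d"
    entries = os.listdir(cni_dir) if os.path.isdir(cni_dir) else []
    print(f"{cni_dir}: {entries or 'no CNI configuration files yet'}")
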
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b45ec2d-5bea-4007-a49f-224a866f93eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00868f87f1a3020a1db7e50377bbe90203d6e93799c8ec9f3100dd52a6c0cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xsrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lxmjt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:44Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:44 crc kubenswrapper[4720]: I1013 17:24:44.650134 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8064812e-b6aa-4f56-81c9-16154c00abad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\"
,\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:44Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:44 crc kubenswrapper[4720]: I1013 17:24:44.706737 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:44 crc kubenswrapper[4720]: I1013 17:24:44.706770 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:44 crc kubenswrapper[4720]: I1013 17:24:44.706779 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:44 crc kubenswrapper[4720]: I1013 17:24:44.706795 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:44 crc kubenswrapper[4720]: I1013 17:24:44.706805 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:44Z","lastTransitionTime":"2025-10-13T17:24:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:24:44 crc kubenswrapper[4720]: I1013 17:24:44.809374 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:44 crc kubenswrapper[4720]: I1013 17:24:44.809427 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:44 crc kubenswrapper[4720]: I1013 17:24:44.809443 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:44 crc kubenswrapper[4720]: I1013 17:24:44.809464 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:44 crc kubenswrapper[4720]: I1013 17:24:44.809480 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:44Z","lastTransitionTime":"2025-10-13T17:24:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:44 crc kubenswrapper[4720]: I1013 17:24:44.911879 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:44 crc kubenswrapper[4720]: I1013 17:24:44.911921 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:44 crc kubenswrapper[4720]: I1013 17:24:44.911931 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:44 crc kubenswrapper[4720]: I1013 17:24:44.911946 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:44 crc kubenswrapper[4720]: I1013 17:24:44.911956 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:44Z","lastTransitionTime":"2025-10-13T17:24:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:45 crc kubenswrapper[4720]: I1013 17:24:45.014608 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:45 crc kubenswrapper[4720]: I1013 17:24:45.014656 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:45 crc kubenswrapper[4720]: I1013 17:24:45.014665 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:45 crc kubenswrapper[4720]: I1013 17:24:45.014684 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:45 crc kubenswrapper[4720]: I1013 17:24:45.014695 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:45Z","lastTransitionTime":"2025-10-13T17:24:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:24:45 crc kubenswrapper[4720]: I1013 17:24:45.117714 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:45 crc kubenswrapper[4720]: I1013 17:24:45.118009 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:45 crc kubenswrapper[4720]: I1013 17:24:45.118237 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:45 crc kubenswrapper[4720]: I1013 17:24:45.118471 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:45 crc kubenswrapper[4720]: I1013 17:24:45.118807 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:45Z","lastTransitionTime":"2025-10-13T17:24:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:45 crc kubenswrapper[4720]: I1013 17:24:45.167590 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 17:24:45 crc kubenswrapper[4720]: I1013 17:24:45.167673 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 17:24:45 crc kubenswrapper[4720]: E1013 17:24:45.167764 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 13 17:24:45 crc kubenswrapper[4720]: I1013 17:24:45.167776 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 17:24:45 crc kubenswrapper[4720]: E1013 17:24:45.167918 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 13 17:24:45 crc kubenswrapper[4720]: E1013 17:24:45.168163 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 13 17:24:45 crc kubenswrapper[4720]: I1013 17:24:45.187550 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce442c80-fcde-4b79-b6f9-f8f25771dfd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10e7a946185d6502e37a1ab60ad1eca36cce991f230188d22b2d6e2ea554f920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4cjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://864e330267c8cec8487e1dcf4a50bbb2ba37d81d80361f0873e6bf69de1ed721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4cjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-htwnl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:45Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:45 crc kubenswrapper[4720]: I1013 17:24:45.212224 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5rgfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d7408e5-b529-4396-92b3-2fed275c3250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c62b3a707852ca08b9d89356832535447d4f46ec09ab4a94cb00a86b1c1c5b03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c62b3a707852ca08b9d89356832535447d4f46ec09ab4a94cb00a86b1c1c5b03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},
{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb704e899e1b5a0193271209940163a17a1dcb93a2fc5d447beb8a081799897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb704e899e1b5a0193271209940163a17a1dcb93a2fc5d447beb8a081799897b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f42ff97f0a5fb40b52d5445a00eb8f70f8028610b76baf33117b425b821b5ec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f42ff97f0a5fb40b52d5445a00eb8f70f8028610b76baf33117b425b821b5ec2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://471bb77042e72bcac4bb28dd26f6151a7154bea0b3ec92272e405e9b22bc0170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://471bb77042e72bcac4bb28dd26f6151a7154bea0b3ec92272e405e9b22bc0170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5rgfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:45Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:45 crc kubenswrapper[4720]: I1013 17:24:45.220822 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:45 crc kubenswrapper[4720]: I1013 17:24:45.220864 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:45 crc kubenswrapper[4720]: I1013 17:24:45.220875 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:45 crc kubenswrapper[4720]: I1013 17:24:45.220892 4720 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:45 crc kubenswrapper[4720]: I1013 17:24:45.220906 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:45Z","lastTransitionTime":"2025-10-13T17:24:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:45 crc kubenswrapper[4720]: I1013 17:24:45.245313 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a1f1696-ed7f-4482-a300-ca25b68e09e3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2507a30017fbd9947ca456b84812f0b26baabf572768870f5a34b3c7699443cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc60672e282de4d3d6a2d8fd64306ec0fc0d032ade655f88afb0c0b9e0a8e589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9cf903094dae4e559bcef3e2692
40670b5d4693344d600779c394ce1e039b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06eea6556daedf913d87b8b0527b337070528b07cc026de878b4df332598a2d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b452d113d48729b6d4ab59f80138b15e7cb16cc16eef4ec9916901a067986d07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94a4c50dd473ea159477a3032070031c70d5a9217c054f826e3c0af6ae4df563\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94a4c50dd473ea159477a3032070031c70d5a9217c054f826e3c0af6ae4df563\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41221277cc046beb4578e4f6a0c7030df64342da1be667a920080a429836c657\\\",\\\"image\\\":\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41221277cc046beb4578e4f6a0c7030df64342da1be667a920080a429836c657\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d443225a5db6a200176bd16ba94dd92c1e81eef85d61cff623cfe7959077ef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d443225a5db6a200176bd16ba94dd92c1e81eef85d61cff623cfe7959077ef6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:45Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:45 crc kubenswrapper[4720]: I1013 17:24:45.260993 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:45Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:45 crc kubenswrapper[4720]: I1013 17:24:45.283324 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:45Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:45 crc kubenswrapper[4720]: I1013 17:24:45.303494 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea323a1a-c607-40cf-b61c-b7ff03399c45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88b8a46e2e2c2342bd22a4a9b42cadd0188ec83f23815773f6ffb46be79fdf83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7d875b0d7eb9f13bc697597a33f7196598ae679980b13842e92acf20477526f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e18d0e9df43d366a6e7088cc3d5bdc794882828f60f008eaf29e788e0141ac2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef2ba92cea782dd4fd31a511e1086777b58c00340014dff74ff6615b07bd10c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7977e20d8558810fb9f8f827830d0378de7d15c71340d396a95b506902c0962\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-13T17:24:34Z\\\",\\\"message\\\":\\\"lling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1013 17:24:28.736714 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1013 17:24:28.738546 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445610647/tls.crt::/tmp/serving-cert-445610647/tls.key\\\\\\\"\\\\nI1013 17:24:34.860768 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1013 17:24:34.864611 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1013 17:24:34.864632 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1013 17:24:34.864654 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1013 17:24:34.864660 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1013 17:24:34.871230 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1013 17:24:34.871248 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1013 17:24:34.871273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 17:24:34.871290 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 17:24:34.871305 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1013 17:24:34.871314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1013 17:24:34.871324 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1013 17:24:34.871335 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1013 17:24:34.872239 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd536f783e3f8f974f25772994c3d00a01c2ba66997a6e06236f1b83b890f8f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00ad2086cbe25df22e52552acfb410e252eb17a610267b47eb7cfc5dced1c9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00ad2086cbe25df22e52552acfb410e252eb17a610267b47eb7cfc5dced1c9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:45Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:45 crc kubenswrapper[4720]: I1013 17:24:45.321716 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lxmjt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b45ec2d-5bea-4007-a49f-224a866f93eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00868f87f1a3020a1db7e50377bbe90203d6e93799c8ec9f3100dd52a6c0cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xsrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lxmjt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:45Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:45 crc kubenswrapper[4720]: I1013 17:24:45.328330 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:45 crc kubenswrapper[4720]: I1013 17:24:45.328410 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:45 crc kubenswrapper[4720]: I1013 17:24:45.328443 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:45 crc kubenswrapper[4720]: I1013 17:24:45.328476 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:45 crc kubenswrapper[4720]: I1013 17:24:45.328498 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:45Z","lastTransitionTime":"2025-10-13T17:24:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:45 crc kubenswrapper[4720]: I1013 17:24:45.346335 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8064812e-b6aa-4f56-81c9-16154c00abad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:45Z 
is after 2025-08-24T17:21:41Z" Oct 13 17:24:45 crc kubenswrapper[4720]: I1013 17:24:45.357443 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bvtrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b35c333b-0d4e-4d9a-a8fe-a1c6f5fbbf75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03411ae3786161605ed0cb08a21661c7c7a4cc93f91320586547b8ffab3e90ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g97k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bvtrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:45Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:45 crc kubenswrapper[4720]: I1013 17:24:45.372004 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7baf94d8-250c-4568-ab1c-a1429b8caf70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435ba16189d6d19892aaab158512c1a84a72103779e3cc4b2c634de344583c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e566f3b66d404dd8da93bba1745b9e67cd63532330591f5b735f6826c07eae2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ec64662f56c2609ebac1383a78840ba4e0d9fb1fe10e666169285d3852e6b40\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://32d8f079c2674e0c6bea1837755db23bc3205da6a01ac1896d6293226d9c6fb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:45Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:45 crc kubenswrapper[4720]: I1013 17:24:45.372783 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" event={"ID":"8064812e-b6aa-4f56-81c9-16154c00abad","Type":"ContainerStarted","Data":"34b1fffbebfbd3599bed22fe3fe80129f5b704560c54d09e9ab5e9ea8bba4513"} Oct 13 17:24:45 crc kubenswrapper[4720]: I1013 17:24:45.377389 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5rgfd" event={"ID":"2d7408e5-b529-4396-92b3-2fed275c3250","Type":"ContainerDied","Data":"20cb8981843cc39e8722f560f7a7e75483c93a30edffde9835533f8dc5b50587"} Oct 13 17:24:45 crc kubenswrapper[4720]: I1013 17:24:45.377497 4720 generic.go:334] "Generic (PLEG): container finished" podID="2d7408e5-b529-4396-92b3-2fed275c3250" containerID="20cb8981843cc39e8722f560f7a7e75483c93a30edffde9835533f8dc5b50587" exitCode=0 Oct 13 17:24:45 crc kubenswrapper[4720]: I1013 17:24:45.389925 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:45Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:45 crc kubenswrapper[4720]: I1013 17:24:45.409006 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://705766b44d481300d0fd8cc654aa2dc9046733e5a71ad3a7ab789f85e737dcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a69b8825692cbbb7cce0a52b1ccb7b137ce6c9f8a8d788fff95bb7228cb40e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:45Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:45 crc kubenswrapper[4720]: I1013 17:24:45.428943 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f85e4aab30e7f49ef05e48cc22fc22e0291e32604a8f74ae1cc2ea57b5889e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:45Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:45 crc kubenswrapper[4720]: I1013 17:24:45.432851 4720 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:45 crc kubenswrapper[4720]: I1013 17:24:45.432903 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:45 crc kubenswrapper[4720]: I1013 17:24:45.432922 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:45 crc kubenswrapper[4720]: I1013 17:24:45.432947 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:45 crc kubenswrapper[4720]: I1013 17:24:45.432965 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:45Z","lastTransitionTime":"2025-10-13T17:24:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:45 crc kubenswrapper[4720]: I1013 17:24:45.443831 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f856022da70859465022db433585bd8cad12f8cd2aeae7943491d3f7e5049256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:45Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:45 crc kubenswrapper[4720]: I1013 17:24:45.455073 4720 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-pmjlm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f85220d6-f940-4538-806a-2b26bacd7b09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6315723f6cd375a392c8c427a737021b7e01415c902514b09ee43407749caa34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7795z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:39Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pmjlm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:45Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:45 crc kubenswrapper[4720]: I1013 17:24:45.472882 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7baf94d8-250c-4568-ab1c-a1429b8caf70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435ba16189d6d19892aaab158512c1a84a72103779e3cc4b2c634de344583c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e566f3b66d404dd8da93bba1745b9e67cd63532330591f5b735f6826c07eae2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ec64662f56c2609ebac1383a78840ba4e0d9fb1fe10e666169285d3852e6b40\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://32d8f079c2674e0c6bea1837755db23bc3205da6a01ac1896d6293226d9c6fb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:45Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:45 crc kubenswrapper[4720]: I1013 17:24:45.490968 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:45Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:45 crc kubenswrapper[4720]: I1013 17:24:45.509529 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://705766b44d481300d0fd8cc654aa2dc9046733e5a71ad3a7ab789f85e737dcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a69b8825692cbbb7cce0a52b1ccb7b137ce6c9f8a8d788fff95bb7228cb40e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:45Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:45 crc kubenswrapper[4720]: I1013 17:24:45.528123 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f85e4aab30e7f49ef05e48cc22fc22e0291e32604a8f74ae1cc2ea57b5889e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:45Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:45 crc kubenswrapper[4720]: I1013 17:24:45.535379 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:45 crc kubenswrapper[4720]: I1013 17:24:45.535425 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:45 crc kubenswrapper[4720]: I1013 17:24:45.535442 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:45 crc kubenswrapper[4720]: I1013 17:24:45.535467 4720 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:45 crc kubenswrapper[4720]: I1013 17:24:45.535484 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:45Z","lastTransitionTime":"2025-10-13T17:24:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:45 crc kubenswrapper[4720]: I1013 17:24:45.548957 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f856022da70859465022db433585bd8cad12f8cd2aeae7943491d3f7e5049256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:45Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:45 crc kubenswrapper[4720]: I1013 17:24:45.561220 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pmjlm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f85220d6-f940-4538-806a-2b26bacd7b09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6315723f6cd375a392c8c427a737021b7e01415c902514b09ee43407749caa34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7795z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:39Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pmjlm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:45Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:45 crc kubenswrapper[4720]: I1013 17:24:45.574613 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce442c80-fcde-4b79-b6f9-f8f25771dfd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10e7a946185d6502e37a1ab60ad1eca36cce991f230188d22b2d6e2ea554f920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4cjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://864e330267c8cec8487e1dcf4a50bbb2ba37d81d80361f0873e6bf69de1ed721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4cjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-htwnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:45Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:45 crc kubenswrapper[4720]: I1013 17:24:45.592976 4720 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5rgfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d7408e5-b529-4396-92b3-2fed275c3250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c62b3a707852ca08b9d89356832535447d4f46ec09ab4a94cb00a86b1c1c5b03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c62b3a707852ca08b9d89356832535447d4f46ec09ab4a94cb00a86b1c1c5b03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb704e899e1b5a0193271209940163a
17a1dcb93a2fc5d447beb8a081799897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb704e899e1b5a0193271209940163a17a1dcb93a2fc5d447beb8a081799897b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f42ff97f0a5fb40b52d5445a00eb8f70f8028610b76baf33117b425b821b5ec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f42ff97f0a5fb40b52d5445a00eb8f70f8028610b76baf33117b425b821b5ec2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://471bb77042e72bcac4bb28dd26f6151a7154bea0b3ec92272e405e9b22bc0170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://471bb77042e72bcac4bb28dd26f6151a7154bea0b3ec92272e405e9b22bc0170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\
\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20cb8981843cc39e8722f560f7a7e75483c93a30edffde9835533f8dc5b50587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20cb8981843cc39e8722f560f7a7e75483c93a30edffde9835533f8dc5b50587\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5rgfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:45Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:45 crc kubenswrapper[4720]: I1013 17:24:45.627276 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a1f1696-ed7f-4482-a300-ca25b68e09e3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2507a30017fbd9947ca456b84812f0b26baabf572768870f5a34b3c7699443cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc60672e282de4d3d6a2d8fd64306ec0fc0d032ade655f88afb0c0b9e0a8e589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9cf903094dae4e559bcef3e269240670b5d4693344d600779c394ce1e039b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06eea6556daedf913d87b8b0527b337070528b0
7cc026de878b4df332598a2d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b452d113d48729b6d4ab59f80138b15e7cb16cc16eef4ec9916901a067986d07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94a4c50dd473ea159477a3032070031c70d5a9217c054f826e3c0af6ae4df563\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94a4c50dd473ea159477a3032070031c70d5a9217c054f826e3c0af6ae4df563\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41221277cc046beb4578e4f6a0c7030df64342da1be667a920080a429836c657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41221277cc046beb4578e4f6a0c7030df64342da1be667a920080a429836c657\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d443225a5db6a200176bd16ba94dd92c1e81eef85d61cff623cfe7959077ef6\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d443225a5db6a200176bd16ba94dd92c1e81eef85d61cff623cfe7959077ef6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:45Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:45 crc kubenswrapper[4720]: I1013 17:24:45.638211 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:45 crc kubenswrapper[4720]: I1013 17:24:45.638246 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:45 crc kubenswrapper[4720]: I1013 17:24:45.638257 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:45 crc kubenswrapper[4720]: I1013 17:24:45.638275 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:45 crc kubenswrapper[4720]: I1013 17:24:45.638287 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:45Z","lastTransitionTime":"2025-10-13T17:24:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:24:45 crc kubenswrapper[4720]: I1013 17:24:45.642105 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:45Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:45 crc kubenswrapper[4720]: I1013 17:24:45.653391 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:45Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:45 crc kubenswrapper[4720]: I1013 17:24:45.669257 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea323a1a-c607-40cf-b61c-b7ff03399c45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88b8a46e2e2c2342bd22a4a9b42cadd0188ec83f23815773f6ffb46be79fdf83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7d875b0d7eb9f13bc697597a33f7196598ae679980b13842e92acf20477526f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e18d0e9df43d366a6e7088cc3d5bdc794882828f60f008eaf29e788e0141ac2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef2ba92cea782dd4fd31a511e1086777b58c00340014dff74ff6615b07bd10c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7977e20d8558810fb9f8f827830d0378de7d15c71340d396a95b506902c0962\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-13T17:24:34Z\\\",\\\"message\\\":\\\"lling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1013 17:24:28.736714 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1013 17:24:28.738546 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445610647/tls.crt::/tmp/serving-cert-445610647/tls.key\\\\\\\"\\\\nI1013 17:24:34.860768 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1013 17:24:34.864611 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1013 17:24:34.864632 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1013 17:24:34.864654 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1013 17:24:34.864660 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1013 17:24:34.871230 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1013 17:24:34.871248 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1013 17:24:34.871273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 17:24:34.871290 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 17:24:34.871305 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1013 17:24:34.871314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1013 17:24:34.871324 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1013 17:24:34.871335 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1013 17:24:34.872239 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd536f783e3f8f974f25772994c3d00a01c2ba66997a6e06236f1b83b890f8f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00ad2086cbe25df22e52552acfb410e252eb17a610267b47eb7cfc5dced1c9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00ad2086cbe25df22e52552acfb410e252eb17a610267b47eb7cfc5dced1c9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:45Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:45 crc kubenswrapper[4720]: I1013 17:24:45.689116 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lxmjt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b45ec2d-5bea-4007-a49f-224a866f93eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00868f87f1a3020a1db7e50377bbe90203d6e93799c8ec9f3100dd52a6c0cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xsrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lxmjt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:45Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:45 crc kubenswrapper[4720]: I1013 17:24:45.717789 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8064812e-b6aa-4f56-81c9-16154c00abad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\"
,\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:45Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:45 crc kubenswrapper[4720]: I1013 17:24:45.733253 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bvtrx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b35c333b-0d4e-4d9a-a8fe-a1c6f5fbbf75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03411ae3786161605ed0cb08a21661c7c7a4cc93f91320586547b8ffab3e90ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g97k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bvtrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:45Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:45 crc kubenswrapper[4720]: I1013 17:24:45.741418 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:45 crc kubenswrapper[4720]: I1013 17:24:45.741458 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:45 crc kubenswrapper[4720]: I1013 17:24:45.741470 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:45 crc kubenswrapper[4720]: I1013 17:24:45.741488 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:45 crc kubenswrapper[4720]: I1013 17:24:45.741502 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:45Z","lastTransitionTime":"2025-10-13T17:24:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 17:24:45 crc kubenswrapper[4720]: I1013 17:24:45.845334 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 17:24:45 crc kubenswrapper[4720]: I1013 17:24:45.845378 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 17:24:45 crc kubenswrapper[4720]: I1013 17:24:45.845389 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 17:24:45 crc kubenswrapper[4720]: I1013 17:24:45.845405 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 17:24:45 crc kubenswrapper[4720]: I1013 17:24:45.845415 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:45Z","lastTransitionTime":"2025-10-13T17:24:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 17:24:45 crc kubenswrapper[4720]: I1013 17:24:45.948488 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 17:24:45 crc kubenswrapper[4720]: I1013 17:24:45.948523 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 17:24:45 crc kubenswrapper[4720]: I1013 17:24:45.948532 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 17:24:45 crc kubenswrapper[4720]: I1013 17:24:45.948546 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 17:24:45 crc kubenswrapper[4720]: I1013 17:24:45.948557 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:45Z","lastTransitionTime":"2025-10-13T17:24:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Oct 13 17:24:46 crc kubenswrapper[4720]: I1013 17:24:46.051138 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 17:24:46 crc kubenswrapper[4720]: I1013 17:24:46.051246 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 17:24:46 crc kubenswrapper[4720]: I1013 17:24:46.051271 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 17:24:46 crc kubenswrapper[4720]: I1013 17:24:46.051296 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 17:24:46 crc kubenswrapper[4720]: I1013 17:24:46.051313 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:46Z","lastTransitionTime":"2025-10-13T17:24:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 17:24:46 crc kubenswrapper[4720]: I1013 17:24:46.154377 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 17:24:46 crc kubenswrapper[4720]: I1013 17:24:46.154439 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 17:24:46 crc kubenswrapper[4720]: I1013 17:24:46.154484 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 17:24:46 crc kubenswrapper[4720]: I1013 17:24:46.154508 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 17:24:46 crc kubenswrapper[4720]: I1013 17:24:46.154526 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:46Z","lastTransitionTime":"2025-10-13T17:24:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 17:24:46 crc kubenswrapper[4720]: I1013 17:24:46.257874 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 17:24:46 crc kubenswrapper[4720]: I1013 17:24:46.257908 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 17:24:46 crc kubenswrapper[4720]: I1013 17:24:46.257919 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 17:24:46 crc kubenswrapper[4720]: I1013 17:24:46.257934 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 17:24:46 crc kubenswrapper[4720]: I1013 17:24:46.257945 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:46Z","lastTransitionTime":"2025-10-13T17:24:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Oct 13 17:24:46 crc kubenswrapper[4720]: I1013 17:24:46.360284 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 17:24:46 crc kubenswrapper[4720]: I1013 17:24:46.360339 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 17:24:46 crc kubenswrapper[4720]: I1013 17:24:46.360357 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 17:24:46 crc kubenswrapper[4720]: I1013 17:24:46.360381 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 17:24:46 crc kubenswrapper[4720]: I1013 17:24:46.360400 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:46Z","lastTransitionTime":"2025-10-13T17:24:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 17:24:46 crc kubenswrapper[4720]: I1013 17:24:46.385148 4720 generic.go:334] "Generic (PLEG): container finished" podID="2d7408e5-b529-4396-92b3-2fed275c3250" containerID="79851ed8c84027d1bb9eccb69bed004ae1cdbe934245c66325eafe42b12ea808" exitCode=0
Oct 13 17:24:46 crc kubenswrapper[4720]: I1013 17:24:46.385229 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5rgfd" event={"ID":"2d7408e5-b529-4396-92b3-2fed275c3250","Type":"ContainerDied","Data":"79851ed8c84027d1bb9eccb69bed004ae1cdbe934245c66325eafe42b12ea808"}
Oct 13 17:24:46 crc kubenswrapper[4720]: I1013 17:24:46.408388 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" err="failed to patch status
Oct 13 17:24:46 crc kubenswrapper[4720]: I1013 17:24:46.408388 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce442c80-fcde-4b79-b6f9-f8f25771dfd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10e7a946185d6502e37a1ab60ad1eca36cce991f230188d22b2d6e2ea554f920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4cjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://864e330267c8cec8487e1dcf4a50bbb2ba37d81d80361f0873e6bf69de1ed721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4cjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-htwnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:46Z is after 2025-08-24T17:21:41Z"
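Note: every status patch from here on fails for the same reason. The API server must consult the pod.network-node-identity.openshift.io admission webhook at https://127.0.0.1:9743 before accepting the patch, and that webhook's serving certificate expired on 2025-08-24 while the node clock reads 2025-10-13. The failing check is the standard validity-window test from Go's crypto/x509; below is a self-contained sketch of that comparison, where the certificate path is a hypothetical placeholder rather than a path taken from this log.

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        // Hypothetical path; substitute the webhook's actual serving certificate.
        raw, err := os.ReadFile("/path/to/webhook-serving.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(raw)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        now := time.Now()
        if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
            // Same comparison that yields "certificate has expired or is not yet valid".
            fmt.Printf("invalid: current time %s is after %s\n",
                now.UTC().Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
            os.Exit(1)
        }
        fmt.Println("certificate is within its validity window")
    }

Every subsequent "Failed to update status for pod" record in this capture (multus-additional-cni-plugins-5rgfd, etcd-crc, network-check-target-xd92c, networking-console-plugin-85b44fc459-gdk6g, kube-apiserver-crc, multus-lxmjt, ovnkube-node-pn6lz, node-ca-bvtrx, kube-controller-manager-crc) carries this identical expired-certificate error, so the patches themselves are well formed; the blocking condition is the stale webhook serving certificate.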
Oct 13 17:24:46 crc kubenswrapper[4720]: I1013 17:24:46.426154 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5rgfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d7408e5-b529-4396-92b3-2fed275c3250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c62b3a707852ca08b9d89356832535447d4f46ec09ab4a94cb00a86b1c1c5b03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c62b3a707852ca08b9d89356832535447d4f46ec09ab4a94cb00a86b1c1c5b03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb704e899e1b5a0193271209940163a17a1dcb93a2fc5d447beb8a081799897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a168
8df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb704e899e1b5a0193271209940163a17a1dcb93a2fc5d447beb8a081799897b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f42ff97f0a5fb40b52d5445a00eb8f70f8028610b76baf33117b425b821b5ec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f42ff97f0a5fb40b52d5445a00eb8f70f8028610b76baf33117b425b821b5ec2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://471bb77042e72bcac4bb28dd26f6151a7154bea0b3ec92272e405e9b22bc0170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://471bb77042e72bcac4bb28dd26f6151a7154bea0b3ec92272e405e9b22bc0170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"
/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20cb8981843cc39e8722f560f7a7e75483c93a30edffde9835533f8dc5b50587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20cb8981843cc39e8722f560f7a7e75483c93a30edffde9835533f8dc5b50587\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79851ed8c84027d1bb9eccb69bed004ae1cdbe934245c66325eafe42b12ea808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79851ed8c84027d1bb9eccb69bed004ae1cdbe934245c66325eafe42b12ea808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5rgfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:46Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:46 crc kubenswrapper[4720]: I1013 17:24:46.446377 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a1f1696-ed7f-4482-a300-ca25b68e09e3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2507a30017fbd9947ca456b84812f0b26baabf572768870f5a34b3c7699443cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc60672e282de4d3d6a2d8fd64306ec0fc0d032ade655f88afb0c0b9e0a8e589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9cf903094dae4e559bcef3e269240670b5d4693344d600779c394ce1e039b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06eea6556daedf913d87b8b0527b337070528b0
7cc026de878b4df332598a2d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b452d113d48729b6d4ab59f80138b15e7cb16cc16eef4ec9916901a067986d07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94a4c50dd473ea159477a3032070031c70d5a9217c054f826e3c0af6ae4df563\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94a4c50dd473ea159477a3032070031c70d5a9217c054f826e3c0af6ae4df563\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41221277cc046beb4578e4f6a0c7030df64342da1be667a920080a429836c657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41221277cc046beb4578e4f6a0c7030df64342da1be667a920080a429836c657\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d443225a5db6a200176bd16ba94dd92c1e81eef85d61cff623cfe7959077ef6\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d443225a5db6a200176bd16ba94dd92c1e81eef85d61cff623cfe7959077ef6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:46Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:46 crc kubenswrapper[4720]: I1013 17:24:46.464112 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:46Z is after 2025-08-24T17:21:41Z"
Oct 13 17:24:46 crc kubenswrapper[4720]: I1013 17:24:46.466802 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 17:24:46 crc kubenswrapper[4720]: I1013 17:24:46.466860 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 17:24:46 crc kubenswrapper[4720]: I1013 17:24:46.466879 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 17:24:46 crc kubenswrapper[4720]: I1013 17:24:46.466905 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
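Note: each "Node became not ready" entry serializes a single node condition of the core/v1 shape. The Go mirror below reflects only the fields visible in the logged JSON, with timestamps simplified to time.Time (the upstream API uses metav1.Time); it is a reading aid, not the upstream definition. The condition is re-recorded on every sync, plausibly because the rejected status patches above keep the API server from ever persisting it.

    package sketch

    import "time"

    // Mirrors the condition JSON logged by setters.go; simplified types.
    type NodeCondition struct {
        Type               string    `json:"type"`               // "Ready"
        Status             string    `json:"status"`             // "False"
        LastHeartbeatTime  time.Time `json:"lastHeartbeatTime"`
        LastTransitionTime time.Time `json:"lastTransitionTime"`
        Reason             string    `json:"reason"`             // "KubeletNotReady"
        Message            string    `json:"message"`            // CNI error text
    }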
Oct 13 17:24:46 crc kubenswrapper[4720]: I1013 17:24:46.466924 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:46Z","lastTransitionTime":"2025-10-13T17:24:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 17:24:46 crc kubenswrapper[4720]: I1013 17:24:46.478867 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:46Z is after 2025-08-24T17:21:41Z"
Oct 13 17:24:46 crc kubenswrapper[4720]: I1013 17:24:46.498689 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea323a1a-c607-40cf-b61c-b7ff03399c45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88b8a46e2e2c2342bd22a4a9b42cadd0188ec83f23815773f6ffb46be79fdf83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7d875b0d7eb9f13bc697597a33f7196598ae679980b13842e92acf20477526f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e18d0e9df43d366a6e7088cc3d5bdc794882828f60f008eaf29e788e0141ac2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef2ba92cea782dd4fd31a511e1086777b58c00340014dff74ff6615b07bd10c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://b7977e20d8558810fb9f8f827830d0378de7d15c71340d396a95b506902c0962\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-13T17:24:34Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1013 17:24:28.736714 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1013 17:24:28.738546 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445610647/tls.crt::/tmp/serving-cert-445610647/tls.key\\\\\\\"\\\\nI1013 17:24:34.860768 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1013 17:24:34.864611 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1013 17:24:34.864632 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1013 17:24:34.864654 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1013 17:24:34.864660 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1013 17:24:34.871230 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1013 17:24:34.871248 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1013 17:24:34.871273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 17:24:34.871290 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 17:24:34.871305 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1013 17:24:34.871314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1013 17:24:34.871324 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1013 17:24:34.871335 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1013 17:24:34.872239 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd536f783e3f8f974f25772994c3d00a01c2ba66997a6e06236f1b83b890f8f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00ad2086cbe25df22e52552acfb410e252eb17a610267b47eb7cfc5dced1c9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00ad2086cbe25df22e52552acfb410e252eb17a610267b47eb7cfc5dced1c9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:46Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:46 crc kubenswrapper[4720]: I1013 17:24:46.514948 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lxmjt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b45ec2d-5bea-4007-a49f-224a866f93eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00868f87f1a3020a1db7e50377bbe90203d6e93799c8ec9f3100dd52a6c0cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xsrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lxmjt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:46Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:46 crc kubenswrapper[4720]: I1013 17:24:46.535827 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8064812e-b6aa-4f56-81c9-16154c00abad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\"
,\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:46Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:46 crc kubenswrapper[4720]: I1013 17:24:46.549501 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bvtrx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b35c333b-0d4e-4d9a-a8fe-a1c6f5fbbf75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03411ae3786161605ed0cb08a21661c7c7a4cc93f91320586547b8ffab3e90ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g97k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bvtrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:46Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:46 crc kubenswrapper[4720]: I1013 17:24:46.568751 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7baf94d8-250c-4568-ab1c-a1429b8caf70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435ba16189d6d19892aaab158512c1a84a72103779e3cc4b2c634de344583c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e566f3b66d404dd8da93bba1745b9e67cd63532330591f5b735f6826c07eae2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ec64662f56c2609ebac1383a78840ba4e0d9fb1fe10e666169285d3852e6b40\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://32d8f079c2674e0c6bea1837755db23bc3205da6a01ac1896d6293226d9c6fb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:46Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:46 crc kubenswrapper[4720]: I1013 17:24:46.571169 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:46 crc kubenswrapper[4720]: I1013 17:24:46.571250 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:46 crc kubenswrapper[4720]: I1013 17:24:46.571267 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:46 crc kubenswrapper[4720]: I1013 17:24:46.571295 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:46 crc kubenswrapper[4720]: I1013 17:24:46.571314 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:46Z","lastTransitionTime":"2025-10-13T17:24:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:24:46 crc kubenswrapper[4720]: I1013 17:24:46.586695 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:46Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:46 crc kubenswrapper[4720]: I1013 17:24:46.603126 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://705766b44d481300d0fd8cc654aa2dc9046733e5a71ad3a7ab789f85e737dcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a69b8825692cbbb7cce0a52b1ccb7b137ce6c9f8a8d788fff95bb7228cb40e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:46Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:46 crc kubenswrapper[4720]: I1013 17:24:46.624296 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f85e4aab30e7f49ef05e48cc22fc22e0291e32604a8f74ae1cc2ea57b5889e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:46Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:46 crc kubenswrapper[4720]: I1013 17:24:46.642889 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f856022da70859465022db433585bd8cad12f8cd2aeae7943491d3f7e5049256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:46Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:46 crc kubenswrapper[4720]: I1013 17:24:46.658871 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pmjlm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f85220d6-f940-4538-806a-2b26bacd7b09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6315723f6cd375a392c8c427a737021b7e01415c902514b09ee43407749caa34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7795z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:39Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pmjlm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:46Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:46 crc kubenswrapper[4720]: I1013 17:24:46.674766 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:46 crc kubenswrapper[4720]: I1013 17:24:46.674812 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:46 crc kubenswrapper[4720]: I1013 17:24:46.674828 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:46 crc kubenswrapper[4720]: I1013 17:24:46.674849 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:46 crc kubenswrapper[4720]: I1013 17:24:46.674865 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:46Z","lastTransitionTime":"2025-10-13T17:24:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:46 crc kubenswrapper[4720]: I1013 17:24:46.777558 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:46 crc kubenswrapper[4720]: I1013 17:24:46.777619 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:46 crc kubenswrapper[4720]: I1013 17:24:46.777636 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:46 crc kubenswrapper[4720]: I1013 17:24:46.777660 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:46 crc kubenswrapper[4720]: I1013 17:24:46.777678 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:46Z","lastTransitionTime":"2025-10-13T17:24:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:46 crc kubenswrapper[4720]: I1013 17:24:46.880718 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:46 crc kubenswrapper[4720]: I1013 17:24:46.880762 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:46 crc kubenswrapper[4720]: I1013 17:24:46.880776 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:46 crc kubenswrapper[4720]: I1013 17:24:46.880794 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:46 crc kubenswrapper[4720]: I1013 17:24:46.880806 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:46Z","lastTransitionTime":"2025-10-13T17:24:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:24:46 crc kubenswrapper[4720]: I1013 17:24:46.983895 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:46 crc kubenswrapper[4720]: I1013 17:24:46.983953 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:46 crc kubenswrapper[4720]: I1013 17:24:46.983969 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:46 crc kubenswrapper[4720]: I1013 17:24:46.983992 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:46 crc kubenswrapper[4720]: I1013 17:24:46.984010 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:46Z","lastTransitionTime":"2025-10-13T17:24:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:47 crc kubenswrapper[4720]: I1013 17:24:47.086633 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:47 crc kubenswrapper[4720]: I1013 17:24:47.086695 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:47 crc kubenswrapper[4720]: I1013 17:24:47.086713 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:47 crc kubenswrapper[4720]: I1013 17:24:47.086736 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:47 crc kubenswrapper[4720]: I1013 17:24:47.086750 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:47Z","lastTransitionTime":"2025-10-13T17:24:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:47 crc kubenswrapper[4720]: I1013 17:24:47.167783 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 17:24:47 crc kubenswrapper[4720]: I1013 17:24:47.167899 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 17:24:47 crc kubenswrapper[4720]: E1013 17:24:47.167978 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 13 17:24:47 crc kubenswrapper[4720]: E1013 17:24:47.168088 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 13 17:24:47 crc kubenswrapper[4720]: I1013 17:24:47.168238 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 17:24:47 crc kubenswrapper[4720]: E1013 17:24:47.168347 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 13 17:24:47 crc kubenswrapper[4720]: I1013 17:24:47.189283 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:47 crc kubenswrapper[4720]: I1013 17:24:47.189330 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:47 crc kubenswrapper[4720]: I1013 17:24:47.189346 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:47 crc kubenswrapper[4720]: I1013 17:24:47.189367 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:47 crc kubenswrapper[4720]: I1013 17:24:47.189384 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:47Z","lastTransitionTime":"2025-10-13T17:24:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:24:47 crc kubenswrapper[4720]: I1013 17:24:47.292928 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:47 crc kubenswrapper[4720]: I1013 17:24:47.292982 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:47 crc kubenswrapper[4720]: I1013 17:24:47.292998 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:47 crc kubenswrapper[4720]: I1013 17:24:47.293022 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:47 crc kubenswrapper[4720]: I1013 17:24:47.293039 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:47Z","lastTransitionTime":"2025-10-13T17:24:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:47 crc kubenswrapper[4720]: I1013 17:24:47.395492 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:47 crc kubenswrapper[4720]: I1013 17:24:47.395541 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:47 crc kubenswrapper[4720]: I1013 17:24:47.395558 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:47 crc kubenswrapper[4720]: I1013 17:24:47.395580 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:47 crc kubenswrapper[4720]: I1013 17:24:47.395597 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:47Z","lastTransitionTime":"2025-10-13T17:24:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:24:47 crc kubenswrapper[4720]: I1013 17:24:47.396373 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" event={"ID":"8064812e-b6aa-4f56-81c9-16154c00abad","Type":"ContainerStarted","Data":"709132dcc33ee0d4d48e7a21fd2471fd22c065819f93843026160310ecd43470"} Oct 13 17:24:47 crc kubenswrapper[4720]: I1013 17:24:47.396861 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" Oct 13 17:24:47 crc kubenswrapper[4720]: I1013 17:24:47.403476 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5rgfd" event={"ID":"2d7408e5-b529-4396-92b3-2fed275c3250","Type":"ContainerStarted","Data":"439dea1ed9724a215fa1aa1f4a4bd84a4bb998f2f86814f5f4eb46989242f11c"} Oct 13 17:24:47 crc kubenswrapper[4720]: I1013 17:24:47.425860 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a1f1696-ed7f-4482-a300-ca25b68e09e3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2507a30017fbd9947ca456b84812f0b26baabf572768870f5a34b3c7699443cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc60672e282de4d3d6a2d8fd64306ec0fc0d032ade655f88afb0c0b9e0a8e589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resourc
es\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9cf903094dae4e559bcef3e269240670b5d4693344d600779c394ce1e039b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06eea6556daedf913d87b8b0527b337070528b07cc026de878b4df332598a2d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b452d113d48729b6d4ab59f80138b15e7cb16cc16eef4ec9916901a067986d07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94a4c50dd473ea159477a3032070031c70d5a9217c054f826e3c0af6ae4df563\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94a4c50dd473ea159477a3032070031c70d5a9217c054f826e3c0af6ae4df563\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41221277cc046beb4578e4f6a0c7030df64342da1be667a920080a429836c657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41221277cc046beb4578e4f6a0c7030df64342da1be667a920080a429836c657\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d443225a5db6a200176bd16ba94dd92c1e81eef85d61cff623cfe7959077ef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d443225a5db6a200176bd16ba94dd92c1e81eef85d61cff623cfe7959077ef6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:47Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:47 crc kubenswrapper[4720]: I1013 17:24:47.453623 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:47Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:47 crc kubenswrapper[4720]: I1013 17:24:47.458016 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" Oct 13 17:24:47 crc kubenswrapper[4720]: I1013 17:24:47.482661 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:47Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:47 crc kubenswrapper[4720]: I1013 17:24:47.498249 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:47 crc kubenswrapper[4720]: I1013 17:24:47.498290 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:47 crc kubenswrapper[4720]: I1013 17:24:47.498303 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:47 crc kubenswrapper[4720]: I1013 17:24:47.498319 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:47 crc kubenswrapper[4720]: I1013 17:24:47.498332 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:47Z","lastTransitionTime":"2025-10-13T17:24:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:24:47 crc kubenswrapper[4720]: I1013 17:24:47.511140 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea323a1a-c607-40cf-b61c-b7ff03399c45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88b8a46e2e2c2342bd22a4a9b42cadd0188ec83f23815773f6ffb46be79fdf83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7d875b0d7eb9f13bc697597a33f7196598ae679980b13842e92acf20477526f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e18d0e9df43d366a6e7088cc3d5bdc794882828f60f008eaf29e788e0141ac2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef2ba92cea782dd4fd31a511e1086777b58c00340014dff74ff6615b07bd10c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7977e20d8558810fb9f8f827830d0378de7d15c71340d396a95b506902c0962\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-13T17:24:34Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1013 17:24:28.736714 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1013 17:24:28.738546 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445610647/tls.crt::/tmp/serving-cert-445610647/tls.key\\\\\\\"\\\\nI1013 17:24:34.860768 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1013 17:24:34.864611 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1013 17:24:34.864632 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1013 17:24:34.864654 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1013 17:24:34.864660 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1013 17:24:34.871230 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1013 17:24:34.871248 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1013 17:24:34.871273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 17:24:34.871290 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 17:24:34.871305 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1013 17:24:34.871314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1013 17:24:34.871324 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1013 17:24:34.871335 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1013 17:24:34.872239 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd536f783e3f8f974f25772994c3d00a01c2ba66997a6e06236f1b83b890f8f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00ad2086cbe25df22e52552acfb410e252eb17a610267b47eb7cfc5dced1c9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00ad2086cbe25df22e52552acfb410e252eb17a610267b47eb7cfc5dced1c9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:47Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:47 crc kubenswrapper[4720]: I1013 17:24:47.525589 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lxmjt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b45ec2d-5bea-4007-a49f-224a866f93eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00868f87f1a3020a1db7e50377bbe90203d6e93799c8ec9f3100dd52a6c0cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xsrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lxmjt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:47Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:47 crc kubenswrapper[4720]: I1013 17:24:47.546902 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8064812e-b6aa-4f56-81c9-16154c00abad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c28fde07efa17d20ddecd1c8a0da09a6fa09e98a90b14718565c00aef9c0d25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bbd41d9b0518c8b060b1d01c335b6c9923e7196624fbd49326133faaa2e36ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd8a4dfb390a1643edb4eb4bd01970eaffd8dab0f748effac9e7c691ef4a3ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://013dfade47eccae4339fca0d379d914f63502613f87f1bc621113f7008634475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69d511a66a84fb0fa90f426bd4609c2b3ad915571802c6c024e0370d6e1683a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\
\\":\\\"cri-o://1aefa46ee989416f0e4fd05f6f12031da637a354fadd7f6caef75ddc2c74ddf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://709132dcc33ee0d4d48e7a21fd2471fd22c065819f93843026160310ecd43470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\
\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34b1fffbebfbd3599bed22fe3fe80129f5b704560c54d09e9ab5e9ea8bba4513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:47Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:47 crc kubenswrapper[4720]: I1013 17:24:47.558242 4720 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-bvtrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b35c333b-0d4e-4d9a-a8fe-a1c6f5fbbf75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03411ae3786161605ed0cb08a21661c7c7a4cc93f91320586547b8ffab3e90ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g97k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bvtrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:47Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:47 crc kubenswrapper[4720]: I1013 17:24:47.571405 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f856022da70859465022db433585bd8cad12f8cd2aeae7943491d3f7e5049256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:47Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:47 crc kubenswrapper[4720]: I1013 17:24:47.583268 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pmjlm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f85220d6-f940-4538-806a-2b26bacd7b09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6315723f6cd375a392c8c427a737021b7e01415c902514b09ee43407749caa34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7795z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:39Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pmjlm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:47Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:47 crc kubenswrapper[4720]: I1013 17:24:47.597691 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7baf94d8-250c-4568-ab1c-a1429b8caf70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435ba16189d6d19892aaab158512c1a84a72103779e3cc4b2c634de344583c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e566f3b66d404dd8da93bba1745b9e67cd63532330591f5b735f6826c07eae2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ec64662f56c2609ebac1383a78840ba4e0d9fb1fe10e666169285d3852e6b40\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://32d8f079c2674e0c6bea1837755db23bc3205da6a01ac1896d6293226d9c6fb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:47Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:47 crc kubenswrapper[4720]: I1013 17:24:47.600833 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:47 crc kubenswrapper[4720]: I1013 17:24:47.600872 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:47 crc kubenswrapper[4720]: I1013 17:24:47.600887 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:47 crc kubenswrapper[4720]: I1013 17:24:47.600906 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:47 crc kubenswrapper[4720]: I1013 17:24:47.600922 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:47Z","lastTransitionTime":"2025-10-13T17:24:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:24:47 crc kubenswrapper[4720]: I1013 17:24:47.614551 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:47Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:47 crc kubenswrapper[4720]: I1013 17:24:47.627605 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://705766b44d481300d0fd8cc654aa2dc9046733e5a71ad3a7ab789f85e737dcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a69b8825692cbbb7cce0a52b1ccb7b137ce6c9f8a8d788fff95bb7228cb40e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:47Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:47 crc kubenswrapper[4720]: I1013 17:24:47.638551 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f85e4aab30e7f49ef05e48cc22fc22e0291e32604a8f74ae1cc2ea57b5889e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:47Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:47 crc kubenswrapper[4720]: I1013 17:24:47.652039 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce442c80-fcde-4b79-b6f9-f8f25771dfd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10e7a946185d6502e37a1ab60ad1eca36cce991f230188d22b2d6e2ea554f920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4cjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://864e330267c8cec8487e1dcf4a50bbb2ba37d81d80361f0873e6bf69de1ed721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4cjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-htwnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:47Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:47 crc kubenswrapper[4720]: I1013 17:24:47.668856 4720 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5rgfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d7408e5-b529-4396-92b3-2fed275c3250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c62b3a707852ca08b9d89356832535447d4f46ec09ab4a94cb00a86b1c1c5b03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c62b3a707852ca08b9d89356832535447d4f46ec09ab4a94cb00a86b1c1c5b03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb704e899e1b5a0193271209940163a17a1dcb93a2fc5d447beb8a081799897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a168
8df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb704e899e1b5a0193271209940163a17a1dcb93a2fc5d447beb8a081799897b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f42ff97f0a5fb40b52d5445a00eb8f70f8028610b76baf33117b425b821b5ec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f42ff97f0a5fb40b52d5445a00eb8f70f8028610b76baf33117b425b821b5ec2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://471bb77042e72bcac4bb28dd26f6151a7154bea0b3ec92272e405e9b22bc0170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://471bb77042e72bcac4bb28dd26f6151a7154bea0b3ec92272e405e9b22bc0170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"
/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20cb8981843cc39e8722f560f7a7e75483c93a30edffde9835533f8dc5b50587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20cb8981843cc39e8722f560f7a7e75483c93a30edffde9835533f8dc5b50587\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79851ed8c84027d1bb9eccb69bed004ae1cdbe934245c66325eafe42b12ea808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79851ed8c84027d1bb9eccb69bed004ae1cdbe934245c66325eafe42b12ea808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5rgfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:47Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:47 crc kubenswrapper[4720]: I1013 17:24:47.680719 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce442c80-fcde-4b79-b6f9-f8f25771dfd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10e7a946185d6502e37a1ab60ad1eca36cce991f230188d22b2d6e2ea554f920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4cjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://864e330267c8cec8487e1dcf4a50bbb2ba37d81d80361f0873e6bf69de1ed721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4cjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-htwnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:47Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:47 crc kubenswrapper[4720]: I1013 17:24:47.696119 4720 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5rgfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d7408e5-b529-4396-92b3-2fed275c3250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://439dea1ed9724a215fa1aa1f4a4bd84a4bb998f2f86814f5f4eb46989242f11c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c62b3a707852ca08b9d89356832535447d4f46ec09ab4a94cb00a86b1c1c5b03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c62b3a707852ca08b9d89356832535447d4f46ec09ab4a94cb00a86b1c1c5b03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb704e899e1b5a0193271209940163a17a1dcb93a2fc5d447beb8a081799897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb704e899e1b5a0193271209940163a17a1dcb93a2fc5d447beb8a081799897b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f42ff97f0a5fb40b52d5445a00eb8f70f8028610b76baf33117b425b821b5ec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f42ff97f0a5fb40b52d5445a00eb8f70f8028610b76baf33117b425b821b5ec2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://471bb77042e72bcac4bb28dd26f6151a7154bea0b3ec92272e405e9b22bc0170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://471bb77042e72bcac4bb28dd26f6151a7154bea0b3ec92272e405e9b22bc0170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20cb8981843cc39e8722f560f7a7e75483c93a30edffde9835533f8dc5b50587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20cb8981843cc39e8722f560f7a7e75483c93a30edffde9835533f8dc5b50587\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79851ed8c84027d1bb9eccb69bed004ae1cdbe934245c66325eafe42b12ea808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79851ed8c84027d1bb9eccb69bed004ae1cdbe934245c66325eafe42b12ea808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5rgfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:47Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:47 crc kubenswrapper[4720]: I1013 17:24:47.703301 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:47 crc kubenswrapper[4720]: I1013 17:24:47.703356 4720 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:47 crc kubenswrapper[4720]: I1013 17:24:47.703373 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:47 crc kubenswrapper[4720]: I1013 17:24:47.703397 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:47 crc kubenswrapper[4720]: I1013 17:24:47.703416 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:47Z","lastTransitionTime":"2025-10-13T17:24:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:47 crc kubenswrapper[4720]: I1013 17:24:47.709559 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:47Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:47 crc kubenswrapper[4720]: I1013 17:24:47.720814 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:47Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:47 crc kubenswrapper[4720]: I1013 17:24:47.741677 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a1f1696-ed7f-4482-a300-ca25b68e09e3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2507a30017fbd9947ca456b84812f0b26baabf572768870f5a34b3c7699443cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc60672e282de4d3d6a2d8fd64306ec0fc0d032ade655f88afb0c0b9e0a8e589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9cf903094dae4e559bcef3e269240670b5d4693344d600779c394ce1e039b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06eea6556daedf913d87b8b0527b337070528b07cc026de878b4df332598a2d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b452d113d48729b6d4ab59f80138b15e7cb16cc16eef4ec9916901a067986d07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94a4c50dd473ea159477a3032070031c70d5a9217c054f826e3c0af6ae4df563\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\
\\":{\\\"containerID\\\":\\\"cri-o://94a4c50dd473ea159477a3032070031c70d5a9217c054f826e3c0af6ae4df563\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41221277cc046beb4578e4f6a0c7030df64342da1be667a920080a429836c657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41221277cc046beb4578e4f6a0c7030df64342da1be667a920080a429836c657\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d443225a5db6a200176bd16ba94dd92c1e81eef85d61cff623cfe7959077ef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d443225a5db6a200176bd16ba94dd92c1e81eef85d61cff623cfe7959077ef6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:47Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:47 crc kubenswrapper[4720]: I1013 17:24:47.763545 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8064812e-b6aa-4f56-81c9-16154c00abad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c28fde07efa17d20ddecd1c8a0da09a6fa09e98a90b14718565c00aef9c0d25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bbd41d9b0518c8b060b1d01c335b6c9923e7196624fbd49326133faaa2e36ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd8a4dfb390a1643edb4eb4bd01970eaffd8dab0f748effac9e7c691ef4a3ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"im
ageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://013dfade47eccae4339fca0d379d914f63502613f87f1bc621113f7008634475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69d511a66a84fb0fa90f426bd4609c2b3ad915571802c6c024e0370d6e1683a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1aefa46ee989416f0e4fd05f6f12031da637a354fadd7f6caef75ddc2c74ddf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\
\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://709132dcc33ee0d4d48e7a21fd2471fd22c065819f93843026160310ecd43470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34b1fffbebfbd3599bed22fe3fe80129f5b704560c54d09e9ab5e9ea8bba4513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:47Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:47 crc kubenswrapper[4720]: I1013 17:24:47.779911 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bvtrx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b35c333b-0d4e-4d9a-a8fe-a1c6f5fbbf75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03411ae3786161605ed0cb08a21661c7c7a4cc93f91320586547b8ffab3e90ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g97k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bvtrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:47Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:47 crc kubenswrapper[4720]: I1013 17:24:47.795890 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea323a1a-c607-40cf-b61c-b7ff03399c45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88b8a46e2e2c2342bd22a4a9b42cadd0188ec83f23815773f6ffb46be79fdf83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7d875b0d7eb9f13bc697597a33f7196598ae679980b13842e92acf20477526f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e18d0e9df43d366a6e7088cc3d5bdc794882828f60f008eaf29e788e0141ac2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef2ba92cea782dd4fd31a511e1086777b58c00340014dff74ff6615b07bd10c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7977e20d8558810fb9f8f827830d0378de7d15c71340d396a95b506902c0962\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-13T17:24:34Z\\\",\\\"message\\\":\\\"lling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1013 17:24:28.736714 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1013 17:24:28.738546 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445610647/tls.crt::/tmp/serving-cert-445610647/tls.key\\\\\\\"\\\\nI1013 17:24:34.860768 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1013 17:24:34.864611 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1013 17:24:34.864632 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1013 17:24:34.864654 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1013 17:24:34.864660 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1013 17:24:34.871230 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1013 17:24:34.871248 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1013 17:24:34.871273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 17:24:34.871290 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 17:24:34.871305 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1013 17:24:34.871314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1013 17:24:34.871324 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1013 17:24:34.871335 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1013 17:24:34.872239 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd536f783e3f8f974f25772994c3d00a01c2ba66997a6e06236f1b83b890f8f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00ad2086cbe25df22e52552acfb410e252eb17a610267b47eb7cfc5dced1c9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00ad2086cbe25df22e52552acfb410e252eb17a610267b47eb7cfc5dced1c9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:47Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:47 crc kubenswrapper[4720]: I1013 17:24:47.806438 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:47 crc kubenswrapper[4720]: I1013 17:24:47.806508 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:47 crc kubenswrapper[4720]: I1013 17:24:47.806525 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:47 crc kubenswrapper[4720]: I1013 17:24:47.806554 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:47 crc kubenswrapper[4720]: I1013 17:24:47.806572 4720 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:47Z","lastTransitionTime":"2025-10-13T17:24:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:47 crc kubenswrapper[4720]: I1013 17:24:47.811160 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lxmjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b45ec2d-5bea-4007-a49f-224a866f93eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00868f87f1a3020a1db7e50377bbe90203d6e93799c8ec9f3100dd52a6c0cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube
rnetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xsrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lxmjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:47Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:47 crc kubenswrapper[4720]: I1013 17:24:47.824477 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:47Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:47 crc kubenswrapper[4720]: I1013 17:24:47.838984 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://705766b44d481300d0fd8cc654aa2dc9046733e5a71ad3a7ab789f85e737dcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a69b8825692cbbb7cce0a52b1ccb7b137ce6c9f8a8d788fff95bb7228cb40e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:47Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:47 crc kubenswrapper[4720]: I1013 17:24:47.852467 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f85e4aab30e7f49ef05e48cc22fc22e0291e32604a8f74ae1cc2ea57b5889e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:47Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:47 crc kubenswrapper[4720]: I1013 17:24:47.871506 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f856022da70859465022db433585bd8cad12f8cd2aeae7943491d3f7e5049256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:47Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:47 crc kubenswrapper[4720]: I1013 17:24:47.887524 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pmjlm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f85220d6-f940-4538-806a-2b26bacd7b09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6315723f6cd375a392c8c427a737021b7e01415c902514b09ee43407749caa34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7795z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:39Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pmjlm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:47Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:47 crc kubenswrapper[4720]: I1013 17:24:47.907549 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7baf94d8-250c-4568-ab1c-a1429b8caf70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435ba16189d6d19892aaab158512c1a84a72103779e3cc4b2c634de344583c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e566f3b66d404dd8da93bba1745b9e67cd63532330591f5b735f6826c07eae2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ec64662f56c2609ebac1383a78840ba4e0d9fb1fe10e666169285d3852e6b40\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://32d8f079c2674e0c6bea1837755db23bc3205da6a01ac1896d6293226d9c6fb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:47Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:47 crc kubenswrapper[4720]: I1013 17:24:47.909324 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:47 crc kubenswrapper[4720]: I1013 17:24:47.909385 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:47 crc kubenswrapper[4720]: I1013 17:24:47.909405 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:47 crc kubenswrapper[4720]: I1013 17:24:47.909434 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:47 crc kubenswrapper[4720]: I1013 17:24:47.909453 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:47Z","lastTransitionTime":"2025-10-13T17:24:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:24:48 crc kubenswrapper[4720]: I1013 17:24:48.012552 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:48 crc kubenswrapper[4720]: I1013 17:24:48.012644 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:48 crc kubenswrapper[4720]: I1013 17:24:48.012656 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:48 crc kubenswrapper[4720]: I1013 17:24:48.012672 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:48 crc kubenswrapper[4720]: I1013 17:24:48.012685 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:48Z","lastTransitionTime":"2025-10-13T17:24:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:48 crc kubenswrapper[4720]: I1013 17:24:48.116008 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:48 crc kubenswrapper[4720]: I1013 17:24:48.116061 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:48 crc kubenswrapper[4720]: I1013 17:24:48.116077 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:48 crc kubenswrapper[4720]: I1013 17:24:48.116099 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:48 crc kubenswrapper[4720]: I1013 17:24:48.116117 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:48Z","lastTransitionTime":"2025-10-13T17:24:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:48 crc kubenswrapper[4720]: I1013 17:24:48.218966 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:48 crc kubenswrapper[4720]: I1013 17:24:48.219043 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:48 crc kubenswrapper[4720]: I1013 17:24:48.219055 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:48 crc kubenswrapper[4720]: I1013 17:24:48.219071 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:48 crc kubenswrapper[4720]: I1013 17:24:48.219083 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:48Z","lastTransitionTime":"2025-10-13T17:24:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:24:48 crc kubenswrapper[4720]: I1013 17:24:48.321842 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:48 crc kubenswrapper[4720]: I1013 17:24:48.321890 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:48 crc kubenswrapper[4720]: I1013 17:24:48.321906 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:48 crc kubenswrapper[4720]: I1013 17:24:48.321928 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:48 crc kubenswrapper[4720]: I1013 17:24:48.321945 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:48Z","lastTransitionTime":"2025-10-13T17:24:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:48 crc kubenswrapper[4720]: I1013 17:24:48.407749 4720 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 17:24:48 crc kubenswrapper[4720]: I1013 17:24:48.408414 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" Oct 13 17:24:48 crc kubenswrapper[4720]: I1013 17:24:48.425480 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:48 crc kubenswrapper[4720]: I1013 17:24:48.425527 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:48 crc kubenswrapper[4720]: I1013 17:24:48.425547 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:48 crc kubenswrapper[4720]: I1013 17:24:48.425584 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:48 crc kubenswrapper[4720]: I1013 17:24:48.425603 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:48Z","lastTransitionTime":"2025-10-13T17:24:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:24:48 crc kubenswrapper[4720]: I1013 17:24:48.444445 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" Oct 13 17:24:48 crc kubenswrapper[4720]: I1013 17:24:48.463638 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce442c80-fcde-4b79-b6f9-f8f25771dfd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10e7a946185d6502e37a1ab60ad1eca36cce991f230188d22b2d6e2ea554f920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4cjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://864e330267c8cec8487e1dcf4a50bbb2ba37d81d80361f0873e6bf69de1ed721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4cjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-htwnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:48Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:48 crc kubenswrapper[4720]: I1013 17:24:48.481527 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5rgfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d7408e5-b529-4396-92b3-2fed275c3250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://439dea1ed9724a215fa1aa1f4a4bd84a4bb998f2f86814f5f4eb46989242f11c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c62b3a707852ca08b9d89356832535447d4f46ec09ab4a94cb00a86b1c1c5b03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c62b3a707852ca08b9d89356832535447d4f46ec09ab4a94cb00a86b1c1c5b03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb704e899e1b5a0193271209940163a17a1dcb93a2fc5d447beb8a081799897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb704e899e1b5a0193271209940163a17a1dcb93a2fc5d447beb8a081799897b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f42ff97f0a5fb40b52d5445a00eb8f70f8028610b76baf33117b425b821b5ec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f42ff97f0a5fb40b52d5445a00eb8f70f8028610b76baf33117b425b821b5ec2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://471bb77042e72bcac4bb28dd26f6151a7154bea0b3ec92272e405e9b22bc0170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://471bb77042e72bcac4bb28dd26f6151a7154bea0b3ec92272e405e9b22bc0170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20cb8981843cc39e8722f560f7a7e75483c93a30edffde9835533f8dc5b50587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20cb8981843cc39e8722f560f7a7e75483c93a30edffde9835533f8dc5b50587\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79851ed8c84027d1bb9eccb69bed004ae1cdbe934245c66325eafe42b12ea808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79851ed8c84027d1bb9eccb69bed004ae1cdbe934245c66325eafe42b12ea808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5rgfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:48Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:48 crc kubenswrapper[4720]: I1013 17:24:48.497970 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:48Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:48 crc kubenswrapper[4720]: I1013 17:24:48.516501 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:48Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:48 crc kubenswrapper[4720]: I1013 17:24:48.528872 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:48 crc kubenswrapper[4720]: I1013 17:24:48.528919 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:48 crc kubenswrapper[4720]: I1013 17:24:48.528936 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:48 crc kubenswrapper[4720]: I1013 17:24:48.528961 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:48 crc kubenswrapper[4720]: I1013 17:24:48.528978 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:48Z","lastTransitionTime":"2025-10-13T17:24:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:24:48 crc kubenswrapper[4720]: I1013 17:24:48.540014 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a1f1696-ed7f-4482-a300-ca25b68e09e3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2507a30017fbd9947ca456b84812f0b26baabf572768870f5a34b3c7699443cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc60672e282de4d3d6a2d8fd64306ec0fc0d032ade655f88afb0c0b9e0a8e589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9cf903094dae4e559bcef3e269240670b5d4693344d600779c394ce1e039b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06eea6556daedf913d87b8b0527b337070528b07cc026de878b4df332598a2d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b452d113d48729b6d4ab59f80138b15e7cb16cc16eef4ec9916901a067986d07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94a4c50dd473ea159477a3032070031c70d5a9217c054f826e3c0af6ae4df563\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94a4c50dd473ea159477a3032070031c70d5a9217c054f826e3c0af6ae4df563\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41221277cc046beb4578e4f6a0c7030df64342da1be667a920080a429836c657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41221277cc046beb4578e4f6a0c7030df64342da1be667a920080a429836c657\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d443225a5db6a200176bd16ba94dd92c1e81eef85d61cff623cfe7959077ef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d443225a5db6a200176bd16ba94dd92c1e81eef85d61cff623cfe7959077ef6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:48Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:48 crc kubenswrapper[4720]: I1013 17:24:48.570517 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8064812e-b6aa-4f56-81c9-16154c00abad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c28fde07efa17d20ddecd1c8a0da09a6fa09e98a90b14718565c00aef9c0d25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bbd41d9b0518c8b060b1d01c335b6c9923e7196624fbd49326133faaa2e36ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd8a4dfb390a1643edb4eb4bd01970eaffd8dab0f748effac9e7c691ef4a3ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://013dfade47eccae4339fca0d379d914f63502613f87f1bc621113f7008634475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69d511a66a84fb0fa90f426bd4609c2b3ad915571802c6c024e0370d6e1683a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1aefa46ee989416f0e4fd05f6f12031da637a354fadd7f6caef75ddc2c74ddf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://709132dcc33ee0d4d48e7a21fd2471fd22c06581
9f93843026160310ecd43470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34b1fffbebfbd3599bed22fe3fe80129f5b704560c54d09e9ab5e9ea8bba4513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:48Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:48 crc kubenswrapper[4720]: I1013 17:24:48.583046 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bvtrx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b35c333b-0d4e-4d9a-a8fe-a1c6f5fbbf75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03411ae3786161605ed0cb08a21661c7c7a4cc93f91320586547b8ffab3e90ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g97k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bvtrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:48Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:48 crc kubenswrapper[4720]: I1013 17:24:48.599083 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea323a1a-c607-40cf-b61c-b7ff03399c45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88b8a46e2e2c2342bd22a4a9b42cadd0188ec83f23815773f6ffb46be79fdf83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7d875b0d7eb9f13bc697597a33f7196598ae679980b13842e92acf20477526f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e18d0e9df43d366a6e7088cc3d5bdc794882828f60f008eaf29e788e0141ac2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef2ba92cea782dd4fd31a511e1086777b58c00340014dff74ff6615b07bd10c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7977e20d8558810fb9f8f827830d0378de7d15c71340d396a95b506902c0962\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-13T17:24:34Z\\\",\\\"message\\\":\\\"lling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1013 17:24:28.736714 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1013 17:24:28.738546 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445610647/tls.crt::/tmp/serving-cert-445610647/tls.key\\\\\\\"\\\\nI1013 17:24:34.860768 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1013 17:24:34.864611 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1013 17:24:34.864632 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1013 17:24:34.864654 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1013 17:24:34.864660 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1013 17:24:34.871230 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1013 17:24:34.871248 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1013 17:24:34.871273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 17:24:34.871290 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 17:24:34.871305 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1013 17:24:34.871314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1013 17:24:34.871324 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1013 17:24:34.871335 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1013 17:24:34.872239 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd536f783e3f8f974f25772994c3d00a01c2ba66997a6e06236f1b83b890f8f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00ad2086cbe25df22e52552acfb410e252eb17a610267b47eb7cfc5dced1c9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00ad2086cbe25df22e52552acfb410e252eb17a610267b47eb7cfc5dced1c9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:48Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:48 crc kubenswrapper[4720]: I1013 17:24:48.613376 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lxmjt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b45ec2d-5bea-4007-a49f-224a866f93eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00868f87f1a3020a1db7e50377bbe90203d6e93799c8ec9f3100dd52a6c0cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xsrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lxmjt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:48Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:48 crc kubenswrapper[4720]: I1013 17:24:48.626398 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:48Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:48 crc kubenswrapper[4720]: I1013 17:24:48.631679 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:48 crc kubenswrapper[4720]: I1013 17:24:48.631882 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:48 crc kubenswrapper[4720]: I1013 17:24:48.632069 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:48 crc kubenswrapper[4720]: I1013 17:24:48.632297 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:48 crc kubenswrapper[4720]: I1013 17:24:48.632477 4720 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:48Z","lastTransitionTime":"2025-10-13T17:24:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:48 crc kubenswrapper[4720]: I1013 17:24:48.647984 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://705766b44d481300d0fd8cc654aa2dc9046733e5a71ad3a7ab789f85e737dcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a69b8825692cbbb7cce0a52b1ccb7b137ce6c9f8a8d788fff95bb7228cb40e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:48Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:48 crc kubenswrapper[4720]: I1013 17:24:48.662678 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f85e4aab30e7f49ef05e48cc22fc22e0291e32604a8f74ae1cc2ea57b5889e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:48Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:48 crc kubenswrapper[4720]: I1013 17:24:48.683085 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f856022da70859465022db433585bd8cad12f8cd2aeae7943491d3f7e5049256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:48Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:48 crc kubenswrapper[4720]: I1013 17:24:48.697788 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pmjlm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f85220d6-f940-4538-806a-2b26bacd7b09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6315723f6cd375a392c8c427a737021b7e01415c902514b09ee43407749caa34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7795z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:39Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pmjlm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:48Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:48 crc kubenswrapper[4720]: I1013 17:24:48.711892 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7baf94d8-250c-4568-ab1c-a1429b8caf70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435ba16189d6d19892aaab158512c1a84a72103779e3cc4b2c634de344583c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e566f3b66d404dd8da93bba1745b9e67cd63532330591f5b735f6826c07eae2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ec64662f56c2609ebac1383a78840ba4e0d9fb1fe10e666169285d3852e6b40\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://32d8f079c2674e0c6bea1837755db23bc3205da6a01ac1896d6293226d9c6fb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:48Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:48 crc kubenswrapper[4720]: I1013 17:24:48.734369 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:48 crc kubenswrapper[4720]: I1013 17:24:48.734417 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:48 crc kubenswrapper[4720]: I1013 17:24:48.734428 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:48 crc kubenswrapper[4720]: I1013 17:24:48.734445 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:48 crc kubenswrapper[4720]: I1013 17:24:48.734456 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:48Z","lastTransitionTime":"2025-10-13T17:24:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:24:48 crc kubenswrapper[4720]: I1013 17:24:48.836682 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:48 crc kubenswrapper[4720]: I1013 17:24:48.836887 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:48 crc kubenswrapper[4720]: I1013 17:24:48.836951 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:48 crc kubenswrapper[4720]: I1013 17:24:48.837037 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:48 crc kubenswrapper[4720]: I1013 17:24:48.837097 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:48Z","lastTransitionTime":"2025-10-13T17:24:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:48 crc kubenswrapper[4720]: I1013 17:24:48.939786 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:48 crc kubenswrapper[4720]: I1013 17:24:48.940085 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:48 crc kubenswrapper[4720]: I1013 17:24:48.940112 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:48 crc kubenswrapper[4720]: I1013 17:24:48.940153 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:48 crc kubenswrapper[4720]: I1013 17:24:48.940174 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:48Z","lastTransitionTime":"2025-10-13T17:24:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:49 crc kubenswrapper[4720]: I1013 17:24:49.043714 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:49 crc kubenswrapper[4720]: I1013 17:24:49.043983 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:49 crc kubenswrapper[4720]: I1013 17:24:49.044163 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:49 crc kubenswrapper[4720]: I1013 17:24:49.044363 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:49 crc kubenswrapper[4720]: I1013 17:24:49.044503 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:49Z","lastTransitionTime":"2025-10-13T17:24:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:24:49 crc kubenswrapper[4720]: I1013 17:24:49.148115 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:49 crc kubenswrapper[4720]: I1013 17:24:49.148452 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:49 crc kubenswrapper[4720]: I1013 17:24:49.148667 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:49 crc kubenswrapper[4720]: I1013 17:24:49.148814 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:49 crc kubenswrapper[4720]: I1013 17:24:49.148929 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:49Z","lastTransitionTime":"2025-10-13T17:24:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:49 crc kubenswrapper[4720]: I1013 17:24:49.168565 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 17:24:49 crc kubenswrapper[4720]: E1013 17:24:49.168729 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 13 17:24:49 crc kubenswrapper[4720]: I1013 17:24:49.169221 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 17:24:49 crc kubenswrapper[4720]: E1013 17:24:49.169288 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 13 17:24:49 crc kubenswrapper[4720]: I1013 17:24:49.169340 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 17:24:49 crc kubenswrapper[4720]: E1013 17:24:49.169383 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 13 17:24:49 crc kubenswrapper[4720]: I1013 17:24:49.251748 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:49 crc kubenswrapper[4720]: I1013 17:24:49.251799 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:49 crc kubenswrapper[4720]: I1013 17:24:49.251811 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:49 crc kubenswrapper[4720]: I1013 17:24:49.251831 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:49 crc kubenswrapper[4720]: I1013 17:24:49.251843 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:49Z","lastTransitionTime":"2025-10-13T17:24:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:49 crc kubenswrapper[4720]: I1013 17:24:49.355124 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:49 crc kubenswrapper[4720]: I1013 17:24:49.355161 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:49 crc kubenswrapper[4720]: I1013 17:24:49.355172 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:49 crc kubenswrapper[4720]: I1013 17:24:49.355204 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:49 crc kubenswrapper[4720]: I1013 17:24:49.355214 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:49Z","lastTransitionTime":"2025-10-13T17:24:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Oct 13 17:24:49 crc kubenswrapper[4720]: I1013 17:24:49.410751 4720 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Oct 13 17:24:49 crc kubenswrapper[4720]: I1013 17:24:49.457410 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 17:24:49 crc kubenswrapper[4720]: I1013 17:24:49.457451 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 17:24:49 crc kubenswrapper[4720]: I1013 17:24:49.457459 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 17:24:49 crc kubenswrapper[4720]: I1013 17:24:49.457473 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 17:24:49 crc kubenswrapper[4720]: I1013 17:24:49.457483 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:49Z","lastTransitionTime":"2025-10-13T17:24:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 17:24:49 crc kubenswrapper[4720]: I1013 17:24:49.560510 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 17:24:49 crc kubenswrapper[4720]: I1013 17:24:49.560557 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 17:24:49 crc kubenswrapper[4720]: I1013 17:24:49.560569 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 17:24:49 crc kubenswrapper[4720]: I1013 17:24:49.560593 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 17:24:49 crc kubenswrapper[4720]: I1013 17:24:49.560602 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:49Z","lastTransitionTime":"2025-10-13T17:24:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 17:24:49 crc kubenswrapper[4720]: I1013 17:24:49.664406 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 17:24:49 crc kubenswrapper[4720]: I1013 17:24:49.664460 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 17:24:49 crc kubenswrapper[4720]: I1013 17:24:49.664476 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 17:24:49 crc kubenswrapper[4720]: I1013 17:24:49.664499 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 17:24:49 crc kubenswrapper[4720]: I1013 17:24:49.664515 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:49Z","lastTransitionTime":"2025-10-13T17:24:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 17:24:49 crc kubenswrapper[4720]: I1013 17:24:49.767168 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 17:24:49 crc kubenswrapper[4720]: I1013 17:24:49.767223 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 17:24:49 crc kubenswrapper[4720]: I1013 17:24:49.767236 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 17:24:49 crc kubenswrapper[4720]: I1013 17:24:49.767253 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 17:24:49 crc kubenswrapper[4720]: I1013 17:24:49.767262 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:49Z","lastTransitionTime":"2025-10-13T17:24:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 17:24:49 crc kubenswrapper[4720]: I1013 17:24:49.869304 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 17:24:49 crc kubenswrapper[4720]: I1013 17:24:49.869346 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 17:24:49 crc kubenswrapper[4720]: I1013 17:24:49.869355 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 17:24:49 crc kubenswrapper[4720]: I1013 17:24:49.869370 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 17:24:49 crc kubenswrapper[4720]: I1013 17:24:49.869381 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:49Z","lastTransitionTime":"2025-10-13T17:24:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 17:24:49 crc kubenswrapper[4720]: I1013 17:24:49.971825 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 17:24:49 crc kubenswrapper[4720]: I1013 17:24:49.971868 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 17:24:49 crc kubenswrapper[4720]: I1013 17:24:49.971878 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 17:24:49 crc kubenswrapper[4720]: I1013 17:24:49.971897 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 17:24:49 crc kubenswrapper[4720]: I1013 17:24:49.971908 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:49Z","lastTransitionTime":"2025-10-13T17:24:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 17:24:50 crc kubenswrapper[4720]: I1013 17:24:50.073963 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 17:24:50 crc kubenswrapper[4720]: I1013 17:24:50.074029 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 17:24:50 crc kubenswrapper[4720]: I1013 17:24:50.074047 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 17:24:50 crc kubenswrapper[4720]: I1013 17:24:50.074072 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 17:24:50 crc kubenswrapper[4720]: I1013 17:24:50.074089 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:50Z","lastTransitionTime":"2025-10-13T17:24:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 17:24:50 crc kubenswrapper[4720]: I1013 17:24:50.176429 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 17:24:50 crc kubenswrapper[4720]: I1013 17:24:50.176491 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 17:24:50 crc kubenswrapper[4720]: I1013 17:24:50.176508 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 17:24:50 crc kubenswrapper[4720]: I1013 17:24:50.176530 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 17:24:50 crc kubenswrapper[4720]: I1013 17:24:50.176547 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:50Z","lastTransitionTime":"2025-10-13T17:24:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 17:24:50 crc kubenswrapper[4720]: I1013 17:24:50.278801 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 17:24:50 crc kubenswrapper[4720]: I1013 17:24:50.278856 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 17:24:50 crc kubenswrapper[4720]: I1013 17:24:50.278873 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 17:24:50 crc kubenswrapper[4720]: I1013 17:24:50.278896 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 17:24:50 crc kubenswrapper[4720]: I1013 17:24:50.278912 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:50Z","lastTransitionTime":"2025-10-13T17:24:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 17:24:50 crc kubenswrapper[4720]: I1013 17:24:50.381130 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 17:24:50 crc kubenswrapper[4720]: I1013 17:24:50.381181 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 17:24:50 crc kubenswrapper[4720]: I1013 17:24:50.381214 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 17:24:50 crc kubenswrapper[4720]: I1013 17:24:50.381231 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 17:24:50 crc kubenswrapper[4720]: I1013 17:24:50.381240 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:50Z","lastTransitionTime":"2025-10-13T17:24:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
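[Editor's note] The cycle above repeats roughly every 100 ms: the kubelet re-records the node conditions and stays NotReady for one reason only, the CNI conf directory named in the message is empty. The check the runtime is reporting amounts to scanning /etc/kubernetes/cni/net.d/ for a usable network config. The sketch below is a minimal stdlib-Go reproduction of that condition for triage purposes; it is not the kubelet's actual implementation, which delegates the test to the CRI runtime and the CNI config loader.

// cnicheck.go - minimal sketch of the "no CNI configuration file" condition.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d" // path taken from the log message above
	entries, err := os.ReadDir(confDir)
	if err != nil {
		fmt.Println("cannot read CNI conf dir:", err)
		return
	}
	found := false
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // extensions the CNI config loader accepts
			found = true
			fmt.Println("CNI config present:", e.Name())
		}
	}
	if !found {
		// This is the state the kubelet keeps reporting: NetworkReady stays
		// false until the network provider (here OVN-Kubernetes) writes a config.
		fmt.Println("no CNI configuration file in", confDir)
	}
}

Once ovnkube-node writes its conflist into that directory, the runtime flips NetworkReady to true and the NodeNotReady cycle stops on its own.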
Oct 13 17:24:50 crc kubenswrapper[4720]: I1013 17:24:50.416044 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pn6lz_8064812e-b6aa-4f56-81c9-16154c00abad/ovnkube-controller/0.log"
Oct 13 17:24:50 crc kubenswrapper[4720]: I1013 17:24:50.419463 4720 generic.go:334] "Generic (PLEG): container finished" podID="8064812e-b6aa-4f56-81c9-16154c00abad" containerID="709132dcc33ee0d4d48e7a21fd2471fd22c065819f93843026160310ecd43470" exitCode=1
Oct 13 17:24:50 crc kubenswrapper[4720]: I1013 17:24:50.419518 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" event={"ID":"8064812e-b6aa-4f56-81c9-16154c00abad","Type":"ContainerDied","Data":"709132dcc33ee0d4d48e7a21fd2471fd22c065819f93843026160310ecd43470"}
Oct 13 17:24:50 crc kubenswrapper[4720]: I1013 17:24:50.420916 4720 scope.go:117] "RemoveContainer" containerID="709132dcc33ee0d4d48e7a21fd2471fd22c065819f93843026160310ecd43470"
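[Editor's note] The four records above are the PLEG path for a crashed container: relist notices that ovnkube-controller (container 709132dc...) exited with code 1, the sync loop emits ContainerDied, and scope.go queues the dead container for removal ahead of the restart. When scanning a journal this dense, a small filter for non-zero exit codes makes such crashes easy to spot. The helper below is a hypothetical triage tool, stdlib Go only, not part of the kubelet.

// plegscan.go - print non-zero container exits from kubelet journal lines.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Matches the klog key-value layout of "Generic (PLEG): container finished".
var finished = regexp.MustCompile(`container finished" podID="([^"]+)" containerID="([^"]+)" exitCode=(\d+)`)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be very long
	for sc.Scan() {
		if m := finished.FindStringSubmatch(sc.Text()); m != nil && m[3] != "0" {
			fmt.Printf("pod %s: container %.12s exited with code %s\n", m[1], m[2], m[3])
		}
	}
}

Example invocation (assuming the kubelet unit name used on this node): journalctl -u kubelet --no-pager | go run plegscan.go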
Oct 13 17:24:50 crc kubenswrapper[4720]: I1013 17:24:50.439131 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:50Z is after 2025-08-24T17:21:41Z"
Oct 13 17:24:50 crc kubenswrapper[4720]: I1013 17:24:50.453827 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:50Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:50 crc kubenswrapper[4720]: I1013 17:24:50.483595 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:50 crc kubenswrapper[4720]: I1013 17:24:50.483632 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:50 crc kubenswrapper[4720]: I1013 17:24:50.483642 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:50 crc kubenswrapper[4720]: I1013 17:24:50.483656 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:50 crc kubenswrapper[4720]: I1013 17:24:50.483665 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:50Z","lastTransitionTime":"2025-10-13T17:24:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:24:50 crc kubenswrapper[4720]: I1013 17:24:50.486235 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a1f1696-ed7f-4482-a300-ca25b68e09e3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2507a30017fbd9947ca456b84812f0b26baabf572768870f5a34b3c7699443cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc60672e282de4d3d6a2d8fd64306ec0fc0d032ade655f88afb0c0b9e0a8e589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9cf903094dae4e559bcef3e269240670b5d4693344d600779c394ce1e039b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06eea6556daedf913d87b8b0527b337070528b07cc026de878b4df332598a2d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b452d113d48729b6d4ab59f80138b15e7cb16cc16eef4ec9916901a067986d07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94a4c50dd473ea159477a3032070031c70d5a9217c054f826e3c0af6ae4df563\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94a4c50dd473ea159477a3032070031c70d5a9217c054f826e3c0af6ae4df563\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41221277cc046beb4578e4f6a0c7030df64342da1be667a920080a429836c657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41221277cc046beb4578e4f6a0c7030df64342da1be667a920080a429836c657\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d443225a5db6a200176bd16ba94dd92c1e81eef85d61cff623cfe7959077ef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d443225a5db6a200176bd16ba94dd92c1e81eef85d61cff623cfe7959077ef6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:50Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:50 crc kubenswrapper[4720]: I1013 17:24:50.506770 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lxmjt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b45ec2d-5bea-4007-a49f-224a866f93eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00868f87f1a3020a1db7e50377bbe90203d6e93799c8ec9f3100dd52a6c0cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xsrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lxmjt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:50Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:50 crc kubenswrapper[4720]: I1013 17:24:50.536698 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8064812e-b6aa-4f56-81c9-16154c00abad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c28fde07efa17d20ddecd1c8a0da09a6fa09e98a90b14718565c00aef9c0d25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bbd41d9b0518c8b060b1d01c335b6c9923e7196624fbd49326133faaa2e36ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd8a4dfb390a1643edb4eb4bd01970eaffd8dab0f748effac9e7c691ef4a3ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://013dfade47eccae4339fca0d379d914f63502613f87f1bc621113f7008634475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69d511a66a84fb0fa90f426bd4609c2b3ad915571802c6c024e0370d6e1683a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1aefa
46ee989416f0e4fd05f6f12031da637a354fadd7f6caef75ddc2c74ddf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://709132dcc33ee0d4d48e7a21fd2471fd22c065819f93843026160310ecd43470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://709132dcc33ee0d4d48e7a21fd2471fd22c065819f93843026160310ecd43470\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-13T17:24:49Z\\\",\\\"message\\\":\\\"3 17:24:49.925356 6051 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1013 17:24:49.925370 6051 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1013 17:24:49.925416 6051 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1013 17:24:49.925548 6051 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1013 17:24:49.925565 6051 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1013 17:24:49.925666 6051 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1013 17:24:49.925716 6051 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1013 17:24:49.925763 6051 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1013 17:24:49.925773 6051 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1013 17:24:49.925805 6051 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1013 17:24:49.925830 6051 factory.go:656] Stopping watch factory\\\\nI1013 17:24:49.925827 6051 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1013 17:24:49.925847 6051 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1013 17:24:49.925860 6051 handler.go:208] Removed *v1.Node event handler 2\\\\nI1013 17:24:49.925859 6051 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1013 17:24:49.925872 6051 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34b1fffbebfbd3599bed22fe3fe80129f5b704560c54d09e9ab5e9ea8bba4513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:50Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:50 crc kubenswrapper[4720]: I1013 17:24:50.548791 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bvtrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b35c333b-0d4e-4d9a-a8fe-a1c6f5fbbf75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03411ae3786161605ed0cb08a21661c7c7a4cc93f91320586547b8ffab3e90ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g97k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bvtrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:50Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:50 crc kubenswrapper[4720]: I1013 17:24:50.568059 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea323a1a-c607-40cf-b61c-b7ff03399c45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88b8a46e2e2c2342bd22a4a9b42cadd0188ec83f23815773f6ffb46be79fdf83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7d875b0d7eb9f13bc697597a33f7196598ae679980b13842e92acf20477526f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-
pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e18d0e9df43d366a6e7088cc3d5bdc794882828f60f008eaf29e788e0141ac2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef2ba92cea782dd4fd31a511e1086777b58c00340014dff74ff6615b07bd10c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7977e20d8558810fb9f8f827830d0378de7d15c71340d396a95b506902c0962\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-13T17:24:34Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1013 17:24:28.736714 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1013 17:24:28.738546 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445610647/tls.crt::/tmp/serving-cert-445610647/tls.key\\\\\\\"\\\\nI1013 17:24:34.860768 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1013 17:24:34.864611 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1013 17:24:34.864632 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1013 17:24:34.864654 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1013 17:24:34.864660 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1013 17:24:34.871230 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1013 17:24:34.871248 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1013 17:24:34.871273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 17:24:34.871290 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 17:24:34.871305 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1013 17:24:34.871314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1013 17:24:34.871324 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1013 17:24:34.871335 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1013 17:24:34.872239 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd536f783e3f8f974f25772994c3d00a01c2ba66997a6e06236f1b83b890f8f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00ad2086cbe25df22e52552acfb410e252eb17a610267b47eb7cfc5dced1c9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00ad2086cbe25df22e52552acfb410e252eb17a610267b47eb7cfc5dced1c9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:50Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:50 crc kubenswrapper[4720]: I1013 17:24:50.586016 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:50 crc kubenswrapper[4720]: I1013 17:24:50.586086 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:50 crc kubenswrapper[4720]: I1013 17:24:50.586109 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:50 crc kubenswrapper[4720]: I1013 17:24:50.586139 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:50 crc kubenswrapper[4720]: I1013 
17:24:50.586165 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:50Z","lastTransitionTime":"2025-10-13T17:24:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:50 crc kubenswrapper[4720]: I1013 17:24:50.587107 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:50Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:50 crc kubenswrapper[4720]: I1013 17:24:50.609047 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://705766b44d481300d0fd8cc654aa2dc9046733e5a71ad3a7ab789f85e737dcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a69b8825692cbbb7cce0a52b1ccb7b137ce6c9f8a8d788fff95bb7228cb40e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:50Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:50 crc kubenswrapper[4720]: I1013 17:24:50.628777 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f85e4aab30e7f49ef05e48cc22fc22e0291e32604a8f74ae1cc2ea57b5889e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:50Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:50 crc kubenswrapper[4720]: I1013 17:24:50.648851 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f856022da70859465022db433585bd8cad12f8cd2aeae7943491d3f7e5049256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:50Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:50 crc kubenswrapper[4720]: I1013 17:24:50.663911 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pmjlm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f85220d6-f940-4538-806a-2b26bacd7b09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6315723f6cd375a392c8c427a737021b7e01415c902514b09ee43407749caa34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7795z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:39Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pmjlm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:50Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:50 crc kubenswrapper[4720]: I1013 17:24:50.679704 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7baf94d8-250c-4568-ab1c-a1429b8caf70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435ba16189d6d19892aaab158512c1a84a72103779e3cc4b2c634de344583c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e566f3b66d404dd8da93bba1745b9e67cd63532330591f5b735f6826c07eae2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ec64662f56c2609ebac1383a78840ba4e0d9fb1fe10e666169285d3852e6b40\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://32d8f079c2674e0c6bea1837755db23bc3205da6a01ac1896d6293226d9c6fb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:50Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:50 crc kubenswrapper[4720]: I1013 17:24:50.689103 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:50 crc kubenswrapper[4720]: I1013 17:24:50.689151 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:50 crc kubenswrapper[4720]: I1013 17:24:50.689164 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:50 crc kubenswrapper[4720]: I1013 17:24:50.689180 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:50 crc kubenswrapper[4720]: I1013 17:24:50.689216 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:50Z","lastTransitionTime":"2025-10-13T17:24:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:24:50 crc kubenswrapper[4720]: I1013 17:24:50.699217 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5rgfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d7408e5-b529-4396-92b3-2fed275c3250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://439dea1ed9724a215fa1aa1f4a4bd84a4bb998f2f86814f5f4eb46989242f11c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c62b3a707852ca08b9d89356832535447d4f46ec09ab4a94cb00a86b1c1c5b03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c62b3a707852ca08b9d89356832535447d4f46ec09ab4a94cb00a86b1c1c5b03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb704e899e1b5a0193271209940163a17a1dcb93a2fc5d447beb8a081799897b\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb704e899e1b5a0193271209940163a17a1dcb93a2fc5d447beb8a081799897b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f42ff97f0a5fb40b52d5445a00eb8f70f8028610b76baf33117b425b821b5ec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f42ff97f0a5fb40b52d5445a00eb8f70f8028610b76baf33117b425b821b5ec2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://471bb77042e72bcac4bb28dd26f6151a7154bea0b3ec92272e405e9b22bc0170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://471bb77042e72bcac4bb28dd26f6151a7154bea0b3ec92272e405e9b22bc0170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20cb8981843cc39e8722f560f7a7e75483c93a30edffde9835533f8dc5b50587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20cb8981843cc39e8722f560f7a7e75483c93a30edffde9835533f8dc5b50587\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79851ed8c84027d1bb9eccb69bed004ae1cdbe934245c66325eafe42b12ea808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79851ed8c84027d1bb9eccb69bed004ae1cdbe934245c66325eafe42b12ea808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5rgfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:50Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:50 crc kubenswrapper[4720]: I1013 17:24:50.713881 4720 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-htwnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce442c80-fcde-4b79-b6f9-f8f25771dfd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10e7a946185d6502e37a1ab60ad1eca36cce991f230188d22b2d6e2ea554f920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4cjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://864e330267c8cec8487e1dcf4a50bbb2ba37d81d80361f0873e6bf69de1ed721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4cjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-htwnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:50Z is after 2025-08-24T17:21:41Z" Oct 13 
17:24:50 crc kubenswrapper[4720]: I1013 17:24:50.792116 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:50 crc kubenswrapper[4720]: I1013 17:24:50.792167 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:50 crc kubenswrapper[4720]: I1013 17:24:50.792191 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:50 crc kubenswrapper[4720]: I1013 17:24:50.792255 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:50 crc kubenswrapper[4720]: I1013 17:24:50.792273 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:50Z","lastTransitionTime":"2025-10-13T17:24:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:50 crc kubenswrapper[4720]: I1013 17:24:50.826852 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 17:24:50 crc kubenswrapper[4720]: E1013 17:24:50.827219 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 17:25:06.827140763 +0000 UTC m=+52.284390935 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 17:24:50 crc kubenswrapper[4720]: I1013 17:24:50.895772 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:50 crc kubenswrapper[4720]: I1013 17:24:50.895827 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:50 crc kubenswrapper[4720]: I1013 17:24:50.895848 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:50 crc kubenswrapper[4720]: I1013 17:24:50.895878 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:50 crc kubenswrapper[4720]: I1013 17:24:50.895903 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:50Z","lastTransitionTime":"2025-10-13T17:24:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:50 crc kubenswrapper[4720]: I1013 17:24:50.928363 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 17:24:50 crc kubenswrapper[4720]: I1013 17:24:50.928420 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 17:24:50 crc kubenswrapper[4720]: I1013 17:24:50.928450 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 17:24:50 crc kubenswrapper[4720]: I1013 17:24:50.928479 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 17:24:50 crc kubenswrapper[4720]: E1013 17:24:50.928571 4720 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Oct 13 17:24:50 crc kubenswrapper[4720]: E1013 17:24:50.928636 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-13 17:25:06.928620383 +0000 UTC m=+52.385870525 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 13 17:24:50 crc kubenswrapper[4720]: E1013 17:24:50.928661 4720 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 13 17:24:50 crc kubenswrapper[4720]: E1013 17:24:50.928758 4720 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 13 17:24:50 crc kubenswrapper[4720]: E1013 17:24:50.928765 4720 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 13 17:24:50 crc kubenswrapper[4720]: E1013 17:24:50.928817 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-13 17:25:06.928780427 +0000 UTC m=+52.386030609 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 13 17:24:50 crc kubenswrapper[4720]: E1013 17:24:50.928838 4720 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 13 17:24:50 crc kubenswrapper[4720]: E1013 17:24:50.928865 4720 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 13 17:24:50 crc kubenswrapper[4720]: E1013 17:24:50.928799 4720 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 13 17:24:50 crc kubenswrapper[4720]: E1013 17:24:50.928959 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-13 17:25:06.928928571 +0000 UTC m=+52.386178753 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 13 17:24:50 crc kubenswrapper[4720]: E1013 17:24:50.928972 4720 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 13 17:24:50 crc kubenswrapper[4720]: E1013 17:24:50.929046 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-13 17:25:06.929025483 +0000 UTC m=+52.386275735 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 13 17:24:50 crc kubenswrapper[4720]: I1013 17:24:50.998676 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:50 crc kubenswrapper[4720]: I1013 17:24:50.998715 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:50 crc kubenswrapper[4720]: I1013 17:24:50.998730 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:50 crc kubenswrapper[4720]: I1013 17:24:50.998751 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:50 crc kubenswrapper[4720]: I1013 17:24:50.998767 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:50Z","lastTransitionTime":"2025-10-13T17:24:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:24:51 crc kubenswrapper[4720]: I1013 17:24:51.102016 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:51 crc kubenswrapper[4720]: I1013 17:24:51.102065 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:51 crc kubenswrapper[4720]: I1013 17:24:51.102076 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:51 crc kubenswrapper[4720]: I1013 17:24:51.102094 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:51 crc kubenswrapper[4720]: I1013 17:24:51.102105 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:51Z","lastTransitionTime":"2025-10-13T17:24:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:51 crc kubenswrapper[4720]: I1013 17:24:51.168019 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 17:24:51 crc kubenswrapper[4720]: I1013 17:24:51.168054 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 17:24:51 crc kubenswrapper[4720]: I1013 17:24:51.168084 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 17:24:51 crc kubenswrapper[4720]: E1013 17:24:51.168259 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 13 17:24:51 crc kubenswrapper[4720]: E1013 17:24:51.168331 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 13 17:24:51 crc kubenswrapper[4720]: E1013 17:24:51.168405 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 13 17:24:51 crc kubenswrapper[4720]: I1013 17:24:51.204296 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:51 crc kubenswrapper[4720]: I1013 17:24:51.204343 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:51 crc kubenswrapper[4720]: I1013 17:24:51.204355 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:51 crc kubenswrapper[4720]: I1013 17:24:51.204372 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:51 crc kubenswrapper[4720]: I1013 17:24:51.204385 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:51Z","lastTransitionTime":"2025-10-13T17:24:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:51 crc kubenswrapper[4720]: I1013 17:24:51.307440 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:51 crc kubenswrapper[4720]: I1013 17:24:51.307516 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:51 crc kubenswrapper[4720]: I1013 17:24:51.307539 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:51 crc kubenswrapper[4720]: I1013 17:24:51.307569 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:51 crc kubenswrapper[4720]: I1013 17:24:51.307592 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:51Z","lastTransitionTime":"2025-10-13T17:24:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:24:51 crc kubenswrapper[4720]: I1013 17:24:51.410714 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:51 crc kubenswrapper[4720]: I1013 17:24:51.410776 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:51 crc kubenswrapper[4720]: I1013 17:24:51.410795 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:51 crc kubenswrapper[4720]: I1013 17:24:51.410820 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:51 crc kubenswrapper[4720]: I1013 17:24:51.410838 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:51Z","lastTransitionTime":"2025-10-13T17:24:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:51 crc kubenswrapper[4720]: I1013 17:24:51.427359 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pn6lz_8064812e-b6aa-4f56-81c9-16154c00abad/ovnkube-controller/1.log" Oct 13 17:24:51 crc kubenswrapper[4720]: I1013 17:24:51.428330 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pn6lz_8064812e-b6aa-4f56-81c9-16154c00abad/ovnkube-controller/0.log" Oct 13 17:24:51 crc kubenswrapper[4720]: I1013 17:24:51.432891 4720 generic.go:334] "Generic (PLEG): container finished" podID="8064812e-b6aa-4f56-81c9-16154c00abad" containerID="696c3d0d1105e26a509a7a8f91eb2ade836194b7de933f28b7a3d380b5964ee3" exitCode=1 Oct 13 17:24:51 crc kubenswrapper[4720]: I1013 17:24:51.432968 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" event={"ID":"8064812e-b6aa-4f56-81c9-16154c00abad","Type":"ContainerDied","Data":"696c3d0d1105e26a509a7a8f91eb2ade836194b7de933f28b7a3d380b5964ee3"} Oct 13 17:24:51 crc kubenswrapper[4720]: I1013 17:24:51.433123 4720 scope.go:117] "RemoveContainer" containerID="709132dcc33ee0d4d48e7a21fd2471fd22c065819f93843026160310ecd43470" Oct 13 17:24:51 crc kubenswrapper[4720]: I1013 17:24:51.433962 4720 scope.go:117] "RemoveContainer" containerID="696c3d0d1105e26a509a7a8f91eb2ade836194b7de933f28b7a3d380b5964ee3" Oct 13 17:24:51 crc kubenswrapper[4720]: E1013 17:24:51.434263 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-pn6lz_openshift-ovn-kubernetes(8064812e-b6aa-4f56-81c9-16154c00abad)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" podUID="8064812e-b6aa-4f56-81c9-16154c00abad" Oct 13 17:24:51 crc kubenswrapper[4720]: I1013 17:24:51.450918 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:51Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:51 crc kubenswrapper[4720]: I1013 17:24:51.470855 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:51Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:51 crc kubenswrapper[4720]: I1013 17:24:51.505521 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a1f1696-ed7f-4482-a300-ca25b68e09e3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2507a30017fbd9947ca456b84812f0b26baabf572768870f5a34b3c7699443cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\
\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc60672e282de4d3d6a2d8fd64306ec0fc0d032ade655f88afb0c0b9e0a8e589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9cf903094dae4e559bcef3e269240670b5d4693344d600779c394ce1e039b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06eea6556daedf913d87b8b0527b337070528b07cc026de878b4df332598a2d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b452d113d48729b6d4ab59f80138b15e7cb16cc16eef4ec9916901a067986d07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cr
i-o://94a4c50dd473ea159477a3032070031c70d5a9217c054f826e3c0af6ae4df563\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94a4c50dd473ea159477a3032070031c70d5a9217c054f826e3c0af6ae4df563\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41221277cc046beb4578e4f6a0c7030df64342da1be667a920080a429836c657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41221277cc046beb4578e4f6a0c7030df64342da1be667a920080a429836c657\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d443225a5db6a200176bd16ba94dd92c1e81eef85d61cff623cfe7959077ef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d443225a5db6a200176bd16ba94dd92c1e81eef85d61cff623cfe7959077ef6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:51Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:51 crc kubenswrapper[4720]: I1013 17:24:51.513664 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:51 crc kubenswrapper[4720]: I1013 17:24:51.513704 4720 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:51 crc kubenswrapper[4720]: I1013 17:24:51.513714 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:51 crc kubenswrapper[4720]: I1013 17:24:51.513730 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:51 crc kubenswrapper[4720]: I1013 17:24:51.513748 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:51Z","lastTransitionTime":"2025-10-13T17:24:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:51 crc kubenswrapper[4720]: I1013 17:24:51.527438 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lxmjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b45ec2d-5bea-4007-a49f-224a866f93eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00868f87f1a3020a1db7e50377bbe90203d6e93799c8ec9f3100dd52a6c0cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"na
me\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xsrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lxmjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:51Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:51 crc kubenswrapper[4720]: I1013 17:24:51.548116 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8064812e-b6aa-4f56-81c9-16154c00abad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c28fde07efa17d20ddecd1c8a0da09a6fa09e98a90b14718565c00aef9c0d25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bbd41d9b0518c8b060b1d01c335b6c9923e7196624fbd49326133faaa2e36ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd8a4dfb390a1643edb4eb4bd01970eaffd8dab0f748effac9e7c691ef4a3ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://013dfade47eccae4339fca0d379d914f63502613f87f1bc621113f7008634475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69d511a66a84fb0fa90f426bd4609c2b3ad915571802c6c024e0370d6e1683a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1aefa46ee989416f0e4fd05f6f12031da637a354fadd7f6caef75ddc2c74ddf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://696c3d0d1105e26a509a7a8f91eb2ade836194b7
de933f28b7a3d380b5964ee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://709132dcc33ee0d4d48e7a21fd2471fd22c065819f93843026160310ecd43470\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-13T17:24:49Z\\\",\\\"message\\\":\\\"3 17:24:49.925356 6051 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1013 17:24:49.925370 6051 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1013 17:24:49.925416 6051 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1013 17:24:49.925548 6051 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1013 17:24:49.925565 6051 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1013 17:24:49.925666 6051 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1013 17:24:49.925716 6051 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1013 17:24:49.925763 6051 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1013 17:24:49.925773 6051 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1013 17:24:49.925805 6051 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1013 17:24:49.925830 6051 factory.go:656] Stopping watch factory\\\\nI1013 17:24:49.925827 6051 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1013 17:24:49.925847 6051 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1013 17:24:49.925860 6051 handler.go:208] Removed *v1.Node event handler 2\\\\nI1013 17:24:49.925859 6051 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1013 17:24:49.925872 6051 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://696c3d0d1105e26a509a7a8f91eb2ade836194b7de933f28b7a3d380b5964ee3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-13T17:24:51Z\\\",\\\"message\\\":\\\"loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/catalog-operator-metrics\\\\\\\"}\\\\nI1013 17:24:51.349805 6169 services_controller.go:454] Service openshift-machine-config-operator/machine-config-daemon for network=default has 2 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI1013 17:24:51.349817 6169 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1013 17:24:51.349829 6169 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-authentication-operator/metrics\\\\\\\"}\\\\nI1013 17:24:51.349839 6169 services_controller.go:360] Finished syncing service metrics on namespace openshift-authentication-operator for network=default : 675.638µs\\\\nI1013 17:24:51.349848 6169 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nF1013 17:24:51.349740 6169 ovnkube.go:137] 
failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initializatio\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34b1fffbebfbd3599bed22fe3fe80129f5b704560c54d09e9ab5e9ea8bba4513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\
\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:51Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:51 crc kubenswrapper[4720]: I1013 17:24:51.562536 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bvtrx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b35c333b-0d4e-4d9a-a8fe-a1c6f5fbbf75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03411ae3786161605ed0cb08a21661c7c7a4cc93f91320586547b8ffab3e90ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g97k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bvtrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:51Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:51 crc kubenswrapper[4720]: I1013 17:24:51.582652 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea323a1a-c607-40cf-b61c-b7ff03399c45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88b8a46e2e2c2342bd22a4a9b42cadd0188ec83f23815773f6ffb46be79fdf83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7d875b0d7eb9f13bc697597a33f7196598ae679980b13842e92acf20477526f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e18d0e9df43d366a6e7088cc3d5bdc794882828f60f008eaf29e788e0141ac2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef2ba92cea782dd4fd31a511e1086777b58c00340014dff74ff6615b07bd10c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7977e20d8558810fb9f8f827830d0378de7d15c71340d396a95b506902c0962\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-13T17:24:34Z\\\",\\\"message\\\":\\\"lling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1013 17:24:28.736714 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1013 17:24:28.738546 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445610647/tls.crt::/tmp/serving-cert-445610647/tls.key\\\\\\\"\\\\nI1013 17:24:34.860768 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1013 17:24:34.864611 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1013 17:24:34.864632 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1013 17:24:34.864654 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1013 17:24:34.864660 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1013 17:24:34.871230 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1013 17:24:34.871248 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1013 17:24:34.871273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 17:24:34.871290 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 17:24:34.871305 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1013 17:24:34.871314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1013 17:24:34.871324 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1013 17:24:34.871335 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1013 17:24:34.872239 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd536f783e3f8f974f25772994c3d00a01c2ba66997a6e06236f1b83b890f8f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00ad2086cbe25df22e52552acfb410e252eb17a610267b47eb7cfc5dced1c9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00ad2086cbe25df22e52552acfb410e252eb17a610267b47eb7cfc5dced1c9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:51Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:51 crc kubenswrapper[4720]: I1013 17:24:51.600978 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:51Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:51 crc kubenswrapper[4720]: I1013 17:24:51.617139 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://705766b44d481300d0fd8cc654aa2dc9046733e5a71ad3a7ab789f85e737dcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a69b8825692cbbb7cce0a52b1ccb7b137ce6c9f8a8d788fff95bb7228cb40e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:51Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:51 crc kubenswrapper[4720]: I1013 17:24:51.617414 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:51 crc kubenswrapper[4720]: I1013 17:24:51.617451 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:51 crc kubenswrapper[4720]: I1013 17:24:51.617467 4720 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Oct 13 17:24:51 crc kubenswrapper[4720]: I1013 17:24:51.617489 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:51 crc kubenswrapper[4720]: I1013 17:24:51.617504 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:51Z","lastTransitionTime":"2025-10-13T17:24:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:51 crc kubenswrapper[4720]: I1013 17:24:51.633887 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f85e4aab30e7f49ef05e48cc22fc22e0291e32604a8f74ae1cc2ea57b5889e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:51Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:51 crc kubenswrapper[4720]: I1013 17:24:51.647461 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:51 crc kubenswrapper[4720]: I1013 17:24:51.647662 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:51 crc kubenswrapper[4720]: I1013 17:24:51.647783 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:51 crc 
kubenswrapper[4720]: I1013 17:24:51.647918 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:51 crc kubenswrapper[4720]: I1013 17:24:51.648084 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:51Z","lastTransitionTime":"2025-10-13T17:24:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:51 crc kubenswrapper[4720]: I1013 17:24:51.654526 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f856022da70859465022db433585bd8cad12f8cd2aeae7943491d3f7e5049256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:51Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:51 crc kubenswrapper[4720]: E1013 17:24:51.667512 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:24:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:24:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:24:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:24:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae
669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a530de0f-daad-4050-9522-69c64a451e75\\\",\\\"systemUUID\\\":\\\"00bb7c43-79d6-45b5-bd02-4b71a0ba6837\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:51Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:51 crc kubenswrapper[4720]: I1013 17:24:51.669359 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pmjlm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f85220d6-f940-4538-806a-2b26bacd7b09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6315723f6cd375a392c8c427a737021b7e01415c902514b09ee43407749caa34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7795z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:39Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pmjlm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:51Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:51 crc kubenswrapper[4720]: I1013 17:24:51.672975 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:51 crc kubenswrapper[4720]: I1013 17:24:51.673032 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:51 crc kubenswrapper[4720]: I1013 17:24:51.673043 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:51 crc kubenswrapper[4720]: I1013 17:24:51.673056 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:51 crc kubenswrapper[4720]: I1013 17:24:51.673067 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:51Z","lastTransitionTime":"2025-10-13T17:24:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:51 crc kubenswrapper[4720]: I1013 17:24:51.690514 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7baf94d8-250c-4568-ab1c-a1429b8caf70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435ba16189d6d19892aaab158512c1a84a72103779e3cc4b2c634de344583c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e566f3b66d404dd8da93bba1745b9e67cd63532330591f5b735f6826c07eae2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ec64662f56c2609ebac1383a78840ba4e0d9fb1fe10e666169285d3852e6b40\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resour
ces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://32d8f079c2674e0c6bea1837755db23bc3205da6a01ac1896d6293226d9c6fb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:51Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:51 crc kubenswrapper[4720]: E1013 17:24:51.694043 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:24:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:24:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:24:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:24:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a530de0f-daad-4050-9522-69c64a451e75\\\",\\\"systemUUID\\\":\\\"00bb7c43-79d6-45b5-bd02-4b71a0ba6837\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:51Z is after 
2025-08-24T17:21:41Z" Oct 13 17:24:51 crc kubenswrapper[4720]: I1013 17:24:51.698937 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:51 crc kubenswrapper[4720]: I1013 17:24:51.698991 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:51 crc kubenswrapper[4720]: I1013 17:24:51.699007 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:51 crc kubenswrapper[4720]: I1013 17:24:51.699031 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:51 crc kubenswrapper[4720]: I1013 17:24:51.699050 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:51Z","lastTransitionTime":"2025-10-13T17:24:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:51 crc kubenswrapper[4720]: I1013 17:24:51.712242 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5rgfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d7408e5-b529-4396-92b3-2fed275c3250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://439dea1ed9724a215fa1aa1f4a4bd84a4bb998f2f86814f5f4eb46989242f11c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c62b3a707852ca08b9d89356832535447d4f46ec09ab4a94cb00a86b1c1c5b03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db77
08c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c62b3a707852ca08b9d89356832535447d4f46ec09ab4a94cb00a86b1c1c5b03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb704e899e1b5a0193271209940163a17a1dcb93a2fc5d447beb8a081799897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb704e899e1b5a0193271209940163a17a1dcb93a2fc5d447beb8a081799897b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f42ff97f0a5fb40b52d5445a00eb8f70f8028610b76baf33117b425b821b5ec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f42ff97f0a5fb40b52d5445a00eb8f70f8028610b76baf33117b425b821b5ec2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\"
:\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://471bb77042e72bcac4bb28dd26f6151a7154bea0b3ec92272e405e9b22bc0170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://471bb77042e72bcac4bb28dd26f6151a7154bea0b3ec92272e405e9b22bc0170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20cb8981843cc39e8722f560f7a7e75483c93a30edffde9835533f8dc5b50587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20cb8981843cc39e8722f560f7a7e75483c93a30edffde9835533f8dc5b50587\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79851ed8c84027d1bb9eccb69bed004ae1cdbe934245c66325eafe42b12ea808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79851ed8c84027d1bb9eccb69bed004ae1cdbe934245c66325eafe42b12ea808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\
\\"2025-10-13T17:24:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5rgfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:51Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:51 crc kubenswrapper[4720]: E1013 17:24:51.721764 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:24:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:24:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:24:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:24:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a530de0f-daad-4050-9522-69c64a451e75\\\",\\\"systemUUID\\\":\\\"00bb7c43-79d6-45b5-bd02-4b71a0ba6837\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:51Z is after 
2025-08-24T17:21:41Z" Oct 13 17:24:51 crc kubenswrapper[4720]: I1013 17:24:51.726317 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:51 crc kubenswrapper[4720]: I1013 17:24:51.726376 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:51 crc kubenswrapper[4720]: I1013 17:24:51.726393 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:51 crc kubenswrapper[4720]: I1013 17:24:51.726417 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:51 crc kubenswrapper[4720]: I1013 17:24:51.726436 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:51Z","lastTransitionTime":"2025-10-13T17:24:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:51 crc kubenswrapper[4720]: I1013 17:24:51.729553 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce442c80-fcde-4b79-b6f9-f8f25771dfd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10e7a946185d6502e37a1ab60ad1eca36cce991f230188d22b2d6e2ea554f920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4cjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://864e330267c8cec8487e1dcf4a50bbb2ba37d81d80361f0873e6bf69de1ed721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e
95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4cjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-htwnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:51Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:51 crc kubenswrapper[4720]: E1013 17:24:51.746527 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:24:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:24:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:24:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:24:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a530de0f-daad-4050-9522-69c64a451e75\\\",\\\"systemUUID\\\":\\\"00bb7c43-79d6-45b5-bd02-4b71a0ba6837\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:51Z is after 
2025-08-24T17:21:41Z" Oct 13 17:24:51 crc kubenswrapper[4720]: I1013 17:24:51.751116 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:51 crc kubenswrapper[4720]: I1013 17:24:51.751224 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:51 crc kubenswrapper[4720]: I1013 17:24:51.751256 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:51 crc kubenswrapper[4720]: I1013 17:24:51.751290 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:51 crc kubenswrapper[4720]: I1013 17:24:51.751314 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:51Z","lastTransitionTime":"2025-10-13T17:24:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:51 crc kubenswrapper[4720]: E1013 17:24:51.767518 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:24:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:24:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:24:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:24:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a530de0f-daad-4050-9522-69c64a451e75\\\",\\\"systemUUID\\\":\\\"00bb7c43-79d6-45b5-bd02-4b71a0ba6837\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:51Z is after 
2025-08-24T17:21:41Z" Oct 13 17:24:51 crc kubenswrapper[4720]: E1013 17:24:51.767875 4720 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 13 17:24:51 crc kubenswrapper[4720]: I1013 17:24:51.770461 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:51 crc kubenswrapper[4720]: I1013 17:24:51.770515 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:51 crc kubenswrapper[4720]: I1013 17:24:51.770534 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:51 crc kubenswrapper[4720]: I1013 17:24:51.770555 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:51 crc kubenswrapper[4720]: I1013 17:24:51.770572 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:51Z","lastTransitionTime":"2025-10-13T17:24:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:51 crc kubenswrapper[4720]: I1013 17:24:51.872834 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:51 crc kubenswrapper[4720]: I1013 17:24:51.872888 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:51 crc kubenswrapper[4720]: I1013 17:24:51.872905 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:51 crc kubenswrapper[4720]: I1013 17:24:51.872930 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:51 crc kubenswrapper[4720]: I1013 17:24:51.872951 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:51Z","lastTransitionTime":"2025-10-13T17:24:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:24:51 crc kubenswrapper[4720]: I1013 17:24:51.977996 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:51 crc kubenswrapper[4720]: I1013 17:24:51.978080 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:51 crc kubenswrapper[4720]: I1013 17:24:51.978106 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:51 crc kubenswrapper[4720]: I1013 17:24:51.978138 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:51 crc kubenswrapper[4720]: I1013 17:24:51.978171 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:51Z","lastTransitionTime":"2025-10-13T17:24:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:52 crc kubenswrapper[4720]: I1013 17:24:52.081031 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:52 crc kubenswrapper[4720]: I1013 17:24:52.081083 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:52 crc kubenswrapper[4720]: I1013 17:24:52.081101 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:52 crc kubenswrapper[4720]: I1013 17:24:52.081124 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:52 crc kubenswrapper[4720]: I1013 17:24:52.081141 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:52Z","lastTransitionTime":"2025-10-13T17:24:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:52 crc kubenswrapper[4720]: I1013 17:24:52.184698 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:52 crc kubenswrapper[4720]: I1013 17:24:52.184758 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:52 crc kubenswrapper[4720]: I1013 17:24:52.184778 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:52 crc kubenswrapper[4720]: I1013 17:24:52.184802 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:52 crc kubenswrapper[4720]: I1013 17:24:52.184821 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:52Z","lastTransitionTime":"2025-10-13T17:24:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:24:52 crc kubenswrapper[4720]: I1013 17:24:52.287754 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:52 crc kubenswrapper[4720]: I1013 17:24:52.287864 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:52 crc kubenswrapper[4720]: I1013 17:24:52.287880 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:52 crc kubenswrapper[4720]: I1013 17:24:52.287906 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:52 crc kubenswrapper[4720]: I1013 17:24:52.287925 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:52Z","lastTransitionTime":"2025-10-13T17:24:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:52 crc kubenswrapper[4720]: I1013 17:24:52.377235 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 13 17:24:52 crc kubenswrapper[4720]: I1013 17:24:52.390738 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:52 crc kubenswrapper[4720]: I1013 17:24:52.390795 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:52 crc kubenswrapper[4720]: I1013 17:24:52.390811 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:52 crc kubenswrapper[4720]: I1013 17:24:52.390835 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:52 crc kubenswrapper[4720]: I1013 17:24:52.390852 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:52Z","lastTransitionTime":"2025-10-13T17:24:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:24:52 crc kubenswrapper[4720]: I1013 17:24:52.399537 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7baf94d8-250c-4568-ab1c-a1429b8caf70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435ba16189d6d19892aaab158512c1a84a72103779e3cc4b2c634de344583c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e566f3b66d404dd8da93bba1745b9e67cd63532330591f5b735f6826c07eae2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ec64662f56c2609ebac1383a78840ba4e0d9fb1fe10e666169285d3852e6b40\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://32d8f079c2674e0c6bea1837755db23bc3205da6a01ac1896d6293226d9c6fb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:52Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:52 crc kubenswrapper[4720]: I1013 17:24:52.417795 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:52Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:52 crc kubenswrapper[4720]: I1013 17:24:52.436078 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://705766b44d481300d0fd8cc654aa2dc9046733e5a71ad3a7ab789f85e737dcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a69b8825692cbbb7cce0a52b1ccb7b137ce6c9f8a8d788fff95bb7228cb40e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:52Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:52 crc kubenswrapper[4720]: I1013 17:24:52.441090 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pn6lz_8064812e-b6aa-4f56-81c9-16154c00abad/ovnkube-controller/1.log" Oct 13 17:24:52 crc kubenswrapper[4720]: I1013 17:24:52.446559 4720 scope.go:117] "RemoveContainer" containerID="696c3d0d1105e26a509a7a8f91eb2ade836194b7de933f28b7a3d380b5964ee3" Oct 13 17:24:52 crc kubenswrapper[4720]: E1013 17:24:52.446826 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-pn6lz_openshift-ovn-kubernetes(8064812e-b6aa-4f56-81c9-16154c00abad)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" podUID="8064812e-b6aa-4f56-81c9-16154c00abad" Oct 13 17:24:52 crc kubenswrapper[4720]: I1013 17:24:52.458007 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f85e4aab30e7f49ef05e48cc22fc22e0291e32604a8f74ae1cc2ea57b5889e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:52Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:52 crc kubenswrapper[4720]: I1013 17:24:52.479801 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f856022da70859465022db433585bd8cad12f8cd2aeae7943491d3f7e5049256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:52Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:52 crc kubenswrapper[4720]: I1013 17:24:52.493673 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:52 crc kubenswrapper[4720]: I1013 17:24:52.493703 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:52 crc kubenswrapper[4720]: I1013 17:24:52.493713 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:52 crc kubenswrapper[4720]: I1013 17:24:52.493727 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:52 crc kubenswrapper[4720]: I1013 17:24:52.493737 4720 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:52Z","lastTransitionTime":"2025-10-13T17:24:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:52 crc kubenswrapper[4720]: I1013 17:24:52.497344 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pmjlm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f85220d6-f940-4538-806a-2b26bacd7b09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6315723f6cd375a392c8c427a737021b7e01415c902514b09ee43407749caa34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7795z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:39Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pmjlm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:52Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:52 crc kubenswrapper[4720]: I1013 17:24:52.515347 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce442c80-fcde-4b79-b6f9-f8f25771dfd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10e7a946185d6502e37a1ab60ad1eca36cce991f230188d22b2d6e2ea554f920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4cjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://864e330267c8cec8487e1dcf4a50bbb2ba37d81d80361f0873e6bf69de1ed721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4cjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-htwnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:52Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:52 crc kubenswrapper[4720]: I1013 17:24:52.537891 4720 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5rgfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d7408e5-b529-4396-92b3-2fed275c3250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://439dea1ed9724a215fa1aa1f4a4bd84a4bb998f2f86814f5f4eb46989242f11c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c62b3a707852ca08b9d89356832535447d4f46ec09ab4a94cb00a86b1c1c5b03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c62b3a707852ca08b9d89356832535447d4f46ec09ab4a94cb00a86b1c1c5b03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb704e899e1b5a0193271209940163a17a1dcb93a2fc5d447beb8a081799897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb704e899e1b5a0193271209940163a17a1dcb93a2fc5d447beb8a081799897b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f42ff97f0a5fb40b52d5445a00eb8f70f8028610b76baf33117b425b821b5ec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f42ff97f0a5fb40b52d5445a00eb8f70f8028610b76baf33117b425b821b5ec2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://471bb77042e72bcac4bb28dd26f6151a7154bea0b3ec92272e405e9b22bc0170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://471bb77042e72bcac4bb28dd26f6151a7154bea0b3ec92272e405e9b22bc0170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20cb8981843cc39e8722f560f7a7e75483c93a30edffde9835533f8dc5b50587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20cb8981843cc39e8722f560f7a7e75483c93a30edffde9835533f8dc5b50587\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79851ed8c84027d1bb9eccb69bed004ae1cdbe934245c66325eafe42b12ea808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79851ed8c84027d1bb9eccb69bed004ae1cdbe934245c66325eafe42b12ea808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5rgfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:52Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:52 crc kubenswrapper[4720]: I1013 17:24:52.574092 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a1f1696-ed7f-4482-a300-ca25b68e09e3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2507a30017fbd9947ca456b84812f0b26baabf572768870f5a34b3c7699443cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc60672e282de4d3d6a2d8fd64306ec0fc0d032ade655f88afb0c0b9e0a8e589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9cf903094dae4e559bcef3e269240670b5d4693344d600779c394ce1e039b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06eea6556daedf913d87b8b0527b337070528b0
7cc026de878b4df332598a2d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b452d113d48729b6d4ab59f80138b15e7cb16cc16eef4ec9916901a067986d07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94a4c50dd473ea159477a3032070031c70d5a9217c054f826e3c0af6ae4df563\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94a4c50dd473ea159477a3032070031c70d5a9217c054f826e3c0af6ae4df563\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41221277cc046beb4578e4f6a0c7030df64342da1be667a920080a429836c657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41221277cc046beb4578e4f6a0c7030df64342da1be667a920080a429836c657\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d443225a5db6a200176bd16ba94dd92c1e81eef85d61cff623cfe7959077ef6\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d443225a5db6a200176bd16ba94dd92c1e81eef85d61cff623cfe7959077ef6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:52Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:52 crc kubenswrapper[4720]: I1013 17:24:52.594154 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:52Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:52 crc kubenswrapper[4720]: I1013 17:24:52.596075 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:52 crc kubenswrapper[4720]: I1013 17:24:52.596115 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:52 crc kubenswrapper[4720]: I1013 17:24:52.596127 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:52 crc kubenswrapper[4720]: I1013 17:24:52.596143 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:52 crc kubenswrapper[4720]: I1013 17:24:52.596154 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:52Z","lastTransitionTime":"2025-10-13T17:24:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:24:52 crc kubenswrapper[4720]: I1013 17:24:52.613005 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:52Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:52 crc kubenswrapper[4720]: I1013 17:24:52.633616 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea323a1a-c607-40cf-b61c-b7ff03399c45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88b8a46e2e2c2342bd22a4a9b42cadd0188ec83f23815773f6ffb46be79fdf83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7d875b0d7eb9f13bc697597a33f7196598ae679980b13842e92acf20477526f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e18d0e9df43d366a6e7088cc3d5bdc794882828f60f008eaf29e788e0141ac2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef2ba92cea782dd4fd31a511e1086777b58c00340014dff74ff6615b07bd10c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7977e20d8558810fb9f8f827830d0378de7d15c71340d396a95b506902c0962\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-13T17:24:34Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1013 17:24:28.736714 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1013 17:24:28.738546 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445610647/tls.crt::/tmp/serving-cert-445610647/tls.key\\\\\\\"\\\\nI1013 17:24:34.860768 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1013 17:24:34.864611 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1013 17:24:34.864632 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1013 17:24:34.864654 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1013 17:24:34.864660 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1013 17:24:34.871230 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1013 17:24:34.871248 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1013 17:24:34.871273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 17:24:34.871290 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 17:24:34.871305 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1013 17:24:34.871314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1013 17:24:34.871324 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1013 17:24:34.871335 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1013 17:24:34.872239 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd536f783e3f8f974f25772994c3d00a01c2ba66997a6e06236f1b83b890f8f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00ad2086cbe25df22e52552acfb410e252eb17a610267b47eb7cfc5dced1c9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00ad2086cbe25df22e52552acfb410e252eb17a610267b47eb7cfc5dced1c9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:52Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:52 crc kubenswrapper[4720]: I1013 17:24:52.657021 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lxmjt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b45ec2d-5bea-4007-a49f-224a866f93eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00868f87f1a3020a1db7e50377bbe90203d6e93799c8ec9f3100dd52a6c0cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xsrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lxmjt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:52Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:52 crc kubenswrapper[4720]: I1013 17:24:52.686926 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8064812e-b6aa-4f56-81c9-16154c00abad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c28fde07efa17d20ddecd1c8a0da09a6fa09e98a90b14718565c00aef9c0d25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bbd41d9b0518c8b060b1d01c335b6c9923e7196624fbd49326133faaa2e36ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd8a4dfb390a1643edb4eb4bd01970eaffd8dab0f748effac9e7c691ef4a3ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://013dfade47eccae4339fca0d379d914f63502613f87f1bc621113f7008634475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69d511a66a84fb0fa90f426bd4609c2b3ad915571802c6c024e0370d6e1683a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1aefa
46ee989416f0e4fd05f6f12031da637a354fadd7f6caef75ddc2c74ddf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://696c3d0d1105e26a509a7a8f91eb2ade836194b7de933f28b7a3d380b5964ee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://709132dcc33ee0d4d48e7a21fd2471fd22c065819f93843026160310ecd43470\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-13T17:24:49Z\\\",\\\"message\\\":\\\"3 17:24:49.925356 6051 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1013 17:24:49.925370 6051 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1013 17:24:49.925416 6051 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1013 17:24:49.925548 6051 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1013 17:24:49.925565 6051 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1013 17:24:49.925666 6051 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1013 17:24:49.925716 6051 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1013 17:24:49.925763 6051 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1013 17:24:49.925773 6051 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1013 17:24:49.925805 6051 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1013 17:24:49.925830 6051 factory.go:656] Stopping watch factory\\\\nI1013 17:24:49.925827 6051 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1013 17:24:49.925847 6051 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1013 17:24:49.925860 6051 handler.go:208] Removed *v1.Node event handler 2\\\\nI1013 17:24:49.925859 6051 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1013 17:24:49.925872 6051 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://696c3d0d1105e26a509a7a8f91eb2ade836194b7de933f28b7a3d380b5964ee3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-13T17:24:51Z\\\",\\\"message\\\":\\\"loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/catalog-operator-metrics\\\\\\\"}\\\\nI1013 17:24:51.349805 6169 services_controller.go:454] Service openshift-machine-config-operator/machine-config-daemon for network=default has 2 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI1013 17:24:51.349817 6169 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1013 17:24:51.349829 6169 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-authentication-operator/metrics\\\\\\\"}\\\\nI1013 17:24:51.349839 6169 services_controller.go:360] Finished syncing service metrics on namespace openshift-authentication-operator for network=default : 675.638µs\\\\nI1013 17:24:51.349848 6169 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nF1013 17:24:51.349740 6169 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller 
initializatio\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34b1fffbebfbd3599bed22fe3fe80129f5b704560c54d09e9ab5e9ea8bba4513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:52Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:52 crc kubenswrapper[4720]: I1013 17:24:52.699162 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:52 crc kubenswrapper[4720]: I1013 17:24:52.699236 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:52 crc kubenswrapper[4720]: I1013 17:24:52.699251 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:52 crc kubenswrapper[4720]: I1013 17:24:52.699270 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:52 crc kubenswrapper[4720]: I1013 17:24:52.699282 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:52Z","lastTransitionTime":"2025-10-13T17:24:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:24:52 crc kubenswrapper[4720]: I1013 17:24:52.705447 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bvtrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b35c333b-0d4e-4d9a-a8fe-a1c6f5fbbf75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03411ae3786161605ed0cb08a21661c7c7a4cc93f91320586547b8ffab3e90ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g97k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bvtrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:52Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:52 crc kubenswrapper[4720]: I1013 17:24:52.722260 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea323a1a-c607-40cf-b61c-b7ff03399c45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88b8a46e2e2c2342bd22a4a9b42cadd0188ec83f23815773f6ffb46be79fdf83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7d875b0d7eb9f13bc697597a33f7196598ae679980b13842e92acf20477526f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e18d0e9df43d366a6e7088cc3d5bdc794882828f60f008eaf29e788e0141ac2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef2ba92cea782dd4fd31a511e1086777b58c00340014dff74ff6615b07bd10c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7977e20d8558810fb9f8f827830d0378de7d15c71340d396a95b506902c0962\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-13T17:24:34Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1013 17:24:28.736714 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1013 17:24:28.738546 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445610647/tls.crt::/tmp/serving-cert-445610647/tls.key\\\\\\\"\\\\nI1013 17:24:34.860768 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1013 17:24:34.864611 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1013 17:24:34.864632 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1013 17:24:34.864654 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1013 17:24:34.864660 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1013 17:24:34.871230 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1013 17:24:34.871248 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1013 17:24:34.871273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 17:24:34.871290 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 17:24:34.871305 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1013 17:24:34.871314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1013 17:24:34.871324 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1013 17:24:34.871335 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1013 17:24:34.872239 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd536f783e3f8f974f25772994c3d00a01c2ba66997a6e06236f1b83b890f8f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00ad2086cbe25df22e52552acfb410e252eb17a610267b47eb7cfc5dced1c9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00ad2086cbe25df22e52552acfb410e252eb17a610267b47eb7cfc5dced1c9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:52Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:52 crc kubenswrapper[4720]: I1013 17:24:52.740274 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lxmjt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b45ec2d-5bea-4007-a49f-224a866f93eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00868f87f1a3020a1db7e50377bbe90203d6e93799c8ec9f3100dd52a6c0cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xsrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lxmjt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:52Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:52 crc kubenswrapper[4720]: I1013 17:24:52.767978 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8064812e-b6aa-4f56-81c9-16154c00abad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c28fde07efa17d20ddecd1c8a0da09a6fa09e98a90b14718565c00aef9c0d25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bbd41d9b0518c8b060b1d01c335b6c9923e7196624fbd49326133faaa2e36ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd8a4dfb390a1643edb4eb4bd01970eaffd8dab0f748effac9e7c691ef4a3ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://013dfade47eccae4339fca0d379d914f63502613f87f1bc621113f7008634475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69d511a66a84fb0fa90f426bd4609c2b3ad915571802c6c024e0370d6e1683a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1aefa
46ee989416f0e4fd05f6f12031da637a354fadd7f6caef75ddc2c74ddf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://696c3d0d1105e26a509a7a8f91eb2ade836194b7de933f28b7a3d380b5964ee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://696c3d0d1105e26a509a7a8f91eb2ade836194b7de933f28b7a3d380b5964ee3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-13T17:24:51Z\\\",\\\"message\\\":\\\"loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/catalog-operator-metrics\\\\\\\"}\\\\nI1013 17:24:51.349805 6169 services_controller.go:454] Service openshift-machine-config-operator/machine-config-daemon for network=default has 2 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI1013 17:24:51.349817 6169 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1013 17:24:51.349829 6169 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-authentication-operator/metrics\\\\\\\"}\\\\nI1013 17:24:51.349839 6169 services_controller.go:360] Finished syncing service metrics on namespace openshift-authentication-operator for network=default : 675.638µs\\\\nI1013 17:24:51.349848 6169 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nF1013 17:24:51.349740 6169 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller 
initializatio\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-pn6lz_openshift-ovn-kubernetes(8064812e-b6aa-4f56-81c9-16154c00abad)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34b1fffbebfbd3599bed22fe3fe80129f5b704560c54d09e9ab5e9ea8bba4513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:52Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:52 crc kubenswrapper[4720]: I1013 17:24:52.782365 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bvtrx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b35c333b-0d4e-4d9a-a8fe-a1c6f5fbbf75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03411ae3786161605ed0cb08a21661c7c7a4cc93f91320586547b8ffab3e90ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g97k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bvtrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:52Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:52 crc kubenswrapper[4720]: I1013 17:24:52.798298 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f85e4aab30e7f49ef05e48cc22fc22e0291e32604a8f74ae1cc2ea57b5889e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:52Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:52 crc kubenswrapper[4720]: I1013 17:24:52.802471 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:52 crc kubenswrapper[4720]: I1013 17:24:52.802541 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:52 crc kubenswrapper[4720]: I1013 17:24:52.802564 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:52 crc kubenswrapper[4720]: I1013 17:24:52.802595 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:52 crc kubenswrapper[4720]: I1013 17:24:52.802619 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:52Z","lastTransitionTime":"2025-10-13T17:24:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:24:52 crc kubenswrapper[4720]: I1013 17:24:52.818897 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f856022da70859465022db433585bd8cad12f8cd2aeae7943491d3f7e5049256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:52Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:52 crc kubenswrapper[4720]: I1013 17:24:52.835154 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pmjlm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f85220d6-f940-4538-806a-2b26bacd7b09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6315723f6cd375a392c8c427a737021b7e01415c902514b09ee43407749caa34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7795z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:39Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pmjlm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:52Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:52 crc kubenswrapper[4720]: I1013 17:24:52.837097 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dnjrc"] Oct 13 17:24:52 crc kubenswrapper[4720]: I1013 17:24:52.838048 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dnjrc" Oct 13 17:24:52 crc kubenswrapper[4720]: I1013 17:24:52.840696 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Oct 13 17:24:52 crc kubenswrapper[4720]: I1013 17:24:52.843575 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Oct 13 17:24:52 crc kubenswrapper[4720]: I1013 17:24:52.857037 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7baf94d8-250c-4568-ab1c-a1429b8caf70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435ba16189d6d19892aaab158512c1a84a72103779e3cc4b2c634de344583c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e566f3b66d404dd8da93bba1745b9e67cd63532330591f5b735f6826c07eae2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ec64662f56c2609ebac1383a78840ba4e0d9fb1fe10e666169285d3852e6b40\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controlle
r-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://32d8f079c2674e0c6bea1837755db23bc3205da6a01ac1896d6293226d9c6fb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:52Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:52 crc kubenswrapper[4720]: I1013 17:24:52.870658 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:52Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:52 crc kubenswrapper[4720]: I1013 17:24:52.886308 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://705766b44d481300d0fd8cc654aa2dc9046733e5a71ad3a7ab789f85e737dcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a69b8825692cbbb7cce0a52b1ccb7b137ce6c9f8a8d788fff95bb7228cb40e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:52Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:52 crc kubenswrapper[4720]: I1013 17:24:52.902246 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce442c80-fcde-4b79-b6f9-f8f25771dfd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10e7a946185d6502e37a1ab60ad1eca36cce991f230188d22b2d6e2ea554f920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4cjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://864e330267c8cec8487e1dcf4a50bbb2ba37d81d80361f0873e6bf69de1ed721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4cjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-htwnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:52Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:52 crc kubenswrapper[4720]: I1013 17:24:52.905808 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:52 crc kubenswrapper[4720]: I1013 17:24:52.905859 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:52 crc kubenswrapper[4720]: I1013 17:24:52.905879 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:52 crc kubenswrapper[4720]: I1013 17:24:52.905905 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:52 crc kubenswrapper[4720]: I1013 17:24:52.905921 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:52Z","lastTransitionTime":"2025-10-13T17:24:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:24:52 crc kubenswrapper[4720]: I1013 17:24:52.915641 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5rgfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d7408e5-b529-4396-92b3-2fed275c3250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://439dea1ed9724a215fa1aa1f4a4bd84a4bb998f2f86814f5f4eb46989242f11c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c62b3a707852ca08b9d89356832535447d4f46ec09ab4a94cb00a86b1c1c5b03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c62b3a707852ca08b9d89356832535447d4f46ec09ab4a94cb00a86b1c1c5b03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb704e899e1b5a0193271209940163a17a1dcb93a2fc5d447beb8a081799897b\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb704e899e1b5a0193271209940163a17a1dcb93a2fc5d447beb8a081799897b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f42ff97f0a5fb40b52d5445a00eb8f70f8028610b76baf33117b425b821b5ec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f42ff97f0a5fb40b52d5445a00eb8f70f8028610b76baf33117b425b821b5ec2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://471bb77042e72bcac4bb28dd26f6151a7154bea0b3ec92272e405e9b22bc0170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://471bb77042e72bcac4bb28dd26f6151a7154bea0b3ec92272e405e9b22bc0170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20cb8981843cc39e8722f560f7a7e75483c93a30edffde9835533f8dc5b50587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20cb8981843cc39e8722f560f7a7e75483c93a30edffde9835533f8dc5b50587\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79851ed8c84027d1bb9eccb69bed004ae1cdbe934245c66325eafe42b12ea808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79851ed8c84027d1bb9eccb69bed004ae1cdbe934245c66325eafe42b12ea808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5rgfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:52Z is after 2025-08-24T17:21:41Z"
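Every status patch in this capture is being rejected for the same reason: the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 presents a serving certificate that expired on 2025-08-24T17:21:41Z, roughly seven weeks before these records were written. A minimal Go sketch to confirm the expiry from the node itself (the endpoint comes from the records above; the Go toolchain and loopback access to the webhook are assumptions):

package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	// Dial the webhook endpoint named in the "failed calling webhook" errors.
	// Verification is skipped on purpose: the handshake would otherwise fail
	// on the expired certificate before we could inspect it.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()

	// Print the validity window of the leaf certificate the server presented.
	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Println("subject:  ", cert.Subject)
	fmt.Println("notBefore:", cert.NotBefore.Format(time.RFC3339))
	fmt.Println("notAfter: ", cert.NotAfter.Format(time.RFC3339))
	fmt.Println("expired:  ", time.Now().After(cert.NotAfter))
}

InsecureSkipVerify is tolerable here only because the connection is used to read the presented certificate, never to trust it; against this log, notAfter should print 2025-08-24T17:21:41Z.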
Oct 13 17:24:52 crc kubenswrapper[4720]: I1013 17:24:52.946622 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a1f1696-ed7f-4482-a300-ca25b68e09e3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2507a30017fbd9947ca456b84812f0b26baabf572768870f5a34b3c7699443cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc60672e282de4d3d6a2d8fd64306ec0fc0d032ade655f88afb0c0b9e0a8e589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9cf903094dae4e559bcef3e269240670b5d4693344d600779c394ce1e039b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"container
ID\\\":\\\"cri-o://06eea6556daedf913d87b8b0527b337070528b07cc026de878b4df332598a2d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b452d113d48729b6d4ab59f80138b15e7cb16cc16eef4ec9916901a067986d07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94a4c50dd473ea159477a3032070031c70d5a9217c054f826e3c0af6ae4df563\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94a4c50dd473ea159477a3032070031c70d5a9217c054f826e3c0af6ae4df563\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41221277cc046beb4578e4f6a0c7030df64342da1be667a920080a429836c657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41221277cc046beb4578e4f6a0c7030df64342da1be667a920080a429836c657\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d443225a5
db6a200176bd16ba94dd92c1e81eef85d61cff623cfe7959077ef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d443225a5db6a200176bd16ba94dd92c1e81eef85d61cff623cfe7959077ef6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:52Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:52 crc kubenswrapper[4720]: I1013 17:24:52.947500 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5e6efca3-dcc8-488e-8fb1-e14ee0396158-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-dnjrc\" (UID: \"5e6efca3-dcc8-488e-8fb1-e14ee0396158\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dnjrc" Oct 13 17:24:52 crc kubenswrapper[4720]: I1013 17:24:52.947570 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g676g\" (UniqueName: \"kubernetes.io/projected/5e6efca3-dcc8-488e-8fb1-e14ee0396158-kube-api-access-g676g\") pod \"ovnkube-control-plane-749d76644c-dnjrc\" (UID: \"5e6efca3-dcc8-488e-8fb1-e14ee0396158\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dnjrc" Oct 13 17:24:52 crc kubenswrapper[4720]: I1013 17:24:52.947622 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5e6efca3-dcc8-488e-8fb1-e14ee0396158-env-overrides\") pod \"ovnkube-control-plane-749d76644c-dnjrc\" (UID: \"5e6efca3-dcc8-488e-8fb1-e14ee0396158\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dnjrc" Oct 13 17:24:52 crc kubenswrapper[4720]: I1013 17:24:52.947653 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5e6efca3-dcc8-488e-8fb1-e14ee0396158-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-dnjrc\" (UID: \"5e6efca3-dcc8-488e-8fb1-e14ee0396158\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dnjrc" Oct 13 17:24:52 crc kubenswrapper[4720]: I1013 17:24:52.962599 4720 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:52Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:52 crc kubenswrapper[4720]: I1013 17:24:52.977732 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:52Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:52 crc kubenswrapper[4720]: I1013 17:24:52.994095 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea323a1a-c607-40cf-b61c-b7ff03399c45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88b8a46e2e2c2342bd22a4a9b42cadd0188ec83f23815773f6ffb46be79fdf83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"container
ID\\\":\\\"cri-o://b7d875b0d7eb9f13bc697597a33f7196598ae679980b13842e92acf20477526f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e18d0e9df43d366a6e7088cc3d5bdc794882828f60f008eaf29e788e0141ac2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef2ba92cea782dd4fd31a511e1086777b58c00340014dff74ff6615b07bd10c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7977e20d8558810fb9f8f827830d0378de7d15c71340d396a95b506902c0962\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-13T17:24:34Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1013 17:24:28.736714 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1013 17:24:28.738546 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445610647/tls.crt::/tmp/serving-cert-445610647/tls.key\\\\\\\"\\\\nI1013 17:24:34.860768 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1013 17:24:34.864611 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1013 17:24:34.864632 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1013 17:24:34.864654 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1013 17:24:34.864660 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1013 17:24:34.871230 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1013 17:24:34.871248 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information 
is complete\\\\nW1013 17:24:34.871273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 17:24:34.871290 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 17:24:34.871305 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1013 17:24:34.871314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1013 17:24:34.871324 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1013 17:24:34.871335 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1013 17:24:34.872239 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd536f783e3f8f974f25772994c3d00a01c2ba66997a6e06236f1b83b890f8f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00ad2086cbe25df22e52552acfb410e252eb17a610267b47eb7cfc5dced1c9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00ad2086cbe25df22e52552acfb410e252eb17a610267b47eb7cfc5dced1c9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:52Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:53 crc kubenswrapper[4720]: I1013 
17:24:53.008834 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 17:24:53 crc kubenswrapper[4720]: I1013 17:24:53.008909 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 17:24:53 crc kubenswrapper[4720]: I1013 17:24:53.008929 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 17:24:53 crc kubenswrapper[4720]: I1013 17:24:53.008955 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 17:24:53 crc kubenswrapper[4720]: I1013 17:24:53.008972 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:53Z","lastTransitionTime":"2025-10-13T17:24:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
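The patch bodies in the "Failed to update status for pod" records throughout this capture are hard to read because they appear to be quoted twice: the patch JSON is embedded in the error string, and the whole err value is quoted again by the kubelet's structured logging, producing the runs of backslashes. Under that assumption, two strconv.Unquote passes recover plain JSON; a sketch in Go (errValue is a shortened stand-in built around the etcd-crc pod UID from the records above, not the full payload):

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"strconv"
	"strings"
)

func main() {
	// A captured err="..." value, backslashes as they appear in the journal.
	errValue := `"failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a1f1696-ed7f-4482-a300-ca25b68e09e3\\\"}}\" for pod ..."`

	// Pass 1: undo the logging layer's quoting of the whole err value.
	decoded, err := strconv.Unquote(errValue)
	if err != nil {
		panic(err)
	}

	// The patch is the quoted JSON between `"{` and the last `}"`.
	start := strings.Index(decoded, `"{`)
	end := strings.LastIndex(decoded, `}"`) + 2
	patchQuoted := decoded[start:end]

	// Pass 2: undo the embedding of the patch inside the error string.
	patch, err := strconv.Unquote(patchQuoted)
	if err != nil {
		panic(err)
	}

	// Pretty-print the recovered JSON patch.
	var out bytes.Buffer
	if err := json.Indent(&out, []byte(patch), "", "  "); err != nil {
		panic(err)
	}
	fmt.Println(out.String())
}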
Oct 13 17:24:53 crc kubenswrapper[4720]: I1013 17:24:53.015681 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lxmjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b45ec2d-5bea-4007-a49f-224a866f93eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00868f87f1a3020a1db7e50377bbe90203d6e93799c8ec9f3100dd52a6c0cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xsrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lxmjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:53Z is after 2025-08-24T17:21:41Z"
Oct 13 17:24:53 crc kubenswrapper[4720]: I1013 17:24:53.048519 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g676g\" (UniqueName: \"kubernetes.io/projected/5e6efca3-dcc8-488e-8fb1-e14ee0396158-kube-api-access-g676g\") pod \"ovnkube-control-plane-749d76644c-dnjrc\" (UID: \"5e6efca3-dcc8-488e-8fb1-e14ee0396158\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dnjrc"
Oct 13 17:24:53 crc kubenswrapper[4720]: I1013 17:24:53.048621 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5e6efca3-dcc8-488e-8fb1-e14ee0396158-env-overrides\") pod \"ovnkube-control-plane-749d76644c-dnjrc\" (UID: \"5e6efca3-dcc8-488e-8fb1-e14ee0396158\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dnjrc"
Oct 13 17:24:53 crc kubenswrapper[4720]: I1013 17:24:53.048679 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5e6efca3-dcc8-488e-8fb1-e14ee0396158-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-dnjrc\" (UID: \"5e6efca3-dcc8-488e-8fb1-e14ee0396158\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dnjrc"
Oct 13 17:24:53 crc kubenswrapper[4720]: I1013 17:24:53.048802 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5e6efca3-dcc8-488e-8fb1-e14ee0396158-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-dnjrc\" (UID: \"5e6efca3-dcc8-488e-8fb1-e14ee0396158\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dnjrc"
Oct 13 17:24:53 crc kubenswrapper[4720]: I1013 17:24:53.048509 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8064812e-b6aa-4f56-81c9-16154c00abad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c28fde07efa17d20ddecd1c8a0da09a6fa09e98a90b14718565c00aef9c0d25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bbd41d9b0518c8b060b1d01c335b6c9923e7196624fbd49326133faaa2e36ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd8a4dfb390a1643edb4eb4bd01970eaffd8dab0f748effac9e7c691ef4a3ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://013dfade47eccae4339fca0d379d914f63502613f87f1bc621113f7008634475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69d511a66a84fb0fa90f426bd4609c2b3ad915571802c6c024e0370d6e1683a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1aefa46ee989416f0e4fd05f6f12031da637a354fadd7f6caef75ddc2c74ddf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://696c3d0d1105e26a509a7a8f91eb2ade836194b7de933f28b7a3d380b5964ee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://696c3d0d1105e26a509a7a8f91eb2ade836194b7de933f28b7a3d380b5964ee3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-13T17:24:51Z\\\",\\\"message\\\":\\\"loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/catalog-operator-metrics\\\\\\\"}\\\\nI1013 17:24:51.349805 6169 services_controller.go:454] Service openshift-machine-config-operator/machine-config-daemon for network=default has 2 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI1013 17:24:51.349817 6169 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1013 17:24:51.349829 6169 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-authentication-operator/metrics\\\\\\\"}\\\\nI1013 17:24:51.349839 6169 services_controller.go:360] Finished syncing service metrics on namespace openshift-authentication-operator for network=default : 675.638µs\\\\nI1013 17:24:51.349848 6169 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nF1013 17:24:51.349740 6169 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initializatio\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pn6lz_openshift-ovn-kubernetes(8064812e-b6aa-4f56-81c9-16154c00abad)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34b1fffbebfbd3599bed22fe3fe80129f5b704560c54d09e9ab5e9ea8bba4513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:53Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:53 crc kubenswrapper[4720]: I1013 17:24:53.049256 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5e6efca3-dcc8-488e-8fb1-e14ee0396158-env-overrides\") pod \"ovnkube-control-plane-749d76644c-dnjrc\" (UID: \"5e6efca3-dcc8-488e-8fb1-e14ee0396158\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dnjrc" Oct 13 17:24:53 crc kubenswrapper[4720]: I1013 17:24:53.049964 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5e6efca3-dcc8-488e-8fb1-e14ee0396158-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-dnjrc\" (UID: \"5e6efca3-dcc8-488e-8fb1-e14ee0396158\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dnjrc" Oct 13 17:24:53 crc kubenswrapper[4720]: I1013 17:24:53.059442 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5e6efca3-dcc8-488e-8fb1-e14ee0396158-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-dnjrc\" (UID: \"5e6efca3-dcc8-488e-8fb1-e14ee0396158\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dnjrc" Oct 13 17:24:53 crc kubenswrapper[4720]: I1013 17:24:53.065276 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bvtrx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b35c333b-0d4e-4d9a-a8fe-a1c6f5fbbf75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03411ae3786161605ed0cb08a21661c7c7a4cc93f91320586547b8ffab3e90ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g97k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bvtrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:53Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:53 crc kubenswrapper[4720]: I1013 17:24:53.078628 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g676g\" (UniqueName: \"kubernetes.io/projected/5e6efca3-dcc8-488e-8fb1-e14ee0396158-kube-api-access-g676g\") pod \"ovnkube-control-plane-749d76644c-dnjrc\" (UID: \"5e6efca3-dcc8-488e-8fb1-e14ee0396158\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dnjrc" Oct 13 17:24:53 crc kubenswrapper[4720]: I1013 17:24:53.083275 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7baf94d8-250c-4568-ab1c-a1429b8caf70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435ba16189d6d19892aaab158512c1a84a72103779e3cc4b2c634de344583c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e566f3b66d404dd8da93bba1745b9e67cd63532330591f5b735f6826c07eae2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ec64662f56c2609ebac1383a78840ba4e0d9fb1fe10e666169285d3852e6b40\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://32d8f079c2674e0c6bea1837755db23bc3205da6a01ac1896d6293226d9c6fb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:53Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:53 crc kubenswrapper[4720]: I1013 17:24:53.095731 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:53Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:53 crc kubenswrapper[4720]: I1013 17:24:53.111130 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://705766b44d481300d0fd8cc654aa2dc9046733e5a71ad3a7ab789f85e737dcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a69b8825692cbbb7cce0a52b1ccb7b137ce6c9f8a8d788fff95bb7228cb40e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:53Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:53 crc kubenswrapper[4720]: I1013 17:24:53.111649 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:53 crc kubenswrapper[4720]: I1013 17:24:53.111815 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:53 crc kubenswrapper[4720]: I1013 17:24:53.111909 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:53 crc kubenswrapper[4720]: I1013 17:24:53.112008 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:53 crc kubenswrapper[4720]: I1013 17:24:53.112095 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:53Z","lastTransitionTime":"2025-10-13T17:24:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:24:53 crc kubenswrapper[4720]: I1013 17:24:53.125846 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f85e4aab30e7f49ef05e48cc22fc22e0291e32604a8f74ae1cc2ea57b5889e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:53Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:53 crc kubenswrapper[4720]: I1013 17:24:53.144246 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f856022da70859465022db433585bd8cad12f8cd2aeae7943491d3f7e5049256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:53Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:53 crc kubenswrapper[4720]: I1013 17:24:53.159147 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pmjlm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f85220d6-f940-4538-806a-2b26bacd7b09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6315723f6cd375a392c8c427a737021b7e01415c902514b09ee43407749caa34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7795z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:39Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pmjlm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:53Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:53 crc kubenswrapper[4720]: I1013 17:24:53.161428 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dnjrc" Oct 13 17:24:53 crc kubenswrapper[4720]: I1013 17:24:53.167847 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 17:24:53 crc kubenswrapper[4720]: I1013 17:24:53.167847 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 17:24:53 crc kubenswrapper[4720]: E1013 17:24:53.167985 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 13 17:24:53 crc kubenswrapper[4720]: I1013 17:24:53.168029 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 17:24:53 crc kubenswrapper[4720]: E1013 17:24:53.168152 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 13 17:24:53 crc kubenswrapper[4720]: E1013 17:24:53.168575 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 13 17:24:53 crc kubenswrapper[4720]: I1013 17:24:53.176882 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce442c80-fcde-4b79-b6f9-f8f25771dfd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10e7a946185d6502e37a1ab60ad1eca36cce991f230188d22b2d6e2ea554f920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4cjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://864e330267c8cec8487e1dcf4a50bbb2ba37d81d80361f0873e6bf69de1ed721\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4cjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-htwnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:53Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:53 crc kubenswrapper[4720]: W1013 17:24:53.178142 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5e6efca3_dcc8_488e_8fb1_e14ee0396158.slice/crio-dcd424a9a1a449cb3f67b7dc110444c749d354aacc8b3a4495108c94acb4eef4 WatchSource:0}: Error finding container dcd424a9a1a449cb3f67b7dc110444c749d354aacc8b3a4495108c94acb4eef4: Status 404 returned error can't find the container with id dcd424a9a1a449cb3f67b7dc110444c749d354aacc8b3a4495108c94acb4eef4 Oct 13 17:24:53 crc kubenswrapper[4720]: I1013 17:24:53.197894 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5rgfd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d7408e5-b529-4396-92b3-2fed275c3250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://439dea1ed9724a215fa1aa1f4a4bd84a4bb998f2f86814f5f4eb46989242f11c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c62b3a707852ca08b9d89356832535447d4f46ec09ab4a94cb00a86b1c1c5b03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c62b3a707852ca08b9d89356832535447d4f46ec09ab4a94cb00a86b1c1c5b03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb704e899e1b5a0193271209940163a17a1dcb93a2fc5d447beb8a081799897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb704e899e1b5a0193271209940163a17a1dcb93a2fc5d447beb8a081799897b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f42ff97f0a5fb40b52d5445a00eb8f70f8028610b76baf33117b425b821b5ec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f42ff97f0a5fb40b52d5445a00eb8f70f8028610b76baf33117b425b821b5ec2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://471bb77042e72bcac4bb28dd26f6151a7154bea0b3ec92272e405e9b22bc0170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://471bb77042e72bcac4bb28dd26f6151a7154bea0b3ec92272e405e9b22bc0170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20cb8981843cc39e8722f560f7a7e75483c93a30edffde9835533f8dc5b50587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20cb8981843cc39e8722f560f7a7e75483c93a30edffde9835533f8dc5b50587\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79851ed8c84027d1bb9eccb69bed004ae1cdbe934245c66325eafe42b12ea808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79851ed8c84027d1bb9eccb69bed004ae1cdbe934245c66325eafe42b12ea808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5rgfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:53Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:53 crc kubenswrapper[4720]: I1013 17:24:53.214965 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:53 crc kubenswrapper[4720]: I1013 17:24:53.215010 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:53 crc 
kubenswrapper[4720]: I1013 17:24:53.215026 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:53 crc kubenswrapper[4720]: I1013 17:24:53.215048 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:53 crc kubenswrapper[4720]: I1013 17:24:53.215064 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:53Z","lastTransitionTime":"2025-10-13T17:24:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:53 crc kubenswrapper[4720]: I1013 17:24:53.237930 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a1f1696-ed7f-4482-a300-ca25b68e09e3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2507a30017fbd9947ca456b84812f0b26baabf572768870f5a34b3c7699443cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc60672e282de4d3d6a2d8fd64306ec0fc0d032ade655f88afb0c0b9e0a8e589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mou
ntPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9cf903094dae4e559bcef3e269240670b5d4693344d600779c394ce1e039b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06eea6556daedf913d87b8b0527b337070528b07cc026de878b4df332598a2d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b452d113d48729b6d4ab59f80138b15e7cb16cc16eef4ec9916901a067986d07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94a4c50dd473ea159477a3032070031c70d5a9217c054f826e3c0af6ae4df563\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94a4c50dd473ea159477a3032070031c70d5a9217c054f826e3c0af6ae4df563\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41221277cc046beb4578e4f6a0c7030df64342da1be667a920080a429836c657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41221277cc046beb4578e4f6a0c7030df64342da1be667a920080a429836c657\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d443225a5db6a200176bd16ba94dd92c1e81eef85d61cff623cfe7959077ef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d443225a5db6a200176bd16ba94dd92c1e81eef85d61cff623cfe7959077ef6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:53Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:53 crc kubenswrapper[4720]: I1013 17:24:53.271675 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:53Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:53 crc kubenswrapper[4720]: I1013 17:24:53.289680 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:53Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:53 crc kubenswrapper[4720]: I1013 17:24:53.299807 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dnjrc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e6efca3-dcc8-488e-8fb1-e14ee0396158\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g676g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g676g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dnjrc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:53Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:53 crc kubenswrapper[4720]: I1013 17:24:53.316841 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:53 crc kubenswrapper[4720]: I1013 17:24:53.316880 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:53 crc kubenswrapper[4720]: I1013 17:24:53.316890 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:53 crc kubenswrapper[4720]: I1013 17:24:53.316905 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:53 crc kubenswrapper[4720]: I1013 17:24:53.316916 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:53Z","lastTransitionTime":"2025-10-13T17:24:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:24:53 crc kubenswrapper[4720]: I1013 17:24:53.419499 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:53 crc kubenswrapper[4720]: I1013 17:24:53.419540 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:53 crc kubenswrapper[4720]: I1013 17:24:53.419551 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:53 crc kubenswrapper[4720]: I1013 17:24:53.419567 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:53 crc kubenswrapper[4720]: I1013 17:24:53.419580 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:53Z","lastTransitionTime":"2025-10-13T17:24:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:53 crc kubenswrapper[4720]: I1013 17:24:53.455826 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dnjrc" event={"ID":"5e6efca3-dcc8-488e-8fb1-e14ee0396158","Type":"ContainerStarted","Data":"b0f539bcc67139c5b3d51ab01a3114afc20fc44cf5e286dce2b4cad0ea0629c5"} Oct 13 17:24:53 crc kubenswrapper[4720]: I1013 17:24:53.455895 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dnjrc" event={"ID":"5e6efca3-dcc8-488e-8fb1-e14ee0396158","Type":"ContainerStarted","Data":"dcd424a9a1a449cb3f67b7dc110444c749d354aacc8b3a4495108c94acb4eef4"} Oct 13 17:24:53 crc kubenswrapper[4720]: I1013 17:24:53.522923 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:53 crc kubenswrapper[4720]: I1013 17:24:53.522977 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:53 crc kubenswrapper[4720]: I1013 17:24:53.522994 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:53 crc kubenswrapper[4720]: I1013 17:24:53.523016 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:53 crc kubenswrapper[4720]: I1013 17:24:53.523032 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:53Z","lastTransitionTime":"2025-10-13T17:24:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:24:53 crc kubenswrapper[4720]: I1013 17:24:53.626024 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:53 crc kubenswrapper[4720]: I1013 17:24:53.626086 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:53 crc kubenswrapper[4720]: I1013 17:24:53.626107 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:53 crc kubenswrapper[4720]: I1013 17:24:53.626131 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:53 crc kubenswrapper[4720]: I1013 17:24:53.626150 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:53Z","lastTransitionTime":"2025-10-13T17:24:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:53 crc kubenswrapper[4720]: I1013 17:24:53.729029 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:53 crc kubenswrapper[4720]: I1013 17:24:53.729075 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:53 crc kubenswrapper[4720]: I1013 17:24:53.729114 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:53 crc kubenswrapper[4720]: I1013 17:24:53.729131 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:53 crc kubenswrapper[4720]: I1013 17:24:53.729143 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:53Z","lastTransitionTime":"2025-10-13T17:24:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:53 crc kubenswrapper[4720]: I1013 17:24:53.832519 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:53 crc kubenswrapper[4720]: I1013 17:24:53.832824 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:53 crc kubenswrapper[4720]: I1013 17:24:53.832948 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:53 crc kubenswrapper[4720]: I1013 17:24:53.833144 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:53 crc kubenswrapper[4720]: I1013 17:24:53.833358 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:53Z","lastTransitionTime":"2025-10-13T17:24:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:24:53 crc kubenswrapper[4720]: I1013 17:24:53.936460 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:53 crc kubenswrapper[4720]: I1013 17:24:53.936757 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:53 crc kubenswrapper[4720]: I1013 17:24:53.936880 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:53 crc kubenswrapper[4720]: I1013 17:24:53.936998 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:53 crc kubenswrapper[4720]: I1013 17:24:53.937109 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:53Z","lastTransitionTime":"2025-10-13T17:24:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:54 crc kubenswrapper[4720]: I1013 17:24:54.040746 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:54 crc kubenswrapper[4720]: I1013 17:24:54.040819 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:54 crc kubenswrapper[4720]: I1013 17:24:54.040842 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:54 crc kubenswrapper[4720]: I1013 17:24:54.040869 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:54 crc kubenswrapper[4720]: I1013 17:24:54.040889 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:54Z","lastTransitionTime":"2025-10-13T17:24:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:54 crc kubenswrapper[4720]: I1013 17:24:54.143551 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:54 crc kubenswrapper[4720]: I1013 17:24:54.143627 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:54 crc kubenswrapper[4720]: I1013 17:24:54.143649 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:54 crc kubenswrapper[4720]: I1013 17:24:54.143676 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:54 crc kubenswrapper[4720]: I1013 17:24:54.143699 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:54Z","lastTransitionTime":"2025-10-13T17:24:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:24:54 crc kubenswrapper[4720]: I1013 17:24:54.246673 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:54 crc kubenswrapper[4720]: I1013 17:24:54.246706 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:54 crc kubenswrapper[4720]: I1013 17:24:54.246717 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:54 crc kubenswrapper[4720]: I1013 17:24:54.246734 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:54 crc kubenswrapper[4720]: I1013 17:24:54.246745 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:54Z","lastTransitionTime":"2025-10-13T17:24:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:54 crc kubenswrapper[4720]: I1013 17:24:54.323428 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-c6ntg"] Oct 13 17:24:54 crc kubenswrapper[4720]: I1013 17:24:54.324088 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c6ntg" Oct 13 17:24:54 crc kubenswrapper[4720]: E1013 17:24:54.324181 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-c6ntg" podUID="c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61" Oct 13 17:24:54 crc kubenswrapper[4720]: I1013 17:24:54.344441 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea323a1a-c607-40cf-b61c-b7ff03399c45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88b8a46e2e2c2342bd22a4a9b42cadd0188ec83f23815773f6ffb46be79fdf83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7d875b0d7eb9f13bc697597a33f7196598ae679980b13842e92acf20477526f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e18d0e9df43d366a6e7088cc3d5bdc794882828f60f008eaf29e788e0141ac2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"
name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef2ba92cea782dd4fd31a511e1086777b58c00340014dff74ff6615b07bd10c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7977e20d8558810fb9f8f827830d0378de7d15c71340d396a95b506902c0962\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-13T17:24:34Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1013 17:24:28.736714 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1013 17:24:28.738546 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445610647/tls.crt::/tmp/serving-cert-445610647/tls.key\\\\\\\"\\\\nI1013 17:24:34.860768 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1013 17:24:34.864611 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1013 17:24:34.864632 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1013 17:24:34.864654 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1013 17:24:34.864660 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1013 17:24:34.871230 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1013 17:24:34.871248 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1013 17:24:34.871273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 17:24:34.871290 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 17:24:34.871305 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1013 17:24:34.871314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1013 17:24:34.871324 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1013 17:24:34.871335 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1013 17:24:34.872239 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd536f783e3f8f974f25772994c3d00a01c2ba66997a6e06236f1b83b890f8f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00ad2086cbe25df22e52552acfb410e252eb17a610267b47eb7cfc5dced1c9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00ad2086cbe25df22e52552acfb410e252eb17a610267b47eb7cfc5dced1c9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:54Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:54 crc kubenswrapper[4720]: I1013 17:24:54.349301 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:54 crc kubenswrapper[4720]: I1013 17:24:54.349337 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:54 crc kubenswrapper[4720]: I1013 17:24:54.349346 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:54 crc kubenswrapper[4720]: I1013 17:24:54.349359 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:54 crc kubenswrapper[4720]: I1013 17:24:54.349370 4720 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:54Z","lastTransitionTime":"2025-10-13T17:24:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:54 crc kubenswrapper[4720]: I1013 17:24:54.364418 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lxmjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b45ec2d-5bea-4007-a49f-224a866f93eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00868f87f1a3020a1db7e50377bbe90203d6e93799c8ec9f3100dd52a6c0cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube
rnetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xsrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lxmjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:54Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:54 crc kubenswrapper[4720]: I1013 17:24:54.369102 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61-metrics-certs\") pod \"network-metrics-daemon-c6ntg\" (UID: \"c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61\") " pod="openshift-multus/network-metrics-daemon-c6ntg" Oct 13 17:24:54 crc kubenswrapper[4720]: I1013 17:24:54.369174 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxbvp\" (UniqueName: \"kubernetes.io/projected/c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61-kube-api-access-mxbvp\") pod \"network-metrics-daemon-c6ntg\" (UID: \"c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61\") " pod="openshift-multus/network-metrics-daemon-c6ntg" Oct 13 17:24:54 crc kubenswrapper[4720]: I1013 17:24:54.393838 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8064812e-b6aa-4f56-81c9-16154c00abad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c28fde07efa17d20ddecd1c8a0da09a6fa09e98a90b14718565c00aef9c0d25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bbd41d9b0518c8b060b1d01c335b6c9923e7196624fbd49326133faaa2e36ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd8a4dfb390a1643edb4eb4bd01970eaffd8dab0f748effac9e7c691ef4a3ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://013dfade47eccae4339fca0d379d914f63502613f87f1bc621113f7008634475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69d511a66a84fb0fa90f426bd4609c2b3ad915571802c6c024e0370d6e1683a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1aefa46ee989416f0e4fd05f6f12031da637a354fadd7f6caef75ddc2c74ddf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://696c3d0d1105e26a509a7a8f91eb2ade836194b7
de933f28b7a3d380b5964ee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://696c3d0d1105e26a509a7a8f91eb2ade836194b7de933f28b7a3d380b5964ee3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-13T17:24:51Z\\\",\\\"message\\\":\\\"loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/catalog-operator-metrics\\\\\\\"}\\\\nI1013 17:24:51.349805 6169 services_controller.go:454] Service openshift-machine-config-operator/machine-config-daemon for network=default has 2 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI1013 17:24:51.349817 6169 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1013 17:24:51.349829 6169 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-authentication-operator/metrics\\\\\\\"}\\\\nI1013 17:24:51.349839 6169 services_controller.go:360] Finished syncing service metrics on namespace openshift-authentication-operator for network=default : 675.638µs\\\\nI1013 17:24:51.349848 6169 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nF1013 17:24:51.349740 6169 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initializatio\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pn6lz_openshift-ovn-kubernetes(8064812e-b6aa-4f56-81c9-16154c00abad)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34b1fffbebfbd3599bed22fe3fe80129f5b704560c54d09e9ab5e9ea8bba4513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:54Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:54 crc kubenswrapper[4720]: I1013 17:24:54.409455 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bvtrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b35c333b-0d4e-4d9a-a8fe-a1c6f5fbbf75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03411ae3786161605ed0cb08a21661c7c7a4cc93f91320586547b8ffab3e90ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g97k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bvtrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:54Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:54 crc kubenswrapper[4720]: I1013 17:24:54.428553 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f856022da70859465022db433585bd8cad12f8cd2aeae7943491d3f7e5049256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:54Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:54 crc kubenswrapper[4720]: I1013 17:24:54.444437 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pmjlm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f85220d6-f940-4538-806a-2b26bacd7b09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6315723f6cd375a392c8c427a737021b7e01415c902514b09ee43407749caa34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7795z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:39Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pmjlm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:54Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:54 crc kubenswrapper[4720]: I1013 17:24:54.452837 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:54 crc kubenswrapper[4720]: I1013 17:24:54.452909 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:54 crc kubenswrapper[4720]: I1013 17:24:54.452932 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:54 crc kubenswrapper[4720]: I1013 17:24:54.452961 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:54 crc kubenswrapper[4720]: I1013 17:24:54.452989 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:54Z","lastTransitionTime":"2025-10-13T17:24:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:54 crc kubenswrapper[4720]: I1013 17:24:54.461865 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dnjrc" event={"ID":"5e6efca3-dcc8-488e-8fb1-e14ee0396158","Type":"ContainerStarted","Data":"c792a043326901c5cbcae8c96b168aee71b56f6e91d0edfc7f58abb5da9c9c28"} Oct 13 17:24:54 crc kubenswrapper[4720]: I1013 17:24:54.461745 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7baf94d8-250c-4568-ab1c-a1429b8caf70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435ba16189d6d19892aaab158512c1a84a72103779e3cc4b2c634de344583c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e566f3b66d404dd8da93bba1745b9e67cd63532330591f5b735f6826c07eae2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ec64662f56c2609ebac1383a78840ba4e0d9fb1fe10e666169285d3852e6b40\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b8279
9488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://32d8f079c2674e0c6bea1837755db23bc3205da6a01ac1896d6293226d9c6fb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:54Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:54 crc kubenswrapper[4720]: I1013 17:24:54.469704 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61-metrics-certs\") pod \"network-metrics-daemon-c6ntg\" (UID: \"c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61\") " pod="openshift-multus/network-metrics-daemon-c6ntg" Oct 13 17:24:54 crc kubenswrapper[4720]: I1013 17:24:54.469741 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxbvp\" (UniqueName: \"kubernetes.io/projected/c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61-kube-api-access-mxbvp\") pod \"network-metrics-daemon-c6ntg\" (UID: \"c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61\") " pod="openshift-multus/network-metrics-daemon-c6ntg" Oct 13 17:24:54 crc kubenswrapper[4720]: E1013 17:24:54.469998 4720 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 13 17:24:54 crc kubenswrapper[4720]: E1013 17:24:54.470122 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61-metrics-certs podName:c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61 nodeName:}" failed. No retries permitted until 2025-10-13 17:24:54.970089879 +0000 UTC m=+40.427340051 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61-metrics-certs") pod "network-metrics-daemon-c6ntg" (UID: "c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 13 17:24:54 crc kubenswrapper[4720]: I1013 17:24:54.477638 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:54Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:54 crc kubenswrapper[4720]: I1013 17:24:54.488514 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxbvp\" (UniqueName: \"kubernetes.io/projected/c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61-kube-api-access-mxbvp\") pod \"network-metrics-daemon-c6ntg\" (UID: \"c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61\") " pod="openshift-multus/network-metrics-daemon-c6ntg" Oct 13 17:24:54 crc kubenswrapper[4720]: I1013 17:24:54.490902 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://705766b44d481300d0fd8cc654aa2dc9046733e5a71ad3a7ab789f85e737dcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a69b8825692cbbb7cce0a52b1ccb7b137ce6c9f8a8d788fff95bb7228cb40e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:54Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:54 crc kubenswrapper[4720]: I1013 17:24:54.504713 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f85e4aab30e7f49ef05e48cc22fc22e0291e32604a8f74ae1cc2ea57b5889e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:54Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:54 crc kubenswrapper[4720]: I1013 17:24:54.517248 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce442c80-fcde-4b79-b6f9-f8f25771dfd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10e7a946185d6502e37a1ab60ad1eca36cce991f230188d22b2d6e2ea554f920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4cjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://864e330267c8cec8487e1dcf4a50bbb2ba37d81d80361f0873e6bf69de1ed721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4cjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-htwnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:54Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:54 crc kubenswrapper[4720]: I1013 17:24:54.536388 4720 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5rgfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d7408e5-b529-4396-92b3-2fed275c3250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://439dea1ed9724a215fa1aa1f4a4bd84a4bb998f2f86814f5f4eb46989242f11c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c62b3a707852ca08b9d89356832535447d4f46ec09ab4a94cb00a86b1c1c5b03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c62b3a707852ca08b9d89356832535447d4f46ec09ab4a94cb00a86b1c1c5b03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb704e899e1b5a0193271209940163a17a1dcb93a2fc5d447beb8a081799897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb704e899e1b5a0193271209940163a17a1dcb93a2fc5d447beb8a081799897b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f42ff97f0a5fb40b52d5445a00eb8f70f8028610b76baf33117b425b821b5ec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f42ff97f0a5fb40b52d5445a00eb8f70f8028610b76baf33117b425b821b5ec2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://471bb77042e72bcac4bb28dd26f6151a7154bea0b3ec92272e405e9b22bc0170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://471bb77042e72bcac4bb28dd26f6151a7154bea0b3ec92272e405e9b22bc0170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20cb8981843cc39e8722f560f7a7e75483c93a30edffde9835533f8dc5b50587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20cb8981843cc39e8722f560f7a7e75483c93a30edffde9835533f8dc5b50587\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79851ed8c84027d1bb9eccb69bed004ae1cdbe934245c66325eafe42b12ea808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79851ed8c84027d1bb9eccb69bed004ae1cdbe934245c66325eafe42b12ea808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5rgfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:54Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:54 crc kubenswrapper[4720]: I1013 17:24:54.549443 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c6ntg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxbvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxbvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c6ntg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:54Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:54 crc kubenswrapper[4720]: I1013 17:24:54.555146 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:54 crc kubenswrapper[4720]: I1013 17:24:54.555220 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:54 crc kubenswrapper[4720]: I1013 17:24:54.555237 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Oct 13 17:24:54 crc kubenswrapper[4720]: I1013 17:24:54.555307 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:54 crc kubenswrapper[4720]: I1013 17:24:54.555326 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:54Z","lastTransitionTime":"2025-10-13T17:24:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:54 crc kubenswrapper[4720]: I1013 17:24:54.572597 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a1f1696-ed7f-4482-a300-ca25b68e09e3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2507a30017fbd9947ca456b84812f0b26baabf572768870f5a34b3c7699443cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc60672e282de4d3d6a2d8fd64306ec0fc0d032ade655f88afb0c0b9e0a8e589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\
\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9cf903094dae4e559bcef3e269240670b5d4693344d600779c394ce1e039b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06eea6556daedf913d87b8b0527b337070528b07cc026de878b4df332598a2d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b452d113d48729b6d4ab59f80138b15e7cb16cc16eef4ec9916901a067986d07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94a4c50dd473ea159477a3032070031c70d5a9217c054f826e3c0af6ae4df563\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94a4c50dd473ea159477a3032070031c70d5a9217c054f826e3c0af6ae4df563\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":
\\\"cri-o://41221277cc046beb4578e4f6a0c7030df64342da1be667a920080a429836c657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41221277cc046beb4578e4f6a0c7030df64342da1be667a920080a429836c657\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d443225a5db6a200176bd16ba94dd92c1e81eef85d61cff623cfe7959077ef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d443225a5db6a200176bd16ba94dd92c1e81eef85d61cff623cfe7959077ef6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:54Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:54 crc kubenswrapper[4720]: I1013 17:24:54.590685 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:54Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:54 crc kubenswrapper[4720]: I1013 17:24:54.606947 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:54Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:54 crc kubenswrapper[4720]: I1013 17:24:54.619980 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dnjrc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e6efca3-dcc8-488e-8fb1-e14ee0396158\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g676g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g676g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dnjrc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:54Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:54 crc kubenswrapper[4720]: I1013 17:24:54.649051 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a1f1696-ed7f-4482-a300-ca25b68e09e3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2507a30017fbd9947ca456b84812f0b26baabf572768870f5a34b3c7699443cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc60672e282de4d3d6a2d8fd64306ec0fc0d032ade655f88afb0c0b9e0a8e589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9cf903094dae4e559bcef3e269240670b5d4693344d600779c394ce1e039b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06eea6556daedf913d87b8b0527b337070528b0
7cc026de878b4df332598a2d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b452d113d48729b6d4ab59f80138b15e7cb16cc16eef4ec9916901a067986d07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94a4c50dd473ea159477a3032070031c70d5a9217c054f826e3c0af6ae4df563\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94a4c50dd473ea159477a3032070031c70d5a9217c054f826e3c0af6ae4df563\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41221277cc046beb4578e4f6a0c7030df64342da1be667a920080a429836c657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41221277cc046beb4578e4f6a0c7030df64342da1be667a920080a429836c657\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d443225a5db6a200176bd16ba94dd92c1e81eef85d61cff623cfe7959077ef6\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d443225a5db6a200176bd16ba94dd92c1e81eef85d61cff623cfe7959077ef6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:54Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:54 crc kubenswrapper[4720]: I1013 17:24:54.657385 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:54 crc kubenswrapper[4720]: I1013 17:24:54.657451 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:54 crc kubenswrapper[4720]: I1013 17:24:54.657469 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:54 crc kubenswrapper[4720]: I1013 17:24:54.657493 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:54 crc kubenswrapper[4720]: I1013 17:24:54.657511 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:54Z","lastTransitionTime":"2025-10-13T17:24:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:24:54 crc kubenswrapper[4720]: I1013 17:24:54.664759 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:54Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:54 crc kubenswrapper[4720]: I1013 17:24:54.677781 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:54Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:54 crc kubenswrapper[4720]: I1013 17:24:54.689151 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dnjrc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e6efca3-dcc8-488e-8fb1-e14ee0396158\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0f539bcc67139c5b3d51ab01a3114afc20fc44cf5e286dce2b4cad0ea0629c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g676g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c792a043326901c5cbcae8c96b168aee71b56f6e91d0edfc7f58abb5da9c9c28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g676g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dnjrc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:54Z is after 2025-08-24T17:21:41Z" Oct 13 
17:24:54 crc kubenswrapper[4720]: I1013 17:24:54.707830 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea323a1a-c607-40cf-b61c-b7ff03399c45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88b8a46e2e2c2342bd22a4a9b42cadd0188ec83f23815773f6ffb46be79fdf83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7d875b0d7eb9f13bc697597a33f7196598ae679980b13842e92acf20477526f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e18d0e9df43d366a6e7088cc3d5bdc794882828f60f008eaf29e788e0141ac2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef2ba92cea782dd4fd31a511e1086777b58c00340014dff74ff6615b07bd10c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7977e20d8558810fb9f8f827830d0378de7d15c71340d396a95b506902c0962\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-13T17:24:34Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1013 17:24:28.736714 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1013 17:24:28.738546 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445610647/tls.crt::/tmp/serving-cert-445610647/tls.key\\\\\\\"\\\\nI1013 17:24:34.860768 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1013 17:24:34.864611 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1013 17:24:34.864632 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1013 17:24:34.864654 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1013 17:24:34.864660 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1013 17:24:34.871230 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1013 17:24:34.871248 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1013 17:24:34.871273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 17:24:34.871290 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 17:24:34.871305 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1013 17:24:34.871314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1013 17:24:34.871324 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1013 17:24:34.871335 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1013 17:24:34.872239 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd536f783e3f8f974f25772994c3d00a01c2ba66997a6e06236f1b83b890f8f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00ad2086cbe25df22e52552acfb410e252eb17a610267b47eb7cfc5dced1c9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00ad2086cbe25df22e52552acfb410e252eb17a610267b47eb7cfc5dced1c9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:54Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:54 crc kubenswrapper[4720]: I1013 17:24:54.725446 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lxmjt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b45ec2d-5bea-4007-a49f-224a866f93eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00868f87f1a3020a1db7e50377bbe90203d6e93799c8ec9f3100dd52a6c0cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xsrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lxmjt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:54Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:54 crc kubenswrapper[4720]: I1013 17:24:54.752487 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8064812e-b6aa-4f56-81c9-16154c00abad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c28fde07efa17d20ddecd1c8a0da09a6fa09e98a90b14718565c00aef9c0d25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bbd41d9b0518c8b060b1d01c335b6c9923e7196624fbd49326133faaa2e36ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd8a4dfb390a1643edb4eb4bd01970eaffd8dab0f748effac9e7c691ef4a3ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://013dfade47eccae4339fca0d379d914f63502613f87f1bc621113f7008634475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69d511a66a84fb0fa90f426bd4609c2b3ad915571802c6c024e0370d6e1683a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1aefa
46ee989416f0e4fd05f6f12031da637a354fadd7f6caef75ddc2c74ddf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://696c3d0d1105e26a509a7a8f91eb2ade836194b7de933f28b7a3d380b5964ee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://696c3d0d1105e26a509a7a8f91eb2ade836194b7de933f28b7a3d380b5964ee3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-13T17:24:51Z\\\",\\\"message\\\":\\\"loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/catalog-operator-metrics\\\\\\\"}\\\\nI1013 17:24:51.349805 6169 services_controller.go:454] Service openshift-machine-config-operator/machine-config-daemon for network=default has 2 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI1013 17:24:51.349817 6169 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1013 17:24:51.349829 6169 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-authentication-operator/metrics\\\\\\\"}\\\\nI1013 17:24:51.349839 6169 services_controller.go:360] Finished syncing service metrics on namespace openshift-authentication-operator for network=default : 675.638µs\\\\nI1013 17:24:51.349848 6169 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nF1013 17:24:51.349740 6169 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller 
initializatio\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-pn6lz_openshift-ovn-kubernetes(8064812e-b6aa-4f56-81c9-16154c00abad)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34b1fffbebfbd3599bed22fe3fe80129f5b704560c54d09e9ab5e9ea8bba4513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:54Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:54 crc kubenswrapper[4720]: I1013 17:24:54.759878 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:54 crc kubenswrapper[4720]: I1013 17:24:54.760311 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:54 crc kubenswrapper[4720]: I1013 17:24:54.760333 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:54 crc kubenswrapper[4720]: I1013 17:24:54.760357 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:54 crc kubenswrapper[4720]: I1013 17:24:54.760376 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:54Z","lastTransitionTime":"2025-10-13T17:24:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:24:54 crc kubenswrapper[4720]: I1013 17:24:54.765433 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bvtrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b35c333b-0d4e-4d9a-a8fe-a1c6f5fbbf75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03411ae3786161605ed0cb08a21661c7c7a4cc93f91320586547b8ffab3e90ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g97k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bvtrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:54Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:54 crc kubenswrapper[4720]: I1013 17:24:54.783832 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7baf94d8-250c-4568-ab1c-a1429b8caf70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435ba16189d6d19892aaab158512c1a84a72103779e3cc4b2c634de344583c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e566f3b66d404dd8da93bba1745b9e67cd63532330591f5b735f6826c07eae2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ec64662f56c2609ebac1383a78840ba4e0d9fb1fe10e666169285d3852e6b40\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://32d8f079c2674e0c6bea1837755db23bc3205da6a01ac1896d6293226d9c6fb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:54Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:54 crc kubenswrapper[4720]: I1013 17:24:54.798680 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:54Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:54 crc kubenswrapper[4720]: I1013 17:24:54.815755 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://705766b44d481300d0fd8cc654aa2dc9046733e5a71ad3a7ab789f85e737dcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a69b8825692cbbb7cce0a52b1ccb7b137ce6c9f8a8d788fff95bb7228cb40e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:54Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:54 crc kubenswrapper[4720]: I1013 17:24:54.831178 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f85e4aab30e7f49ef05e48cc22fc22e0291e32604a8f74ae1cc2ea57b5889e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:54Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:54 crc kubenswrapper[4720]: I1013 17:24:54.848572 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f856022da70859465022db433585bd8cad12f8cd2aeae7943491d3f7e5049256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:54Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:54 crc kubenswrapper[4720]: I1013 17:24:54.860381 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pmjlm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f85220d6-f940-4538-806a-2b26bacd7b09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6315723f6cd375a392c8c427a737021b7e01415c902514b09ee43407749caa34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7795z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:39Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pmjlm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:54Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:54 crc kubenswrapper[4720]: I1013 17:24:54.862847 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:54 crc kubenswrapper[4720]: I1013 17:24:54.862865 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:54 crc kubenswrapper[4720]: I1013 17:24:54.862873 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:54 crc kubenswrapper[4720]: I1013 17:24:54.862887 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:54 crc kubenswrapper[4720]: I1013 17:24:54.862896 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:54Z","lastTransitionTime":"2025-10-13T17:24:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:54 crc kubenswrapper[4720]: I1013 17:24:54.873762 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce442c80-fcde-4b79-b6f9-f8f25771dfd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10e7a946185d6502e37a1ab60ad1eca36cce991f230188d22b2d6e2ea554f920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4cjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://864e330267c8cec8487e1dcf4a50bbb2ba37d81d80361f0873e6bf69de1ed721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4cjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-htwnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:54Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:54 crc kubenswrapper[4720]: I1013 17:24:54.893274 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5rgfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d7408e5-b529-4396-92b3-2fed275c3250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://439dea1ed9724a215fa1aa1f4a4bd84a4bb998f2f86814f5f4eb46989242f11c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c62b3a707852ca08b9d89356832535447d4f46ec09ab4a94cb00a86b1c1c5b03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c62b3a707852ca08b9d89356832535447d4f46ec09ab4a94cb00a86b1c1c5b03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb704e899e1b5a0193271209940163a17a1dcb93a2fc5d447beb8a081799897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb704e899e1b5a0193271209940163a17a1dcb93a2fc5d447beb8a081799897b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f42ff97f0a5fb40b52d5445a00eb8f70f8028610b76baf33117b425b821b5ec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f42ff97f0a5fb40b52d5445a00eb8f70f8028610b76baf33117b425b821b5ec2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://471bb77042e72bcac4bb28dd26f6151a7154bea0b3ec92272e405e9b22bc0170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://471bb77042e72bcac4bb28dd26f6151a7154bea0b3ec92272e405e9b22bc0170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"reaso
n\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20cb8981843cc39e8722f560f7a7e75483c93a30edffde9835533f8dc5b50587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20cb8981843cc39e8722f560f7a7e75483c93a30edffde9835533f8dc5b50587\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79851ed8c84027d1bb9eccb69bed004ae1cdbe934245c66325eafe42b12ea808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79851ed8c84027d1bb9eccb69bed004ae1cdbe934245c66325eafe42b12ea808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5rgfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-10-13T17:24:54Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:54 crc kubenswrapper[4720]: I1013 17:24:54.906169 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c6ntg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxbvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxbvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c6ntg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:54Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:54 crc kubenswrapper[4720]: I1013 17:24:54.964903 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:54 crc kubenswrapper[4720]: I1013 17:24:54.964961 
4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:54 crc kubenswrapper[4720]: I1013 17:24:54.964978 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:54 crc kubenswrapper[4720]: I1013 17:24:54.965000 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:54 crc kubenswrapper[4720]: I1013 17:24:54.965017 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:54Z","lastTransitionTime":"2025-10-13T17:24:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:54 crc kubenswrapper[4720]: I1013 17:24:54.973993 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61-metrics-certs\") pod \"network-metrics-daemon-c6ntg\" (UID: \"c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61\") " pod="openshift-multus/network-metrics-daemon-c6ntg" Oct 13 17:24:54 crc kubenswrapper[4720]: E1013 17:24:54.974254 4720 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 13 17:24:54 crc kubenswrapper[4720]: E1013 17:24:54.974353 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61-metrics-certs podName:c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61 nodeName:}" failed. No retries permitted until 2025-10-13 17:24:55.974328073 +0000 UTC m=+41.431578245 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61-metrics-certs") pod "network-metrics-daemon-c6ntg" (UID: "c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 13 17:24:55 crc kubenswrapper[4720]: I1013 17:24:55.068047 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:55 crc kubenswrapper[4720]: I1013 17:24:55.068118 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:55 crc kubenswrapper[4720]: I1013 17:24:55.068135 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:55 crc kubenswrapper[4720]: I1013 17:24:55.068151 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:55 crc kubenswrapper[4720]: I1013 17:24:55.068162 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:55Z","lastTransitionTime":"2025-10-13T17:24:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:55 crc kubenswrapper[4720]: I1013 17:24:55.168056 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 17:24:55 crc kubenswrapper[4720]: I1013 17:24:55.168098 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 17:24:55 crc kubenswrapper[4720]: I1013 17:24:55.168111 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 17:24:55 crc kubenswrapper[4720]: E1013 17:24:55.168213 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 13 17:24:55 crc kubenswrapper[4720]: E1013 17:24:55.168361 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 13 17:24:55 crc kubenswrapper[4720]: E1013 17:24:55.168471 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 13 17:24:55 crc kubenswrapper[4720]: I1013 17:24:55.170102 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:55 crc kubenswrapper[4720]: I1013 17:24:55.170153 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:55 crc kubenswrapper[4720]: I1013 17:24:55.170170 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:55 crc kubenswrapper[4720]: I1013 17:24:55.170234 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:55 crc kubenswrapper[4720]: I1013 17:24:55.170303 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:55Z","lastTransitionTime":"2025-10-13T17:24:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:24:55 crc kubenswrapper[4720]: I1013 17:24:55.184743 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:55Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:55 crc kubenswrapper[4720]: I1013 17:24:55.200724 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dnjrc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e6efca3-dcc8-488e-8fb1-e14ee0396158\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0f539bcc67139c5b3d51ab01a3114afc20fc44cf5e286dce2b4cad0ea0629c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g676g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c792a043326901c5cbcae8c96b168aee71b56f6e91d0edfc7f58abb5da9c9c28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g676g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dnjrc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:55Z is after 2025-08-24T17:21:41Z" Oct 13 
17:24:55 crc kubenswrapper[4720]: I1013 17:24:55.223514 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a1f1696-ed7f-4482-a300-ca25b68e09e3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2507a30017fbd9947ca456b84812f0b26baabf572768870f5a34b3c7699443cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc60672e282de4d3d6a2d8fd64306ec0fc0d032ade655f88afb0c0b9e0a8e589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9cf903094dae4e559bcef3e269240670b5d4693344d600779c394ce1e039b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06eea6556daedf913d87b8b0527b337070528b07cc026de878b4df332598a2d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b452d113d48729b6d4ab59f80138b15e7cb16cc16eef4ec9916901a067986d07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94a4c50dd473ea159477a3032070031c70d5a9217c054f826e3c0af6ae4df563\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94a4c50dd473ea159477a3032070031c70d5a9217c054f826e3c0af6ae4df563\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41221277cc046beb4578e4f6a0c7030df64342da1be667a920080a429836c657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41221277cc046beb4578e4f6a0c7030df64342da1be667a920080a429836c657\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d443225a5db6a200176bd16ba94dd92c1e81eef85d61cff623cfe7959077ef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d443225a5db6a200176bd16ba94dd92c1e81eef85d61cff623cfe7959077ef6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:55Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:55 crc kubenswrapper[4720]: I1013 17:24:55.238526 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:55Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:55 crc kubenswrapper[4720]: I1013 17:24:55.250404 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bvtrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b35c333b-0d4e-4d9a-a8fe-a1c6f5fbbf75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03411ae3786161605ed0cb08a21661c7c7a4cc93f91320586547b8ffab3e90ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g97k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bvtrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:55Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:55 crc kubenswrapper[4720]: I1013 17:24:55.266051 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea323a1a-c607-40cf-b61c-b7ff03399c45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88b8a46e2e2c2342bd22a4a9b42cadd0188ec83f23815773f6ffb46be79fdf83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7d875b0d7eb9f13bc697597a33f7196598ae679980b13842e92acf20477526f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e18d0e9df43d366a6e7088cc3d5bdc794882828f60f008eaf29e788e0141ac2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernete
s/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef2ba92cea782dd4fd31a511e1086777b58c00340014dff74ff6615b07bd10c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7977e20d8558810fb9f8f827830d0378de7d15c71340d396a95b506902c0962\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-13T17:24:34Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1013 17:24:28.736714 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1013 17:24:28.738546 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445610647/tls.crt::/tmp/serving-cert-445610647/tls.key\\\\\\\"\\\\nI1013 17:24:34.860768 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1013 17:24:34.864611 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1013 17:24:34.864632 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1013 17:24:34.864654 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1013 17:24:34.864660 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1013 17:24:34.871230 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1013 17:24:34.871248 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1013 17:24:34.871273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 17:24:34.871290 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 17:24:34.871305 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1013 17:24:34.871314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1013 17:24:34.871324 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1013 17:24:34.871335 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1013 17:24:34.872239 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd536f783e3f8f974f25772994c3d00a01c2ba66997a6e06236f1b83b890f8f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00ad2086cbe25df22e52552acfb410e252eb17a610267b47eb7cfc5dced1c9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00ad2086cbe25df22e52552acfb410e252eb17a610267b47eb7cfc5dced1c9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:55Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:55 crc kubenswrapper[4720]: I1013 17:24:55.272320 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:55 crc kubenswrapper[4720]: I1013 17:24:55.272346 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:55 crc kubenswrapper[4720]: I1013 17:24:55.272354 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:55 crc kubenswrapper[4720]: I1013 17:24:55.272387 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:55 crc kubenswrapper[4720]: I1013 17:24:55.272400 4720 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:55Z","lastTransitionTime":"2025-10-13T17:24:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:55 crc kubenswrapper[4720]: I1013 17:24:55.286290 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lxmjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b45ec2d-5bea-4007-a49f-224a866f93eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00868f87f1a3020a1db7e50377bbe90203d6e93799c8ec9f3100dd52a6c0cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube
rnetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xsrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lxmjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:55Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:55 crc kubenswrapper[4720]: I1013 17:24:55.312724 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8064812e-b6aa-4f56-81c9-16154c00abad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c28fde07efa17d20ddecd1c8a0da09a6fa09e98a90b14718565c00aef9c0d25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bbd41d9b0518c8b060b1d01c335b6c9923e7196624fbd49326133faaa2e36ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd8a4dfb390a1643edb4eb4bd01970eaffd8dab0f748effac9e7c691ef4a3ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://013dfade47eccae4339fca0d379d914f63502613f87f1bc621113f7008634475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69d511a66a84fb0fa90f426bd4609c2b3ad915571802c6c024e0370d6e1683a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1aefa46ee989416f0e4fd05f6f12031da637a354fadd7f6caef75ddc2c74ddf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://696c3d0d1105e26a509a7a8f91eb2ade836194b7
de933f28b7a3d380b5964ee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://696c3d0d1105e26a509a7a8f91eb2ade836194b7de933f28b7a3d380b5964ee3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-13T17:24:51Z\\\",\\\"message\\\":\\\"loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/catalog-operator-metrics\\\\\\\"}\\\\nI1013 17:24:51.349805 6169 services_controller.go:454] Service openshift-machine-config-operator/machine-config-daemon for network=default has 2 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI1013 17:24:51.349817 6169 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1013 17:24:51.349829 6169 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-authentication-operator/metrics\\\\\\\"}\\\\nI1013 17:24:51.349839 6169 services_controller.go:360] Finished syncing service metrics on namespace openshift-authentication-operator for network=default : 675.638µs\\\\nI1013 17:24:51.349848 6169 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nF1013 17:24:51.349740 6169 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initializatio\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pn6lz_openshift-ovn-kubernetes(8064812e-b6aa-4f56-81c9-16154c00abad)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34b1fffbebfbd3599bed22fe3fe80129f5b704560c54d09e9ab5e9ea8bba4513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:55Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:55 crc kubenswrapper[4720]: I1013 17:24:55.329632 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://705766b44d481300d0fd8cc654aa2dc9046733e5a71ad3a7ab789f85e737dcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a69b8825692cbbb7cce0a52b1ccb7b137ce6c9f8a8d788fff95bb7228cb40e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:55Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:55 crc kubenswrapper[4720]: I1013 17:24:55.345670 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f85e4aab30e7f49ef05e48cc22fc22e0291e32604a8f74ae1cc2ea57b5889e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:55Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:55 crc kubenswrapper[4720]: 
I1013 17:24:55.367598 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f856022da70859465022db433585bd8cad12f8cd2aeae7943491d3f7e5049256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:55Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:55 crc kubenswrapper[4720]: I1013 17:24:55.375786 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:55 crc kubenswrapper[4720]: I1013 17:24:55.375831 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:55 crc kubenswrapper[4720]: I1013 17:24:55.375851 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:55 crc kubenswrapper[4720]: I1013 17:24:55.375873 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:55 crc kubenswrapper[4720]: I1013 17:24:55.375889 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:55Z","lastTransitionTime":"2025-10-13T17:24:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:24:55 crc kubenswrapper[4720]: I1013 17:24:55.380779 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pmjlm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f85220d6-f940-4538-806a-2b26bacd7b09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6315723f6cd375a392c8c427a737021b7e01415c902514b09ee43407749caa34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7795z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:39Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pmjlm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:55Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:55 crc kubenswrapper[4720]: I1013 17:24:55.397087 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7baf94d8-250c-4568-ab1c-a1429b8caf70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435ba16189d6d19892aaab158512c1a84a72103779e3cc4b2c634de344583c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e566f3b66d404dd8da93bba1745b9e67cd63532330591f5b735f6826c07eae2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ec64662f56c2609ebac1383a78840ba4e0d9fb1fe10e666169285d3852e6b40\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://32d8f079c2674e0c6bea1837755db23bc3205da6a01ac1896d6293226d9c6fb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:55Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:55 crc kubenswrapper[4720]: I1013 17:24:55.413085 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:55Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:55 crc kubenswrapper[4720]: I1013 17:24:55.424659 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce442c80-fcde-4b79-b6f9-f8f25771dfd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10e7a946185d6502e37a1ab60ad1eca36cce991f230188d22b2d6e2ea554f920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4cjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://864e330267c8cec8487e1dcf4a50bbb2ba37d81d80361f0873e6bf69de1ed721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4cjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-htwnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:55Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:55 crc kubenswrapper[4720]: I1013 17:24:55.442147 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5rgfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d7408e5-b529-4396-92b3-2fed275c3250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://439dea1ed9724a215fa1aa1f4a4bd84a4bb998f2f86814f5f4eb46989242f11c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c62b3a707852ca08b9d89356832535447d4f46ec09ab4a94cb00a86b1c1c5b03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c62b3a707852ca08b9d89356832535447d4f46ec09ab4a94cb00a86b1c1c5b03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb704e899e1b5a0193271209940163a17a1dcb93a2fc5d447beb8a081799897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb704e899e1b5a0193271209940163a17a1dcb93a2fc5d447beb8a081799897b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f42ff97f0a5fb40b52d5445a00eb8f70f8028610b76baf33117b425b821b5ec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f42ff97f0a5fb40b52d5445a00eb8f70f8028610b76baf33117b425b821b5ec2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://471bb77042e72bcac4bb28dd26f6151a7154bea0b3ec92272e405e9b22bc0170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://471bb77042e72bcac4bb28dd26f6151a7154bea0b3ec92272e405e9b22bc0170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20cb8981843cc39e8722f560f7a7e75483c93a30edffde9835533f8dc5b50587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20cb8981843cc39e8722f560f7a7e75483c93a30edffde9835533f8dc5b50587\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79851ed8c84027d1bb9eccb69bed004ae1cdbe934245c66325eafe42b12ea808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79851ed8c84027d1bb9eccb69bed004ae1cdbe934245c66325eafe42b12ea808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\
"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5rgfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:55Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:55 crc kubenswrapper[4720]: I1013 17:24:55.454173 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c6ntg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxbvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxbvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c6ntg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:24:55Z is after 2025-08-24T17:21:41Z" Oct 13 17:24:55 crc kubenswrapper[4720]: I1013 17:24:55.477824 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:55 crc kubenswrapper[4720]: I1013 17:24:55.477882 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:55 crc kubenswrapper[4720]: I1013 17:24:55.477898 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:55 crc kubenswrapper[4720]: I1013 17:24:55.477921 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:55 crc kubenswrapper[4720]: I1013 17:24:55.477939 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:55Z","lastTransitionTime":"2025-10-13T17:24:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:24:55 crc kubenswrapper[4720]: I1013 17:24:55.580268 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:55 crc kubenswrapper[4720]: I1013 17:24:55.580330 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:55 crc kubenswrapper[4720]: I1013 17:24:55.580359 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:55 crc kubenswrapper[4720]: I1013 17:24:55.580388 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:55 crc kubenswrapper[4720]: I1013 17:24:55.580411 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:55Z","lastTransitionTime":"2025-10-13T17:24:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:55 crc kubenswrapper[4720]: I1013 17:24:55.708871 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:55 crc kubenswrapper[4720]: I1013 17:24:55.709145 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:55 crc kubenswrapper[4720]: I1013 17:24:55.709278 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:55 crc kubenswrapper[4720]: I1013 17:24:55.709377 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:55 crc kubenswrapper[4720]: I1013 17:24:55.709462 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:55Z","lastTransitionTime":"2025-10-13T17:24:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:55 crc kubenswrapper[4720]: I1013 17:24:55.811980 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:55 crc kubenswrapper[4720]: I1013 17:24:55.812039 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:55 crc kubenswrapper[4720]: I1013 17:24:55.812056 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:55 crc kubenswrapper[4720]: I1013 17:24:55.812078 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:55 crc kubenswrapper[4720]: I1013 17:24:55.812094 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:55Z","lastTransitionTime":"2025-10-13T17:24:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:24:55 crc kubenswrapper[4720]: I1013 17:24:55.914846 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:55 crc kubenswrapper[4720]: I1013 17:24:55.914891 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:55 crc kubenswrapper[4720]: I1013 17:24:55.914902 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:55 crc kubenswrapper[4720]: I1013 17:24:55.914917 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:55 crc kubenswrapper[4720]: I1013 17:24:55.914927 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:55Z","lastTransitionTime":"2025-10-13T17:24:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:56 crc kubenswrapper[4720]: I1013 17:24:56.009939 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61-metrics-certs\") pod \"network-metrics-daemon-c6ntg\" (UID: \"c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61\") " pod="openshift-multus/network-metrics-daemon-c6ntg" Oct 13 17:24:56 crc kubenswrapper[4720]: E1013 17:24:56.010162 4720 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 13 17:24:56 crc kubenswrapper[4720]: E1013 17:24:56.010282 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61-metrics-certs podName:c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61 nodeName:}" failed. No retries permitted until 2025-10-13 17:24:58.010259636 +0000 UTC m=+43.467509798 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61-metrics-certs") pod "network-metrics-daemon-c6ntg" (UID: "c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 13 17:24:56 crc kubenswrapper[4720]: I1013 17:24:56.017121 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:56 crc kubenswrapper[4720]: I1013 17:24:56.017390 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:56 crc kubenswrapper[4720]: I1013 17:24:56.017470 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:56 crc kubenswrapper[4720]: I1013 17:24:56.017548 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:56 crc kubenswrapper[4720]: I1013 17:24:56.017629 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:56Z","lastTransitionTime":"2025-10-13T17:24:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:56 crc kubenswrapper[4720]: I1013 17:24:56.120099 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:56 crc kubenswrapper[4720]: I1013 17:24:56.120131 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:56 crc kubenswrapper[4720]: I1013 17:24:56.120139 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:56 crc kubenswrapper[4720]: I1013 17:24:56.120153 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:56 crc kubenswrapper[4720]: I1013 17:24:56.120163 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:56Z","lastTransitionTime":"2025-10-13T17:24:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:56 crc kubenswrapper[4720]: I1013 17:24:56.167913 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c6ntg" Oct 13 17:24:56 crc kubenswrapper[4720]: E1013 17:24:56.168083 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-c6ntg" podUID="c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61" Oct 13 17:24:56 crc kubenswrapper[4720]: I1013 17:24:56.222689 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:56 crc kubenswrapper[4720]: I1013 17:24:56.222829 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:56 crc kubenswrapper[4720]: I1013 17:24:56.222898 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:56 crc kubenswrapper[4720]: I1013 17:24:56.222979 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:56 crc kubenswrapper[4720]: I1013 17:24:56.223045 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:56Z","lastTransitionTime":"2025-10-13T17:24:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:56 crc kubenswrapper[4720]: I1013 17:24:56.325802 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:56 crc kubenswrapper[4720]: I1013 17:24:56.325847 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:56 crc kubenswrapper[4720]: I1013 17:24:56.325858 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:56 crc kubenswrapper[4720]: I1013 17:24:56.325874 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:56 crc kubenswrapper[4720]: I1013 17:24:56.325884 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:56Z","lastTransitionTime":"2025-10-13T17:24:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:24:56 crc kubenswrapper[4720]: I1013 17:24:56.428265 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:56 crc kubenswrapper[4720]: I1013 17:24:56.428486 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:56 crc kubenswrapper[4720]: I1013 17:24:56.428653 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:56 crc kubenswrapper[4720]: I1013 17:24:56.428813 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:56 crc kubenswrapper[4720]: I1013 17:24:56.428953 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:56Z","lastTransitionTime":"2025-10-13T17:24:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:56 crc kubenswrapper[4720]: I1013 17:24:56.531554 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:56 crc kubenswrapper[4720]: I1013 17:24:56.531630 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:56 crc kubenswrapper[4720]: I1013 17:24:56.531652 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:56 crc kubenswrapper[4720]: I1013 17:24:56.531682 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:56 crc kubenswrapper[4720]: I1013 17:24:56.531704 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:56Z","lastTransitionTime":"2025-10-13T17:24:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:56 crc kubenswrapper[4720]: I1013 17:24:56.634505 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:56 crc kubenswrapper[4720]: I1013 17:24:56.634571 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:56 crc kubenswrapper[4720]: I1013 17:24:56.634593 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:56 crc kubenswrapper[4720]: I1013 17:24:56.634629 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:56 crc kubenswrapper[4720]: I1013 17:24:56.634654 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:56Z","lastTransitionTime":"2025-10-13T17:24:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:24:56 crc kubenswrapper[4720]: I1013 17:24:56.737146 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:56 crc kubenswrapper[4720]: I1013 17:24:56.737251 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:56 crc kubenswrapper[4720]: I1013 17:24:56.737273 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:56 crc kubenswrapper[4720]: I1013 17:24:56.737295 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:56 crc kubenswrapper[4720]: I1013 17:24:56.737312 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:56Z","lastTransitionTime":"2025-10-13T17:24:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:56 crc kubenswrapper[4720]: I1013 17:24:56.839345 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:56 crc kubenswrapper[4720]: I1013 17:24:56.839396 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:56 crc kubenswrapper[4720]: I1013 17:24:56.839409 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:56 crc kubenswrapper[4720]: I1013 17:24:56.839425 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:56 crc kubenswrapper[4720]: I1013 17:24:56.839436 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:56Z","lastTransitionTime":"2025-10-13T17:24:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:56 crc kubenswrapper[4720]: I1013 17:24:56.942041 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:56 crc kubenswrapper[4720]: I1013 17:24:56.942102 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:56 crc kubenswrapper[4720]: I1013 17:24:56.942120 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:56 crc kubenswrapper[4720]: I1013 17:24:56.942142 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:56 crc kubenswrapper[4720]: I1013 17:24:56.942159 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:56Z","lastTransitionTime":"2025-10-13T17:24:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:24:57 crc kubenswrapper[4720]: I1013 17:24:57.045441 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:57 crc kubenswrapper[4720]: I1013 17:24:57.045495 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:57 crc kubenswrapper[4720]: I1013 17:24:57.045510 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:57 crc kubenswrapper[4720]: I1013 17:24:57.045533 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:57 crc kubenswrapper[4720]: I1013 17:24:57.045549 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:57Z","lastTransitionTime":"2025-10-13T17:24:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:57 crc kubenswrapper[4720]: I1013 17:24:57.147811 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:57 crc kubenswrapper[4720]: I1013 17:24:57.147839 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:57 crc kubenswrapper[4720]: I1013 17:24:57.147848 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:57 crc kubenswrapper[4720]: I1013 17:24:57.147861 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:57 crc kubenswrapper[4720]: I1013 17:24:57.147870 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:57Z","lastTransitionTime":"2025-10-13T17:24:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:57 crc kubenswrapper[4720]: I1013 17:24:57.167412 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 17:24:57 crc kubenswrapper[4720]: I1013 17:24:57.167444 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 17:24:57 crc kubenswrapper[4720]: I1013 17:24:57.167503 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 17:24:57 crc kubenswrapper[4720]: E1013 17:24:57.167775 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 13 17:24:57 crc kubenswrapper[4720]: E1013 17:24:57.168074 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 13 17:24:57 crc kubenswrapper[4720]: E1013 17:24:57.167948 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 13 17:24:57 crc kubenswrapper[4720]: I1013 17:24:57.250153 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:57 crc kubenswrapper[4720]: I1013 17:24:57.250234 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:57 crc kubenswrapper[4720]: I1013 17:24:57.250244 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:57 crc kubenswrapper[4720]: I1013 17:24:57.250260 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:57 crc kubenswrapper[4720]: I1013 17:24:57.250270 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:57Z","lastTransitionTime":"2025-10-13T17:24:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:57 crc kubenswrapper[4720]: I1013 17:24:57.353796 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:57 crc kubenswrapper[4720]: I1013 17:24:57.353897 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:57 crc kubenswrapper[4720]: I1013 17:24:57.353920 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:57 crc kubenswrapper[4720]: I1013 17:24:57.353967 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:57 crc kubenswrapper[4720]: I1013 17:24:57.353989 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:57Z","lastTransitionTime":"2025-10-13T17:24:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:24:57 crc kubenswrapper[4720]: I1013 17:24:57.457150 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:57 crc kubenswrapper[4720]: I1013 17:24:57.457250 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:57 crc kubenswrapper[4720]: I1013 17:24:57.457272 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:57 crc kubenswrapper[4720]: I1013 17:24:57.457298 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:57 crc kubenswrapper[4720]: I1013 17:24:57.457317 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:57Z","lastTransitionTime":"2025-10-13T17:24:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:57 crc kubenswrapper[4720]: I1013 17:24:57.508664 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" Oct 13 17:24:57 crc kubenswrapper[4720]: I1013 17:24:57.509422 4720 scope.go:117] "RemoveContainer" containerID="696c3d0d1105e26a509a7a8f91eb2ade836194b7de933f28b7a3d380b5964ee3" Oct 13 17:24:57 crc kubenswrapper[4720]: E1013 17:24:57.509564 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-pn6lz_openshift-ovn-kubernetes(8064812e-b6aa-4f56-81c9-16154c00abad)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" podUID="8064812e-b6aa-4f56-81c9-16154c00abad" Oct 13 17:24:57 crc kubenswrapper[4720]: I1013 17:24:57.559391 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:57 crc kubenswrapper[4720]: I1013 17:24:57.559490 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:57 crc kubenswrapper[4720]: I1013 17:24:57.559500 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:57 crc kubenswrapper[4720]: I1013 17:24:57.559514 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:57 crc kubenswrapper[4720]: I1013 17:24:57.559524 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:57Z","lastTransitionTime":"2025-10-13T17:24:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:24:57 crc kubenswrapper[4720]: I1013 17:24:57.661823 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:57 crc kubenswrapper[4720]: I1013 17:24:57.661865 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:57 crc kubenswrapper[4720]: I1013 17:24:57.661875 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:57 crc kubenswrapper[4720]: I1013 17:24:57.661891 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:57 crc kubenswrapper[4720]: I1013 17:24:57.661903 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:57Z","lastTransitionTime":"2025-10-13T17:24:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:57 crc kubenswrapper[4720]: I1013 17:24:57.764690 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:57 crc kubenswrapper[4720]: I1013 17:24:57.764729 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:57 crc kubenswrapper[4720]: I1013 17:24:57.764738 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:57 crc kubenswrapper[4720]: I1013 17:24:57.764752 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:57 crc kubenswrapper[4720]: I1013 17:24:57.764762 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:57Z","lastTransitionTime":"2025-10-13T17:24:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:57 crc kubenswrapper[4720]: I1013 17:24:57.867740 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:57 crc kubenswrapper[4720]: I1013 17:24:57.867806 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:57 crc kubenswrapper[4720]: I1013 17:24:57.867816 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:57 crc kubenswrapper[4720]: I1013 17:24:57.867833 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:57 crc kubenswrapper[4720]: I1013 17:24:57.867846 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:57Z","lastTransitionTime":"2025-10-13T17:24:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:24:57 crc kubenswrapper[4720]: I1013 17:24:57.971094 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:57 crc kubenswrapper[4720]: I1013 17:24:57.971144 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:57 crc kubenswrapper[4720]: I1013 17:24:57.971157 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:57 crc kubenswrapper[4720]: I1013 17:24:57.971177 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:57 crc kubenswrapper[4720]: I1013 17:24:57.971206 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:57Z","lastTransitionTime":"2025-10-13T17:24:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:58 crc kubenswrapper[4720]: I1013 17:24:58.032531 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61-metrics-certs\") pod \"network-metrics-daemon-c6ntg\" (UID: \"c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61\") " pod="openshift-multus/network-metrics-daemon-c6ntg" Oct 13 17:24:58 crc kubenswrapper[4720]: E1013 17:24:58.032742 4720 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 13 17:24:58 crc kubenswrapper[4720]: E1013 17:24:58.032844 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61-metrics-certs podName:c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61 nodeName:}" failed. No retries permitted until 2025-10-13 17:25:02.032816798 +0000 UTC m=+47.490066970 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61-metrics-certs") pod "network-metrics-daemon-c6ntg" (UID: "c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 13 17:24:58 crc kubenswrapper[4720]: I1013 17:24:58.073627 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:58 crc kubenswrapper[4720]: I1013 17:24:58.073672 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:58 crc kubenswrapper[4720]: I1013 17:24:58.073683 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:58 crc kubenswrapper[4720]: I1013 17:24:58.073701 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:58 crc kubenswrapper[4720]: I1013 17:24:58.073713 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:58Z","lastTransitionTime":"2025-10-13T17:24:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:58 crc kubenswrapper[4720]: I1013 17:24:58.167184 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c6ntg" Oct 13 17:24:58 crc kubenswrapper[4720]: E1013 17:24:58.167405 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c6ntg" podUID="c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61" Oct 13 17:24:58 crc kubenswrapper[4720]: I1013 17:24:58.176763 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:58 crc kubenswrapper[4720]: I1013 17:24:58.176826 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:58 crc kubenswrapper[4720]: I1013 17:24:58.176837 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:58 crc kubenswrapper[4720]: I1013 17:24:58.176872 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:58 crc kubenswrapper[4720]: I1013 17:24:58.176886 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:58Z","lastTransitionTime":"2025-10-13T17:24:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
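The MountVolume.SetUp failure above is a separate symptom: "object ... not registered" indicates the kubelet's secret manager has not yet registered openshift-multus/metrics-daemon-secret as a source for this pod, and nestedpendingoperations schedules a retry with back-off (durationBeforeRetry 4s, next attempt at 17:25:02). This typically clears on its own once the pod is fully admitted and the node syncs; if it persists, verifying the secret exists is the obvious first check. Illustrative commands, assuming oc access:

    # Does the secret the volume source refers to actually exist?
    oc -n openshift-multus get secret metrics-daemon-secret

    # Pod events repeat the same MountVolume errors and show whether the
    # retries eventually succeed.
    oc -n openshift-multus describe pod network-metrics-daemon-c6ntg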
Has your network provider started?"} Oct 13 17:24:58 crc kubenswrapper[4720]: I1013 17:24:58.280300 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:58 crc kubenswrapper[4720]: I1013 17:24:58.280364 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:58 crc kubenswrapper[4720]: I1013 17:24:58.280381 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:58 crc kubenswrapper[4720]: I1013 17:24:58.280406 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:58 crc kubenswrapper[4720]: I1013 17:24:58.280422 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:58Z","lastTransitionTime":"2025-10-13T17:24:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:58 crc kubenswrapper[4720]: I1013 17:24:58.382879 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:58 crc kubenswrapper[4720]: I1013 17:24:58.382958 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:58 crc kubenswrapper[4720]: I1013 17:24:58.382976 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:58 crc kubenswrapper[4720]: I1013 17:24:58.382992 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:58 crc kubenswrapper[4720]: I1013 17:24:58.383005 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:58Z","lastTransitionTime":"2025-10-13T17:24:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:58 crc kubenswrapper[4720]: I1013 17:24:58.485601 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:58 crc kubenswrapper[4720]: I1013 17:24:58.485672 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:58 crc kubenswrapper[4720]: I1013 17:24:58.485693 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:58 crc kubenswrapper[4720]: I1013 17:24:58.485721 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:58 crc kubenswrapper[4720]: I1013 17:24:58.485741 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:58Z","lastTransitionTime":"2025-10-13T17:24:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:24:58 crc kubenswrapper[4720]: I1013 17:24:58.587951 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:58 crc kubenswrapper[4720]: I1013 17:24:58.587985 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:58 crc kubenswrapper[4720]: I1013 17:24:58.587994 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:58 crc kubenswrapper[4720]: I1013 17:24:58.588018 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:58 crc kubenswrapper[4720]: I1013 17:24:58.588026 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:58Z","lastTransitionTime":"2025-10-13T17:24:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:58 crc kubenswrapper[4720]: I1013 17:24:58.690017 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:58 crc kubenswrapper[4720]: I1013 17:24:58.690064 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:58 crc kubenswrapper[4720]: I1013 17:24:58.690077 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:58 crc kubenswrapper[4720]: I1013 17:24:58.690095 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:58 crc kubenswrapper[4720]: I1013 17:24:58.690110 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:58Z","lastTransitionTime":"2025-10-13T17:24:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:58 crc kubenswrapper[4720]: I1013 17:24:58.792495 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:58 crc kubenswrapper[4720]: I1013 17:24:58.792561 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:58 crc kubenswrapper[4720]: I1013 17:24:58.792577 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:58 crc kubenswrapper[4720]: I1013 17:24:58.792604 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:58 crc kubenswrapper[4720]: I1013 17:24:58.792623 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:58Z","lastTransitionTime":"2025-10-13T17:24:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:24:58 crc kubenswrapper[4720]: I1013 17:24:58.894571 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:58 crc kubenswrapper[4720]: I1013 17:24:58.894626 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:58 crc kubenswrapper[4720]: I1013 17:24:58.894635 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:58 crc kubenswrapper[4720]: I1013 17:24:58.894653 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:58 crc kubenswrapper[4720]: I1013 17:24:58.894663 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:58Z","lastTransitionTime":"2025-10-13T17:24:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:58 crc kubenswrapper[4720]: I1013 17:24:58.997052 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:58 crc kubenswrapper[4720]: I1013 17:24:58.997106 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:58 crc kubenswrapper[4720]: I1013 17:24:58.997123 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:58 crc kubenswrapper[4720]: I1013 17:24:58.997148 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:58 crc kubenswrapper[4720]: I1013 17:24:58.997165 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:58Z","lastTransitionTime":"2025-10-13T17:24:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:59 crc kubenswrapper[4720]: I1013 17:24:59.099693 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:59 crc kubenswrapper[4720]: I1013 17:24:59.099735 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:59 crc kubenswrapper[4720]: I1013 17:24:59.099747 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:59 crc kubenswrapper[4720]: I1013 17:24:59.099762 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:59 crc kubenswrapper[4720]: I1013 17:24:59.099773 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:59Z","lastTransitionTime":"2025-10-13T17:24:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:24:59 crc kubenswrapper[4720]: I1013 17:24:59.167144 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 17:24:59 crc kubenswrapper[4720]: I1013 17:24:59.167174 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 17:24:59 crc kubenswrapper[4720]: I1013 17:24:59.167341 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 17:24:59 crc kubenswrapper[4720]: E1013 17:24:59.167331 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 13 17:24:59 crc kubenswrapper[4720]: E1013 17:24:59.167451 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 13 17:24:59 crc kubenswrapper[4720]: E1013 17:24:59.167532 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 13 17:24:59 crc kubenswrapper[4720]: I1013 17:24:59.201530 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:59 crc kubenswrapper[4720]: I1013 17:24:59.201581 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:59 crc kubenswrapper[4720]: I1013 17:24:59.201594 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:59 crc kubenswrapper[4720]: I1013 17:24:59.201610 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:59 crc kubenswrapper[4720]: I1013 17:24:59.201622 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:59Z","lastTransitionTime":"2025-10-13T17:24:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:24:59 crc kubenswrapper[4720]: I1013 17:24:59.304337 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:59 crc kubenswrapper[4720]: I1013 17:24:59.304386 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:59 crc kubenswrapper[4720]: I1013 17:24:59.304400 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:59 crc kubenswrapper[4720]: I1013 17:24:59.304418 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:59 crc kubenswrapper[4720]: I1013 17:24:59.304430 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:59Z","lastTransitionTime":"2025-10-13T17:24:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:59 crc kubenswrapper[4720]: I1013 17:24:59.406748 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:59 crc kubenswrapper[4720]: I1013 17:24:59.406782 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:59 crc kubenswrapper[4720]: I1013 17:24:59.406833 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:59 crc kubenswrapper[4720]: I1013 17:24:59.406847 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:59 crc kubenswrapper[4720]: I1013 17:24:59.406858 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:59Z","lastTransitionTime":"2025-10-13T17:24:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:59 crc kubenswrapper[4720]: I1013 17:24:59.509784 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:59 crc kubenswrapper[4720]: I1013 17:24:59.509851 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:59 crc kubenswrapper[4720]: I1013 17:24:59.509873 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:59 crc kubenswrapper[4720]: I1013 17:24:59.509903 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:59 crc kubenswrapper[4720]: I1013 17:24:59.509940 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:59Z","lastTransitionTime":"2025-10-13T17:24:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:24:59 crc kubenswrapper[4720]: I1013 17:24:59.612954 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:59 crc kubenswrapper[4720]: I1013 17:24:59.613005 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:59 crc kubenswrapper[4720]: I1013 17:24:59.613016 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:59 crc kubenswrapper[4720]: I1013 17:24:59.613034 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:59 crc kubenswrapper[4720]: I1013 17:24:59.613047 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:59Z","lastTransitionTime":"2025-10-13T17:24:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:59 crc kubenswrapper[4720]: I1013 17:24:59.716355 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:59 crc kubenswrapper[4720]: I1013 17:24:59.716462 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:59 crc kubenswrapper[4720]: I1013 17:24:59.716728 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:59 crc kubenswrapper[4720]: I1013 17:24:59.716754 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:59 crc kubenswrapper[4720]: I1013 17:24:59.716797 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:59Z","lastTransitionTime":"2025-10-13T17:24:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:24:59 crc kubenswrapper[4720]: I1013 17:24:59.819470 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:59 crc kubenswrapper[4720]: I1013 17:24:59.819524 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:59 crc kubenswrapper[4720]: I1013 17:24:59.819535 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:59 crc kubenswrapper[4720]: I1013 17:24:59.819553 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:59 crc kubenswrapper[4720]: I1013 17:24:59.819565 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:59Z","lastTransitionTime":"2025-10-13T17:24:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:24:59 crc kubenswrapper[4720]: I1013 17:24:59.922522 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:24:59 crc kubenswrapper[4720]: I1013 17:24:59.922586 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:24:59 crc kubenswrapper[4720]: I1013 17:24:59.922605 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:24:59 crc kubenswrapper[4720]: I1013 17:24:59.922630 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:24:59 crc kubenswrapper[4720]: I1013 17:24:59.922647 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:24:59Z","lastTransitionTime":"2025-10-13T17:24:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:00 crc kubenswrapper[4720]: I1013 17:25:00.025589 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:00 crc kubenswrapper[4720]: I1013 17:25:00.025640 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:00 crc kubenswrapper[4720]: I1013 17:25:00.025655 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:00 crc kubenswrapper[4720]: I1013 17:25:00.025673 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:00 crc kubenswrapper[4720]: I1013 17:25:00.025684 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:00Z","lastTransitionTime":"2025-10-13T17:25:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:00 crc kubenswrapper[4720]: I1013 17:25:00.128083 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:00 crc kubenswrapper[4720]: I1013 17:25:00.128125 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:00 crc kubenswrapper[4720]: I1013 17:25:00.128135 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:00 crc kubenswrapper[4720]: I1013 17:25:00.128149 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:00 crc kubenswrapper[4720]: I1013 17:25:00.128158 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:00Z","lastTransitionTime":"2025-10-13T17:25:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:00 crc kubenswrapper[4720]: I1013 17:25:00.168019 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c6ntg" Oct 13 17:25:00 crc kubenswrapper[4720]: E1013 17:25:00.168164 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c6ntg" podUID="c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61" Oct 13 17:25:00 crc kubenswrapper[4720]: I1013 17:25:00.230393 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:00 crc kubenswrapper[4720]: I1013 17:25:00.230475 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:00 crc kubenswrapper[4720]: I1013 17:25:00.230523 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:00 crc kubenswrapper[4720]: I1013 17:25:00.230539 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:00 crc kubenswrapper[4720]: I1013 17:25:00.230550 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:00Z","lastTransitionTime":"2025-10-13T17:25:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:00 crc kubenswrapper[4720]: I1013 17:25:00.332645 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:00 crc kubenswrapper[4720]: I1013 17:25:00.332679 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:00 crc kubenswrapper[4720]: I1013 17:25:00.332708 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:00 crc kubenswrapper[4720]: I1013 17:25:00.332726 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:00 crc kubenswrapper[4720]: I1013 17:25:00.332739 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:00Z","lastTransitionTime":"2025-10-13T17:25:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:00 crc kubenswrapper[4720]: I1013 17:25:00.436203 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:00 crc kubenswrapper[4720]: I1013 17:25:00.436242 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:00 crc kubenswrapper[4720]: I1013 17:25:00.436252 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:00 crc kubenswrapper[4720]: I1013 17:25:00.436268 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:00 crc kubenswrapper[4720]: I1013 17:25:00.436278 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:00Z","lastTransitionTime":"2025-10-13T17:25:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:00 crc kubenswrapper[4720]: I1013 17:25:00.538443 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:00 crc kubenswrapper[4720]: I1013 17:25:00.538498 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:00 crc kubenswrapper[4720]: I1013 17:25:00.538513 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:00 crc kubenswrapper[4720]: I1013 17:25:00.538536 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:00 crc kubenswrapper[4720]: I1013 17:25:00.538554 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:00Z","lastTransitionTime":"2025-10-13T17:25:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:00 crc kubenswrapper[4720]: I1013 17:25:00.640164 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:00 crc kubenswrapper[4720]: I1013 17:25:00.640213 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:00 crc kubenswrapper[4720]: I1013 17:25:00.640225 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:00 crc kubenswrapper[4720]: I1013 17:25:00.640238 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:00 crc kubenswrapper[4720]: I1013 17:25:00.640248 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:00Z","lastTransitionTime":"2025-10-13T17:25:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:00 crc kubenswrapper[4720]: I1013 17:25:00.743294 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:00 crc kubenswrapper[4720]: I1013 17:25:00.743344 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:00 crc kubenswrapper[4720]: I1013 17:25:00.743361 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:00 crc kubenswrapper[4720]: I1013 17:25:00.743387 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:00 crc kubenswrapper[4720]: I1013 17:25:00.743405 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:00Z","lastTransitionTime":"2025-10-13T17:25:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:00 crc kubenswrapper[4720]: I1013 17:25:00.845504 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:00 crc kubenswrapper[4720]: I1013 17:25:00.845551 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:00 crc kubenswrapper[4720]: I1013 17:25:00.845561 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:00 crc kubenswrapper[4720]: I1013 17:25:00.845579 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:00 crc kubenswrapper[4720]: I1013 17:25:00.845592 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:00Z","lastTransitionTime":"2025-10-13T17:25:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:00 crc kubenswrapper[4720]: I1013 17:25:00.948847 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:00 crc kubenswrapper[4720]: I1013 17:25:00.948907 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:00 crc kubenswrapper[4720]: I1013 17:25:00.948925 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:00 crc kubenswrapper[4720]: I1013 17:25:00.948950 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:00 crc kubenswrapper[4720]: I1013 17:25:00.948967 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:00Z","lastTransitionTime":"2025-10-13T17:25:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:01 crc kubenswrapper[4720]: I1013 17:25:01.052028 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:01 crc kubenswrapper[4720]: I1013 17:25:01.052092 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:01 crc kubenswrapper[4720]: I1013 17:25:01.052114 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:01 crc kubenswrapper[4720]: I1013 17:25:01.052156 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:01 crc kubenswrapper[4720]: I1013 17:25:01.052182 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:01Z","lastTransitionTime":"2025-10-13T17:25:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:01 crc kubenswrapper[4720]: I1013 17:25:01.155643 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:01 crc kubenswrapper[4720]: I1013 17:25:01.155732 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:01 crc kubenswrapper[4720]: I1013 17:25:01.155749 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:01 crc kubenswrapper[4720]: I1013 17:25:01.155775 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:01 crc kubenswrapper[4720]: I1013 17:25:01.155792 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:01Z","lastTransitionTime":"2025-10-13T17:25:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:01 crc kubenswrapper[4720]: I1013 17:25:01.167309 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 17:25:01 crc kubenswrapper[4720]: I1013 17:25:01.167354 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 17:25:01 crc kubenswrapper[4720]: I1013 17:25:01.167409 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 17:25:01 crc kubenswrapper[4720]: E1013 17:25:01.167483 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 13 17:25:01 crc kubenswrapper[4720]: E1013 17:25:01.167591 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 13 17:25:01 crc kubenswrapper[4720]: E1013 17:25:01.167741 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 13 17:25:01 crc kubenswrapper[4720]: I1013 17:25:01.258241 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:01 crc kubenswrapper[4720]: I1013 17:25:01.258307 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:01 crc kubenswrapper[4720]: I1013 17:25:01.258325 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:01 crc kubenswrapper[4720]: I1013 17:25:01.258349 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:01 crc kubenswrapper[4720]: I1013 17:25:01.258366 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:01Z","lastTransitionTime":"2025-10-13T17:25:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:01 crc kubenswrapper[4720]: I1013 17:25:01.361088 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:01 crc kubenswrapper[4720]: I1013 17:25:01.361137 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:01 crc kubenswrapper[4720]: I1013 17:25:01.361148 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:01 crc kubenswrapper[4720]: I1013 17:25:01.361167 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:01 crc kubenswrapper[4720]: I1013 17:25:01.361179 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:01Z","lastTransitionTime":"2025-10-13T17:25:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:01 crc kubenswrapper[4720]: I1013 17:25:01.463732 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:01 crc kubenswrapper[4720]: I1013 17:25:01.463824 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:01 crc kubenswrapper[4720]: I1013 17:25:01.463842 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:01 crc kubenswrapper[4720]: I1013 17:25:01.463870 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:01 crc kubenswrapper[4720]: I1013 17:25:01.463892 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:01Z","lastTransitionTime":"2025-10-13T17:25:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:01 crc kubenswrapper[4720]: I1013 17:25:01.566276 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:01 crc kubenswrapper[4720]: I1013 17:25:01.566339 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:01 crc kubenswrapper[4720]: I1013 17:25:01.566349 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:01 crc kubenswrapper[4720]: I1013 17:25:01.566370 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:01 crc kubenswrapper[4720]: I1013 17:25:01.566382 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:01Z","lastTransitionTime":"2025-10-13T17:25:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:01 crc kubenswrapper[4720]: I1013 17:25:01.669056 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:01 crc kubenswrapper[4720]: I1013 17:25:01.669099 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:01 crc kubenswrapper[4720]: I1013 17:25:01.669111 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:01 crc kubenswrapper[4720]: I1013 17:25:01.669149 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:01 crc kubenswrapper[4720]: I1013 17:25:01.669162 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:01Z","lastTransitionTime":"2025-10-13T17:25:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:01 crc kubenswrapper[4720]: I1013 17:25:01.772608 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:01 crc kubenswrapper[4720]: I1013 17:25:01.772685 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:01 crc kubenswrapper[4720]: I1013 17:25:01.772705 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:01 crc kubenswrapper[4720]: I1013 17:25:01.772737 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:01 crc kubenswrapper[4720]: I1013 17:25:01.772757 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:01Z","lastTransitionTime":"2025-10-13T17:25:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:01 crc kubenswrapper[4720]: I1013 17:25:01.872136 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:01 crc kubenswrapper[4720]: I1013 17:25:01.872254 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:01 crc kubenswrapper[4720]: I1013 17:25:01.872277 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:01 crc kubenswrapper[4720]: I1013 17:25:01.872306 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:01 crc kubenswrapper[4720]: I1013 17:25:01.872329 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:01Z","lastTransitionTime":"2025-10-13T17:25:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:01 crc kubenswrapper[4720]: E1013 17:25:01.892611 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a530de0f-daad-4050-9522-69c64a451e75\\\",\\\"systemUUID\\\":\\\"00bb7c43-79d6-45b5-bd02-4b71a0ba6837\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:01Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:01 crc kubenswrapper[4720]: I1013 17:25:01.898559 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:01 crc kubenswrapper[4720]: I1013 17:25:01.898637 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 13 17:25:01 crc kubenswrapper[4720]: I1013 17:25:01.898656 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:01 crc kubenswrapper[4720]: I1013 17:25:01.898693 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:01 crc kubenswrapper[4720]: I1013 17:25:01.898718 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:01Z","lastTransitionTime":"2025-10-13T17:25:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:01 crc kubenswrapper[4720]: E1013 17:25:01.919598 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a530de0f-daad-4050-9522-69c64a451e75\\\",\\\"systemUUID\\\":\\\"00bb7c43-79d6-45b5-bd02-4b71a0ba6837\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:01Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:01 crc kubenswrapper[4720]: I1013 17:25:01.925779 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:01 crc kubenswrapper[4720]: I1013 17:25:01.925825 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 13 17:25:01 crc kubenswrapper[4720]: I1013 17:25:01.925842 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:01 crc kubenswrapper[4720]: I1013 17:25:01.925867 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:01 crc kubenswrapper[4720]: I1013 17:25:01.925884 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:01Z","lastTransitionTime":"2025-10-13T17:25:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:01 crc kubenswrapper[4720]: E1013 17:25:01.944521 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a530de0f-daad-4050-9522-69c64a451e75\\\",\\\"systemUUID\\\":\\\"00bb7c43-79d6-45b5-bd02-4b71a0ba6837\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:01Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:01 crc kubenswrapper[4720]: I1013 17:25:01.949152 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:01 crc kubenswrapper[4720]: I1013 17:25:01.949241 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 13 17:25:01 crc kubenswrapper[4720]: I1013 17:25:01.949259 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:01 crc kubenswrapper[4720]: I1013 17:25:01.949312 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:01 crc kubenswrapper[4720]: I1013 17:25:01.949331 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:01Z","lastTransitionTime":"2025-10-13T17:25:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:01 crc kubenswrapper[4720]: E1013 17:25:01.969466 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a530de0f-daad-4050-9522-69c64a451e75\\\",\\\"systemUUID\\\":\\\"00bb7c43-79d6-45b5-bd02-4b71a0ba6837\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:01Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:01 crc kubenswrapper[4720]: I1013 17:25:01.974064 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:01 crc kubenswrapper[4720]: I1013 17:25:01.974104 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 13 17:25:01 crc kubenswrapper[4720]: I1013 17:25:01.974122 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:01 crc kubenswrapper[4720]: I1013 17:25:01.974147 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:01 crc kubenswrapper[4720]: I1013 17:25:01.974165 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:01Z","lastTransitionTime":"2025-10-13T17:25:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:01 crc kubenswrapper[4720]: E1013 17:25:01.992879 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a530de0f-daad-4050-9522-69c64a451e75\\\",\\\"systemUUID\\\":\\\"00bb7c43-79d6-45b5-bd02-4b71a0ba6837\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:01Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:01 crc kubenswrapper[4720]: E1013 17:25:01.993101 4720 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 13 17:25:01 crc kubenswrapper[4720]: I1013 17:25:01.995322 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Oct 13 17:25:01 crc kubenswrapper[4720]: I1013 17:25:01.995366 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:01 crc kubenswrapper[4720]: I1013 17:25:01.995384 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:01 crc kubenswrapper[4720]: I1013 17:25:01.995410 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:01 crc kubenswrapper[4720]: I1013 17:25:01.995429 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:01Z","lastTransitionTime":"2025-10-13T17:25:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:02 crc kubenswrapper[4720]: I1013 17:25:02.075175 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61-metrics-certs\") pod \"network-metrics-daemon-c6ntg\" (UID: \"c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61\") " pod="openshift-multus/network-metrics-daemon-c6ntg" Oct 13 17:25:02 crc kubenswrapper[4720]: E1013 17:25:02.075400 4720 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 13 17:25:02 crc kubenswrapper[4720]: E1013 17:25:02.075697 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61-metrics-certs podName:c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61 nodeName:}" failed. No retries permitted until 2025-10-13 17:25:10.075676443 +0000 UTC m=+55.532926575 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61-metrics-certs") pod "network-metrics-daemon-c6ntg" (UID: "c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 13 17:25:02 crc kubenswrapper[4720]: I1013 17:25:02.098596 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:02 crc kubenswrapper[4720]: I1013 17:25:02.098649 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:02 crc kubenswrapper[4720]: I1013 17:25:02.098668 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:02 crc kubenswrapper[4720]: I1013 17:25:02.098691 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:02 crc kubenswrapper[4720]: I1013 17:25:02.098711 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:02Z","lastTransitionTime":"2025-10-13T17:25:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:02 crc kubenswrapper[4720]: I1013 17:25:02.167867 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c6ntg" Oct 13 17:25:02 crc kubenswrapper[4720]: E1013 17:25:02.168438 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c6ntg" podUID="c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61" Oct 13 17:25:02 crc kubenswrapper[4720]: I1013 17:25:02.202072 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:02 crc kubenswrapper[4720]: I1013 17:25:02.202138 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:02 crc kubenswrapper[4720]: I1013 17:25:02.202155 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:02 crc kubenswrapper[4720]: I1013 17:25:02.202179 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:02 crc kubenswrapper[4720]: I1013 17:25:02.202233 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:02Z","lastTransitionTime":"2025-10-13T17:25:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:02 crc kubenswrapper[4720]: I1013 17:25:02.305533 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:02 crc kubenswrapper[4720]: I1013 17:25:02.305572 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:02 crc kubenswrapper[4720]: I1013 17:25:02.305581 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:02 crc kubenswrapper[4720]: I1013 17:25:02.305595 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:02 crc kubenswrapper[4720]: I1013 17:25:02.305604 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:02Z","lastTransitionTime":"2025-10-13T17:25:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:02 crc kubenswrapper[4720]: I1013 17:25:02.408582 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:02 crc kubenswrapper[4720]: I1013 17:25:02.408644 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:02 crc kubenswrapper[4720]: I1013 17:25:02.408665 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:02 crc kubenswrapper[4720]: I1013 17:25:02.408689 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:02 crc kubenswrapper[4720]: I1013 17:25:02.408705 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:02Z","lastTransitionTime":"2025-10-13T17:25:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:02 crc kubenswrapper[4720]: I1013 17:25:02.511267 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:02 crc kubenswrapper[4720]: I1013 17:25:02.511504 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:02 crc kubenswrapper[4720]: I1013 17:25:02.511609 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:02 crc kubenswrapper[4720]: I1013 17:25:02.511695 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:02 crc kubenswrapper[4720]: I1013 17:25:02.511779 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:02Z","lastTransitionTime":"2025-10-13T17:25:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:02 crc kubenswrapper[4720]: I1013 17:25:02.614503 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:02 crc kubenswrapper[4720]: I1013 17:25:02.614550 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:02 crc kubenswrapper[4720]: I1013 17:25:02.614567 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:02 crc kubenswrapper[4720]: I1013 17:25:02.614592 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:02 crc kubenswrapper[4720]: I1013 17:25:02.614608 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:02Z","lastTransitionTime":"2025-10-13T17:25:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:02 crc kubenswrapper[4720]: I1013 17:25:02.717045 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:02 crc kubenswrapper[4720]: I1013 17:25:02.717088 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:02 crc kubenswrapper[4720]: I1013 17:25:02.717098 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:02 crc kubenswrapper[4720]: I1013 17:25:02.717113 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:02 crc kubenswrapper[4720]: I1013 17:25:02.717123 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:02Z","lastTransitionTime":"2025-10-13T17:25:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:02 crc kubenswrapper[4720]: I1013 17:25:02.820427 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:02 crc kubenswrapper[4720]: I1013 17:25:02.820494 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:02 crc kubenswrapper[4720]: I1013 17:25:02.820505 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:02 crc kubenswrapper[4720]: I1013 17:25:02.820522 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:02 crc kubenswrapper[4720]: I1013 17:25:02.820533 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:02Z","lastTransitionTime":"2025-10-13T17:25:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:02 crc kubenswrapper[4720]: I1013 17:25:02.923904 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:02 crc kubenswrapper[4720]: I1013 17:25:02.923936 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:02 crc kubenswrapper[4720]: I1013 17:25:02.923945 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:02 crc kubenswrapper[4720]: I1013 17:25:02.923960 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:02 crc kubenswrapper[4720]: I1013 17:25:02.923969 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:02Z","lastTransitionTime":"2025-10-13T17:25:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:03 crc kubenswrapper[4720]: I1013 17:25:03.027353 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:03 crc kubenswrapper[4720]: I1013 17:25:03.027415 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:03 crc kubenswrapper[4720]: I1013 17:25:03.027426 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:03 crc kubenswrapper[4720]: I1013 17:25:03.027440 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:03 crc kubenswrapper[4720]: I1013 17:25:03.027451 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:03Z","lastTransitionTime":"2025-10-13T17:25:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:03 crc kubenswrapper[4720]: I1013 17:25:03.130308 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:03 crc kubenswrapper[4720]: I1013 17:25:03.130352 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:03 crc kubenswrapper[4720]: I1013 17:25:03.130364 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:03 crc kubenswrapper[4720]: I1013 17:25:03.130382 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:03 crc kubenswrapper[4720]: I1013 17:25:03.130393 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:03Z","lastTransitionTime":"2025-10-13T17:25:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:03 crc kubenswrapper[4720]: I1013 17:25:03.167433 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 17:25:03 crc kubenswrapper[4720]: I1013 17:25:03.167457 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 17:25:03 crc kubenswrapper[4720]: E1013 17:25:03.167595 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 13 17:25:03 crc kubenswrapper[4720]: I1013 17:25:03.167707 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 17:25:03 crc kubenswrapper[4720]: E1013 17:25:03.167792 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 13 17:25:03 crc kubenswrapper[4720]: E1013 17:25:03.167879 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 13 17:25:03 crc kubenswrapper[4720]: I1013 17:25:03.233882 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:03 crc kubenswrapper[4720]: I1013 17:25:03.233966 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:03 crc kubenswrapper[4720]: I1013 17:25:03.233988 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:03 crc kubenswrapper[4720]: I1013 17:25:03.234014 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:03 crc kubenswrapper[4720]: I1013 17:25:03.234038 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:03Z","lastTransitionTime":"2025-10-13T17:25:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:03 crc kubenswrapper[4720]: I1013 17:25:03.336511 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:03 crc kubenswrapper[4720]: I1013 17:25:03.336550 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:03 crc kubenswrapper[4720]: I1013 17:25:03.336562 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:03 crc kubenswrapper[4720]: I1013 17:25:03.336575 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:03 crc kubenswrapper[4720]: I1013 17:25:03.336585 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:03Z","lastTransitionTime":"2025-10-13T17:25:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:03 crc kubenswrapper[4720]: I1013 17:25:03.439711 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:03 crc kubenswrapper[4720]: I1013 17:25:03.439990 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:03 crc kubenswrapper[4720]: I1013 17:25:03.440164 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:03 crc kubenswrapper[4720]: I1013 17:25:03.440332 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:03 crc kubenswrapper[4720]: I1013 17:25:03.440448 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:03Z","lastTransitionTime":"2025-10-13T17:25:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:03 crc kubenswrapper[4720]: I1013 17:25:03.542735 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:03 crc kubenswrapper[4720]: I1013 17:25:03.542764 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:03 crc kubenswrapper[4720]: I1013 17:25:03.542774 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:03 crc kubenswrapper[4720]: I1013 17:25:03.542789 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:03 crc kubenswrapper[4720]: I1013 17:25:03.542800 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:03Z","lastTransitionTime":"2025-10-13T17:25:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:03 crc kubenswrapper[4720]: I1013 17:25:03.645708 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:03 crc kubenswrapper[4720]: I1013 17:25:03.645757 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:03 crc kubenswrapper[4720]: I1013 17:25:03.645768 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:03 crc kubenswrapper[4720]: I1013 17:25:03.645787 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:03 crc kubenswrapper[4720]: I1013 17:25:03.645800 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:03Z","lastTransitionTime":"2025-10-13T17:25:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:03 crc kubenswrapper[4720]: I1013 17:25:03.747959 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:03 crc kubenswrapper[4720]: I1013 17:25:03.748006 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:03 crc kubenswrapper[4720]: I1013 17:25:03.748017 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:03 crc kubenswrapper[4720]: I1013 17:25:03.748033 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:03 crc kubenswrapper[4720]: I1013 17:25:03.748044 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:03Z","lastTransitionTime":"2025-10-13T17:25:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:03 crc kubenswrapper[4720]: I1013 17:25:03.850666 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:03 crc kubenswrapper[4720]: I1013 17:25:03.850703 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:03 crc kubenswrapper[4720]: I1013 17:25:03.850712 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:03 crc kubenswrapper[4720]: I1013 17:25:03.850725 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:03 crc kubenswrapper[4720]: I1013 17:25:03.850736 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:03Z","lastTransitionTime":"2025-10-13T17:25:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:03 crc kubenswrapper[4720]: I1013 17:25:03.953761 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:03 crc kubenswrapper[4720]: I1013 17:25:03.953836 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:03 crc kubenswrapper[4720]: I1013 17:25:03.953858 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:03 crc kubenswrapper[4720]: I1013 17:25:03.953888 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:03 crc kubenswrapper[4720]: I1013 17:25:03.953908 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:03Z","lastTransitionTime":"2025-10-13T17:25:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:04 crc kubenswrapper[4720]: I1013 17:25:04.056477 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:04 crc kubenswrapper[4720]: I1013 17:25:04.056549 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:04 crc kubenswrapper[4720]: I1013 17:25:04.056571 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:04 crc kubenswrapper[4720]: I1013 17:25:04.056602 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:04 crc kubenswrapper[4720]: I1013 17:25:04.056625 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:04Z","lastTransitionTime":"2025-10-13T17:25:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:04 crc kubenswrapper[4720]: I1013 17:25:04.159838 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:04 crc kubenswrapper[4720]: I1013 17:25:04.159899 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:04 crc kubenswrapper[4720]: I1013 17:25:04.159910 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:04 crc kubenswrapper[4720]: I1013 17:25:04.159925 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:04 crc kubenswrapper[4720]: I1013 17:25:04.159936 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:04Z","lastTransitionTime":"2025-10-13T17:25:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:04 crc kubenswrapper[4720]: I1013 17:25:04.167122 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c6ntg" Oct 13 17:25:04 crc kubenswrapper[4720]: E1013 17:25:04.167383 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-c6ntg" podUID="c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61" Oct 13 17:25:04 crc kubenswrapper[4720]: I1013 17:25:04.262113 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:04 crc kubenswrapper[4720]: I1013 17:25:04.262148 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:04 crc kubenswrapper[4720]: I1013 17:25:04.262157 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:04 crc kubenswrapper[4720]: I1013 17:25:04.262171 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:04 crc kubenswrapper[4720]: I1013 17:25:04.262202 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:04Z","lastTransitionTime":"2025-10-13T17:25:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:04 crc kubenswrapper[4720]: I1013 17:25:04.365023 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:04 crc kubenswrapper[4720]: I1013 17:25:04.365065 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:04 crc kubenswrapper[4720]: I1013 17:25:04.365076 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:04 crc kubenswrapper[4720]: I1013 17:25:04.365090 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:04 crc kubenswrapper[4720]: I1013 17:25:04.365099 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:04Z","lastTransitionTime":"2025-10-13T17:25:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:04 crc kubenswrapper[4720]: I1013 17:25:04.467667 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:04 crc kubenswrapper[4720]: I1013 17:25:04.467723 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:04 crc kubenswrapper[4720]: I1013 17:25:04.467737 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:04 crc kubenswrapper[4720]: I1013 17:25:04.467756 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:04 crc kubenswrapper[4720]: I1013 17:25:04.467768 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:04Z","lastTransitionTime":"2025-10-13T17:25:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:04 crc kubenswrapper[4720]: I1013 17:25:04.569670 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:04 crc kubenswrapper[4720]: I1013 17:25:04.569727 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:04 crc kubenswrapper[4720]: I1013 17:25:04.569745 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:04 crc kubenswrapper[4720]: I1013 17:25:04.569767 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:04 crc kubenswrapper[4720]: I1013 17:25:04.569788 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:04Z","lastTransitionTime":"2025-10-13T17:25:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:04 crc kubenswrapper[4720]: I1013 17:25:04.671732 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:04 crc kubenswrapper[4720]: I1013 17:25:04.671781 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:04 crc kubenswrapper[4720]: I1013 17:25:04.671797 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:04 crc kubenswrapper[4720]: I1013 17:25:04.671820 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:04 crc kubenswrapper[4720]: I1013 17:25:04.671837 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:04Z","lastTransitionTime":"2025-10-13T17:25:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:04 crc kubenswrapper[4720]: I1013 17:25:04.774602 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:04 crc kubenswrapper[4720]: I1013 17:25:04.774668 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:04 crc kubenswrapper[4720]: I1013 17:25:04.774680 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:04 crc kubenswrapper[4720]: I1013 17:25:04.774706 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:04 crc kubenswrapper[4720]: I1013 17:25:04.774724 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:04Z","lastTransitionTime":"2025-10-13T17:25:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:04 crc kubenswrapper[4720]: I1013 17:25:04.878445 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:04 crc kubenswrapper[4720]: I1013 17:25:04.878518 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:04 crc kubenswrapper[4720]: I1013 17:25:04.878530 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:04 crc kubenswrapper[4720]: I1013 17:25:04.878550 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:04 crc kubenswrapper[4720]: I1013 17:25:04.878564 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:04Z","lastTransitionTime":"2025-10-13T17:25:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:04 crc kubenswrapper[4720]: I1013 17:25:04.981377 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:04 crc kubenswrapper[4720]: I1013 17:25:04.981428 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:04 crc kubenswrapper[4720]: I1013 17:25:04.981439 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:04 crc kubenswrapper[4720]: I1013 17:25:04.981457 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:04 crc kubenswrapper[4720]: I1013 17:25:04.981469 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:04Z","lastTransitionTime":"2025-10-13T17:25:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:05 crc kubenswrapper[4720]: I1013 17:25:05.084264 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:05 crc kubenswrapper[4720]: I1013 17:25:05.084369 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:05 crc kubenswrapper[4720]: I1013 17:25:05.084380 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:05 crc kubenswrapper[4720]: I1013 17:25:05.084398 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:05 crc kubenswrapper[4720]: I1013 17:25:05.084410 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:05Z","lastTransitionTime":"2025-10-13T17:25:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:05 crc kubenswrapper[4720]: I1013 17:25:05.168104 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 17:25:05 crc kubenswrapper[4720]: I1013 17:25:05.168238 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 17:25:05 crc kubenswrapper[4720]: I1013 17:25:05.168123 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 17:25:05 crc kubenswrapper[4720]: E1013 17:25:05.168450 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 13 17:25:05 crc kubenswrapper[4720]: E1013 17:25:05.168547 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 13 17:25:05 crc kubenswrapper[4720]: E1013 17:25:05.168800 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 13 17:25:05 crc kubenswrapper[4720]: I1013 17:25:05.186437 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:05 crc kubenswrapper[4720]: I1013 17:25:05.186449 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:05Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:05 crc kubenswrapper[4720]: I1013 17:25:05.186496 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:05 crc kubenswrapper[4720]: I1013 17:25:05.186665 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:05 crc kubenswrapper[4720]: I1013 17:25:05.186696 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:05 crc kubenswrapper[4720]: I1013 17:25:05.186709 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:05Z","lastTransitionTime":"2025-10-13T17:25:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:05 crc kubenswrapper[4720]: I1013 17:25:05.204878 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://705766b44d481300d0fd8cc654aa2dc9046733e5a71ad3a7ab789f85e737dcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a69b8825692cbbb7cce0a52b1ccb7b137ce6c9f8a8d788fff95bb7228cb40e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:05Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:05 crc kubenswrapper[4720]: I1013 17:25:05.219169 4720 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f85e4aab30e7f49ef05e48cc22fc22e0291e32604a8f74ae1cc2ea57b5889e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:05Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:05 crc kubenswrapper[4720]: I1013 17:25:05.244240 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f856022da70859465022db433585bd8cad12f8cd2aeae7943491d3f7e5049256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:05Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:05 crc kubenswrapper[4720]: I1013 17:25:05.257300 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pmjlm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f85220d6-f940-4538-806a-2b26bacd7b09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6315723f6cd375a392c8c427a737021b7e01415c902514b09ee43407749caa34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7795z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:39Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pmjlm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:05Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:05 crc kubenswrapper[4720]: I1013 17:25:05.274473 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7baf94d8-250c-4568-ab1c-a1429b8caf70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435ba16189d6d19892aaab158512c1a84a72103779e3cc4b2c634de344583c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e566f3b66d404dd8da93bba1745b9e67cd63532330591f5b735f6826c07eae2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ec64662f56c2609ebac1383a78840ba4e0d9fb1fe10e666169285d3852e6b40\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://32d8f079c2674e0c6bea1837755db23bc3205da6a01ac1896d6293226d9c6fb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:05Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:05 crc kubenswrapper[4720]: I1013 17:25:05.284673 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c6ntg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxbvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxbvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c6ntg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:05Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:05 crc kubenswrapper[4720]: I1013 17:25:05.289474 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:05 crc kubenswrapper[4720]: I1013 17:25:05.289504 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:05 crc kubenswrapper[4720]: I1013 17:25:05.289512 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:05 crc kubenswrapper[4720]: I1013 17:25:05.289548 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:05 crc kubenswrapper[4720]: I1013 17:25:05.289561 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:05Z","lastTransitionTime":"2025-10-13T17:25:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:05 crc kubenswrapper[4720]: I1013 17:25:05.302151 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce442c80-fcde-4b79-b6f9-f8f25771dfd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10e7a946185d6502e37a1ab60ad1eca36cce991f230188d22b2d6e2ea554f920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4cjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://864e330267c8cec8487e1dcf4a50bbb2ba37d81d80361f0873e6bf69de1ed721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4cjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-htwnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:05Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:05 crc kubenswrapper[4720]: I1013 17:25:05.324714 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5rgfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d7408e5-b529-4396-92b3-2fed275c3250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://439dea1ed9724a215fa1aa1f4a4bd84a4bb998f2f86814f5f4eb46989242f11c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c62b3a707852ca08b9d89356832535447d4f46ec09ab4a94cb00a86b1c1c5b03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c62b3a707852ca08b9d89356832535447d4f46ec09ab4a94cb00a86b1c1c5b03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb704e899e1b5a0193271209940163a17a1dcb93a2fc5d447beb8a081799897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb704e899e1b5a0193271209940163a17a1dcb93a2fc5d447beb8a081799897b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f42ff97f0a5fb40b52d5445a00eb8f70f8028610b76baf33117b425b821b5ec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f42ff97f0a5fb40b52d5445a00eb8f70f8028610b76baf33117b425b821b5ec2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://471bb77042e72bcac4bb28dd26f6151a7154bea0b3ec92272e405e9b22bc0170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://471bb77042e72bcac4bb28dd26f6151a7154bea0b3ec92272e405e9b22bc0170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20cb8981843cc39e8722f560f7a7e75483c93a30edffde9835533f8dc5b50587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20cb8981843cc39e8722f560f7a7e75483c93a30edffde9835533f8dc5b50587\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79851ed8c84027d1bb9eccb69bed004ae1cdbe934245c66325eafe42b12ea808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79851ed8c84027d1bb9eccb69bed004ae1cdbe934245c66325eafe42b12ea808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5rgfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:05Z is after 
2025-08-24T17:21:41Z" Oct 13 17:25:05 crc kubenswrapper[4720]: I1013 17:25:05.345454 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:05Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:05 crc kubenswrapper[4720]: I1013 17:25:05.366168 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:05Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:05 crc kubenswrapper[4720]: I1013 17:25:05.383506 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dnjrc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e6efca3-dcc8-488e-8fb1-e14ee0396158\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0f539bcc67139c5b3d51ab01a3114afc20fc44cf5e286dce2b4cad0ea0629c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g676g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c792a043326901c5cbcae8c96b168aee71b56f6e91d0edfc7f58abb5da9c9c28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g676g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dnjrc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:05Z is after 2025-08-24T17:21:41Z" Oct 13 
17:25:05 crc kubenswrapper[4720]: I1013 17:25:05.393395 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:05 crc kubenswrapper[4720]: I1013 17:25:05.393542 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:05 crc kubenswrapper[4720]: I1013 17:25:05.393573 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:05 crc kubenswrapper[4720]: I1013 17:25:05.393656 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:05 crc kubenswrapper[4720]: I1013 17:25:05.393897 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:05Z","lastTransitionTime":"2025-10-13T17:25:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:05 crc kubenswrapper[4720]: I1013 17:25:05.411487 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a1f1696-ed7f-4482-a300-ca25b68e09e3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2507a30017fbd9947ca456b84812f0b26baabf572768870f5a34b3c7699443cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc60672e282de4d3d6a2d8fd64306ec0fc0d032ade655f88afb0c0b9e0a8e589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9009
2272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9cf903094dae4e559bcef3e269240670b5d4693344d600779c394ce1e039b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06eea6556daedf913d87b8b0527b337070528b07cc026de878b4df332598a2d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b452d113d48729b6d4ab59f80138b15e7cb16cc16eef4ec9916901a067986d07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94a4c50dd473ea159477a3032070031c70d5a9217c054f826e3c0af6ae4df563\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94a4c50dd473ea159477a3032070031c70d5a9217c054f826e3c0af6ae4df563\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41221277cc046beb4578e4f6a0c7030df64342da1be667a920080a429836c657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41221277cc046beb4578e4f6a0c7030df64342da1be667a920080a429836c657\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d443225a5db6a200176bd16ba94dd92c1e81eef85d61cff623cfe7959077ef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d443225a5db6a200176bd16ba94dd92c1e81eef85d61cff623cfe7959077ef6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:05Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:05 crc kubenswrapper[4720]: I1013 17:25:05.441210 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8064812e-b6aa-4f56-81c9-16154c00abad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c28fde07efa17d20ddecd1c8a0da09a6fa09e98a90b14718565c00aef9c0d25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bbd41d9b0518c8b060b1d01c335b6c9923e7196624fbd49326133faaa2e36ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd8a4dfb390a1643edb4eb4bd01970eaffd8dab0f748effac9e7c691ef4a3ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://013dfade47eccae4339fca0d379d914f63502613f87f1bc621113f7008634475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69d511a66a84fb0fa90f426bd4609c2b3ad915571802c6c024e0370d6e1683a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1aefa46ee989416f0e4fd05f6f12031da637a354fadd7f6caef75ddc2c74ddf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://696c3d0d1105e26a509a7a8f91eb2ade836194b7de933f28b7a3d380b5964ee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://696c3d0d1105e26a509a7a8f91eb2ade836194b7de933f28b7a3d380b5964ee3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-13T17:24:51Z\\\",\\\"message\\\":\\\"loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/catalog-operator-metrics\\\\\\\"}\\\\nI1013 17:24:51.349805 6169 services_controller.go:454] Service openshift-machine-config-operator/machine-config-daemon for network=default has 2 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI1013 17:24:51.349817 6169 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1013 17:24:51.349829 6169 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-authentication-operator/metrics\\\\\\\"}\\\\nI1013 17:24:51.349839 6169 services_controller.go:360] Finished syncing service metrics on namespace openshift-authentication-operator for network=default : 675.638µs\\\\nI1013 17:24:51.349848 6169 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nF1013 17:24:51.349740 6169 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initializatio\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pn6lz_openshift-ovn-kubernetes(8064812e-b6aa-4f56-81c9-16154c00abad)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34b1fffbebfbd3599bed22fe3fe80129f5b704560c54d09e9ab5e9ea8bba4513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:05Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:05 crc kubenswrapper[4720]: I1013 17:25:05.454319 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bvtrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b35c333b-0d4e-4d9a-a8fe-a1c6f5fbbf75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03411ae3786161605ed0cb08a21661c7c7a4cc93f91320586547b8ffab3e90ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g97k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bvtrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:05Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:05 crc kubenswrapper[4720]: I1013 17:25:05.469597 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea323a1a-c607-40cf-b61c-b7ff03399c45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88b8a46e2e2c2342bd22a4a9b42cadd0188ec83f23815773f6ffb46be79fdf83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7d875b0d7eb9f13bc697597a33f7196598ae679980b13842e92acf20477526f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e18d0e9df43d366a6e7088cc3d5bdc794882828f60f008eaf29e788e0141ac2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-
apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef2ba92cea782dd4fd31a511e1086777b58c00340014dff74ff6615b07bd10c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7977e20d8558810fb9f8f827830d0378de7d15c71340d396a95b506902c0962\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-13T17:24:34Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1013 17:24:28.736714 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1013 17:24:28.738546 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445610647/tls.crt::/tmp/serving-cert-445610647/tls.key\\\\\\\"\\\\nI1013 17:24:34.860768 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1013 17:24:34.864611 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1013 17:24:34.864632 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1013 17:24:34.864654 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1013 17:24:34.864660 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1013 17:24:34.871230 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1013 17:24:34.871248 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1013 17:24:34.871273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 17:24:34.871290 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 17:24:34.871305 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1013 17:24:34.871314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1013 17:24:34.871324 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1013 17:24:34.871335 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1013 17:24:34.872239 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd536f783e3f8f974f25772994c3d00a01c2ba66997a6e06236f1b83b890f8f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00ad2086cbe25df22e52552acfb410e252eb17a610267b47eb7cfc5dced1c9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00ad2086cbe25df22e52552acfb410e252eb17a610267b47eb7cfc5dced1c9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:05Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:05 crc kubenswrapper[4720]: I1013 17:25:05.483803 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lxmjt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b45ec2d-5bea-4007-a49f-224a866f93eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00868f87f1a3020a1db7e50377bbe90203d6e93799c8ec9f3100dd52a6c0cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xsrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lxmjt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:05Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:05 crc kubenswrapper[4720]: I1013 17:25:05.496247 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:05 crc kubenswrapper[4720]: I1013 17:25:05.496294 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:05 crc kubenswrapper[4720]: I1013 17:25:05.496306 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:05 crc kubenswrapper[4720]: I1013 17:25:05.496327 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:05 crc kubenswrapper[4720]: I1013 17:25:05.496339 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:05Z","lastTransitionTime":"2025-10-13T17:25:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:05 crc kubenswrapper[4720]: I1013 17:25:05.598365 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:05 crc kubenswrapper[4720]: I1013 17:25:05.598406 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:05 crc kubenswrapper[4720]: I1013 17:25:05.598415 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:05 crc kubenswrapper[4720]: I1013 17:25:05.598429 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:05 crc kubenswrapper[4720]: I1013 17:25:05.598439 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:05Z","lastTransitionTime":"2025-10-13T17:25:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:05 crc kubenswrapper[4720]: I1013 17:25:05.701064 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:05 crc kubenswrapper[4720]: I1013 17:25:05.701123 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:05 crc kubenswrapper[4720]: I1013 17:25:05.701145 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:05 crc kubenswrapper[4720]: I1013 17:25:05.701174 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:05 crc kubenswrapper[4720]: I1013 17:25:05.701237 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:05Z","lastTransitionTime":"2025-10-13T17:25:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:05 crc kubenswrapper[4720]: I1013 17:25:05.803859 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:05 crc kubenswrapper[4720]: I1013 17:25:05.803934 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:05 crc kubenswrapper[4720]: I1013 17:25:05.803953 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:05 crc kubenswrapper[4720]: I1013 17:25:05.803978 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:05 crc kubenswrapper[4720]: I1013 17:25:05.803997 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:05Z","lastTransitionTime":"2025-10-13T17:25:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:05 crc kubenswrapper[4720]: I1013 17:25:05.906347 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:05 crc kubenswrapper[4720]: I1013 17:25:05.906392 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:05 crc kubenswrapper[4720]: I1013 17:25:05.906408 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:05 crc kubenswrapper[4720]: I1013 17:25:05.906429 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:05 crc kubenswrapper[4720]: I1013 17:25:05.906446 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:05Z","lastTransitionTime":"2025-10-13T17:25:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:06 crc kubenswrapper[4720]: I1013 17:25:06.008794 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:06 crc kubenswrapper[4720]: I1013 17:25:06.008832 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:06 crc kubenswrapper[4720]: I1013 17:25:06.008841 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:06 crc kubenswrapper[4720]: I1013 17:25:06.008855 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:06 crc kubenswrapper[4720]: I1013 17:25:06.008867 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:06Z","lastTransitionTime":"2025-10-13T17:25:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:06 crc kubenswrapper[4720]: I1013 17:25:06.111971 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:06 crc kubenswrapper[4720]: I1013 17:25:06.112011 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:06 crc kubenswrapper[4720]: I1013 17:25:06.112021 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:06 crc kubenswrapper[4720]: I1013 17:25:06.112037 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:06 crc kubenswrapper[4720]: I1013 17:25:06.112047 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:06Z","lastTransitionTime":"2025-10-13T17:25:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:06 crc kubenswrapper[4720]: I1013 17:25:06.168005 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c6ntg" Oct 13 17:25:06 crc kubenswrapper[4720]: E1013 17:25:06.168154 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-c6ntg" podUID="c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61" Oct 13 17:25:06 crc kubenswrapper[4720]: I1013 17:25:06.215034 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:06 crc kubenswrapper[4720]: I1013 17:25:06.215119 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:06 crc kubenswrapper[4720]: I1013 17:25:06.215142 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:06 crc kubenswrapper[4720]: I1013 17:25:06.215175 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:06 crc kubenswrapper[4720]: I1013 17:25:06.215224 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:06Z","lastTransitionTime":"2025-10-13T17:25:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:06 crc kubenswrapper[4720]: I1013 17:25:06.318448 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:06 crc kubenswrapper[4720]: I1013 17:25:06.318494 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:06 crc kubenswrapper[4720]: I1013 17:25:06.318505 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:06 crc kubenswrapper[4720]: I1013 17:25:06.318521 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:06 crc kubenswrapper[4720]: I1013 17:25:06.318531 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:06Z","lastTransitionTime":"2025-10-13T17:25:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:06 crc kubenswrapper[4720]: I1013 17:25:06.421786 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:06 crc kubenswrapper[4720]: I1013 17:25:06.421845 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:06 crc kubenswrapper[4720]: I1013 17:25:06.421864 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:06 crc kubenswrapper[4720]: I1013 17:25:06.421887 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:06 crc kubenswrapper[4720]: I1013 17:25:06.421906 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:06Z","lastTransitionTime":"2025-10-13T17:25:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:06 crc kubenswrapper[4720]: I1013 17:25:06.524595 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:06 crc kubenswrapper[4720]: I1013 17:25:06.524626 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:06 crc kubenswrapper[4720]: I1013 17:25:06.524638 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:06 crc kubenswrapper[4720]: I1013 17:25:06.524650 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:06 crc kubenswrapper[4720]: I1013 17:25:06.524659 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:06Z","lastTransitionTime":"2025-10-13T17:25:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:06 crc kubenswrapper[4720]: I1013 17:25:06.627523 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:06 crc kubenswrapper[4720]: I1013 17:25:06.627903 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:06 crc kubenswrapper[4720]: I1013 17:25:06.628090 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:06 crc kubenswrapper[4720]: I1013 17:25:06.628322 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:06 crc kubenswrapper[4720]: I1013 17:25:06.628560 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:06Z","lastTransitionTime":"2025-10-13T17:25:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:06 crc kubenswrapper[4720]: I1013 17:25:06.731944 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:06 crc kubenswrapper[4720]: I1013 17:25:06.731994 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:06 crc kubenswrapper[4720]: I1013 17:25:06.732006 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:06 crc kubenswrapper[4720]: I1013 17:25:06.732021 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:06 crc kubenswrapper[4720]: I1013 17:25:06.732031 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:06Z","lastTransitionTime":"2025-10-13T17:25:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:06 crc kubenswrapper[4720]: I1013 17:25:06.835057 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:06 crc kubenswrapper[4720]: I1013 17:25:06.836120 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:06 crc kubenswrapper[4720]: I1013 17:25:06.836377 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:06 crc kubenswrapper[4720]: I1013 17:25:06.836534 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:06 crc kubenswrapper[4720]: I1013 17:25:06.836805 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:06Z","lastTransitionTime":"2025-10-13T17:25:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:06 crc kubenswrapper[4720]: I1013 17:25:06.925470 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 17:25:06 crc kubenswrapper[4720]: E1013 17:25:06.925681 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 17:25:38.925655177 +0000 UTC m=+84.382905309 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 17:25:06 crc kubenswrapper[4720]: I1013 17:25:06.939116 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:06 crc kubenswrapper[4720]: I1013 17:25:06.939158 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:06 crc kubenswrapper[4720]: I1013 17:25:06.939168 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:06 crc kubenswrapper[4720]: I1013 17:25:06.939182 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:06 crc kubenswrapper[4720]: I1013 17:25:06.939211 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:06Z","lastTransitionTime":"2025-10-13T17:25:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:07 crc kubenswrapper[4720]: I1013 17:25:07.027392 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 17:25:07 crc kubenswrapper[4720]: I1013 17:25:07.027471 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 17:25:07 crc kubenswrapper[4720]: I1013 17:25:07.027520 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 17:25:07 crc kubenswrapper[4720]: I1013 17:25:07.027576 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 17:25:07 crc kubenswrapper[4720]: E1013 17:25:07.027693 4720 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 13 17:25:07 crc kubenswrapper[4720]: E1013 17:25:07.027750 4720 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 13 17:25:07 crc kubenswrapper[4720]: E1013 17:25:07.027771 4720 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 13 17:25:07 crc kubenswrapper[4720]: E1013 17:25:07.027794 4720 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 13 17:25:07 crc kubenswrapper[4720]: E1013 17:25:07.027693 4720 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 13 17:25:07 crc kubenswrapper[4720]: E1013 17:25:07.027876 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-13 17:25:39.027850757 +0000 UTC m=+84.485100919 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 13 17:25:07 crc kubenswrapper[4720]: E1013 17:25:07.028129 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-13 17:25:39.028004611 +0000 UTC m=+84.485254783 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 13 17:25:07 crc kubenswrapper[4720]: E1013 17:25:07.028211 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-13 17:25:39.028175976 +0000 UTC m=+84.485426148 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 13 17:25:07 crc kubenswrapper[4720]: E1013 17:25:07.028293 4720 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 13 17:25:07 crc kubenswrapper[4720]: E1013 17:25:07.028318 4720 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 13 17:25:07 crc kubenswrapper[4720]: E1013 17:25:07.028336 4720 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 13 17:25:07 crc kubenswrapper[4720]: E1013 17:25:07.028393 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-13 17:25:39.028376161 +0000 UTC m=+84.485626323 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 13 17:25:07 crc kubenswrapper[4720]: I1013 17:25:07.042182 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:07 crc kubenswrapper[4720]: I1013 17:25:07.042236 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:07 crc kubenswrapper[4720]: I1013 17:25:07.042247 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:07 crc kubenswrapper[4720]: I1013 17:25:07.042263 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:07 crc kubenswrapper[4720]: I1013 17:25:07.042274 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:07Z","lastTransitionTime":"2025-10-13T17:25:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
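
The nestedpendingoperations lines above show the kubelet pushing each failed MountVolume.SetUp retry out by 32s (durationBeforeRetry 32s, next attempt at m=+84.48) instead of hammering the API server: per-volume retries back off exponentially. A minimal Go sketch of that doubling schedule; the kubelet's exact initial delay, factor, and cap are not visible in this log, so the 500ms start, factor of 2, and 2m cap below are assumptions for illustration only:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Assumed values for illustration; not read from kubelet config.
    	delay := 500 * time.Millisecond
    	maxDelay := 2 * time.Minute
    	for attempt := 1; attempt <= 10; attempt++ {
    		fmt.Printf("attempt %d: wait %v before retrying\n", attempt, delay)
    		delay *= 2 // double after every failure...
    		if delay > maxDelay {
    			delay = maxDelay // ...but never beyond the cap
    		}
    	}
    }

Starting from 500ms, the seventh consecutive failure pushes the wait to 32s, which lines up with the durationBeforeRetry 32s recorded here.
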
Has your network provider started?"} Oct 13 17:25:07 crc kubenswrapper[4720]: I1013 17:25:07.144812 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:07 crc kubenswrapper[4720]: I1013 17:25:07.144869 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:07 crc kubenswrapper[4720]: I1013 17:25:07.144885 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:07 crc kubenswrapper[4720]: I1013 17:25:07.144908 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:07 crc kubenswrapper[4720]: I1013 17:25:07.144927 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:07Z","lastTransitionTime":"2025-10-13T17:25:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:07 crc kubenswrapper[4720]: I1013 17:25:07.167983 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 17:25:07 crc kubenswrapper[4720]: E1013 17:25:07.168102 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 13 17:25:07 crc kubenswrapper[4720]: I1013 17:25:07.168229 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 17:25:07 crc kubenswrapper[4720]: I1013 17:25:07.167983 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 17:25:07 crc kubenswrapper[4720]: E1013 17:25:07.168373 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 13 17:25:07 crc kubenswrapper[4720]: E1013 17:25:07.168938 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
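
Each "Node became not ready" setters.go:603 entry embeds the node's Ready condition as a JSON object. A self-contained Go sketch that decodes one occurrence, with the payload copied verbatim from the 17:25:07.144927 entry above; the struct is a local stand-in for the fields of a core/v1 NodeCondition, not the k8s.io/api type itself:

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // Local stand-in for the fields of a core/v1 NodeCondition.
    type nodeCondition struct {
    	Type               string `json:"type"`
    	Status             string `json:"status"`
    	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
    	LastTransitionTime string `json:"lastTransitionTime"`
    	Reason             string `json:"reason"`
    	Message            string `json:"message"`
    }

    func main() {
    	// Condition payload copied from the setters.go:603 line above.
    	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:07Z","lastTransitionTime":"2025-10-13T17:25:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`
    	var c nodeCondition
    	if err := json.Unmarshal([]byte(raw), &c); err != nil {
    		panic(err)
    	}
    	fmt.Printf("%s=%s (%s): %s\n", c.Type, c.Status, c.Reason, c.Message)
    }

Decoded, the condition reads Ready=False with reason KubeletNotReady, i.e. the node is NotReady solely because no CNI configuration exists yet, the same message repeated in every heartbeat below.
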
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 13 17:25:07 crc kubenswrapper[4720]: I1013 17:25:07.248614 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:07 crc kubenswrapper[4720]: I1013 17:25:07.248676 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:07 crc kubenswrapper[4720]: I1013 17:25:07.248694 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:07 crc kubenswrapper[4720]: I1013 17:25:07.248718 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:07 crc kubenswrapper[4720]: I1013 17:25:07.248735 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:07Z","lastTransitionTime":"2025-10-13T17:25:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:07 crc kubenswrapper[4720]: I1013 17:25:07.351889 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:07 crc kubenswrapper[4720]: I1013 17:25:07.351939 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:07 crc kubenswrapper[4720]: I1013 17:25:07.351955 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:07 crc kubenswrapper[4720]: I1013 17:25:07.351977 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:07 crc kubenswrapper[4720]: I1013 17:25:07.351994 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:07Z","lastTransitionTime":"2025-10-13T17:25:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:07 crc kubenswrapper[4720]: I1013 17:25:07.453955 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:07 crc kubenswrapper[4720]: I1013 17:25:07.454017 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:07 crc kubenswrapper[4720]: I1013 17:25:07.454040 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:07 crc kubenswrapper[4720]: I1013 17:25:07.454068 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:07 crc kubenswrapper[4720]: I1013 17:25:07.454090 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:07Z","lastTransitionTime":"2025-10-13T17:25:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:07 crc kubenswrapper[4720]: I1013 17:25:07.556560 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:07 crc kubenswrapper[4720]: I1013 17:25:07.556625 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:07 crc kubenswrapper[4720]: I1013 17:25:07.556648 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:07 crc kubenswrapper[4720]: I1013 17:25:07.556679 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:07 crc kubenswrapper[4720]: I1013 17:25:07.556702 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:07Z","lastTransitionTime":"2025-10-13T17:25:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:07 crc kubenswrapper[4720]: I1013 17:25:07.660038 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:07 crc kubenswrapper[4720]: I1013 17:25:07.660077 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:07 crc kubenswrapper[4720]: I1013 17:25:07.660087 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:07 crc kubenswrapper[4720]: I1013 17:25:07.660102 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:07 crc kubenswrapper[4720]: I1013 17:25:07.660113 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:07Z","lastTransitionTime":"2025-10-13T17:25:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:07 crc kubenswrapper[4720]: I1013 17:25:07.763260 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:07 crc kubenswrapper[4720]: I1013 17:25:07.763300 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:07 crc kubenswrapper[4720]: I1013 17:25:07.763310 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:07 crc kubenswrapper[4720]: I1013 17:25:07.763325 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:07 crc kubenswrapper[4720]: I1013 17:25:07.763336 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:07Z","lastTransitionTime":"2025-10-13T17:25:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:07 crc kubenswrapper[4720]: I1013 17:25:07.866441 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:07 crc kubenswrapper[4720]: I1013 17:25:07.866496 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:07 crc kubenswrapper[4720]: I1013 17:25:07.866512 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:07 crc kubenswrapper[4720]: I1013 17:25:07.866536 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:07 crc kubenswrapper[4720]: I1013 17:25:07.866554 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:07Z","lastTransitionTime":"2025-10-13T17:25:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:07 crc kubenswrapper[4720]: I1013 17:25:07.968838 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:07 crc kubenswrapper[4720]: I1013 17:25:07.968881 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:07 crc kubenswrapper[4720]: I1013 17:25:07.968898 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:07 crc kubenswrapper[4720]: I1013 17:25:07.968920 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:07 crc kubenswrapper[4720]: I1013 17:25:07.968936 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:07Z","lastTransitionTime":"2025-10-13T17:25:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:08 crc kubenswrapper[4720]: I1013 17:25:08.071180 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:08 crc kubenswrapper[4720]: I1013 17:25:08.071360 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:08 crc kubenswrapper[4720]: I1013 17:25:08.071380 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:08 crc kubenswrapper[4720]: I1013 17:25:08.071420 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:08 crc kubenswrapper[4720]: I1013 17:25:08.071437 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:08Z","lastTransitionTime":"2025-10-13T17:25:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:08 crc kubenswrapper[4720]: I1013 17:25:08.167840 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c6ntg" Oct 13 17:25:08 crc kubenswrapper[4720]: E1013 17:25:08.168036 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c6ntg" podUID="c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61" Oct 13 17:25:08 crc kubenswrapper[4720]: I1013 17:25:08.174335 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:08 crc kubenswrapper[4720]: I1013 17:25:08.174398 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:08 crc kubenswrapper[4720]: I1013 17:25:08.174410 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:08 crc kubenswrapper[4720]: I1013 17:25:08.174426 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:08 crc kubenswrapper[4720]: I1013 17:25:08.174441 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:08Z","lastTransitionTime":"2025-10-13T17:25:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:08 crc kubenswrapper[4720]: I1013 17:25:08.253520 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 13 17:25:08 crc kubenswrapper[4720]: I1013 17:25:08.268456 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Oct 13 17:25:08 crc kubenswrapper[4720]: I1013 17:25:08.272872 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce442c80-fcde-4b79-b6f9-f8f25771dfd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10e7a946185d6502e37a1ab60ad1eca36cce991f230188d22b2d6e2ea554f920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4cjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://864e330267c8cec8487e1dcf4a50bbb2ba37d81d80361f0873e6bf69de1ed721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4cjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\
\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-htwnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:08Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:08 crc kubenswrapper[4720]: I1013 17:25:08.276741 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:08 crc kubenswrapper[4720]: I1013 17:25:08.276803 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:08 crc kubenswrapper[4720]: I1013 17:25:08.276823 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:08 crc kubenswrapper[4720]: I1013 17:25:08.276846 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:08 crc kubenswrapper[4720]: I1013 17:25:08.276864 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:08Z","lastTransitionTime":"2025-10-13T17:25:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:08 crc kubenswrapper[4720]: I1013 17:25:08.298039 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5rgfd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d7408e5-b529-4396-92b3-2fed275c3250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://439dea1ed9724a215fa1aa1f4a4bd84a4bb998f2f86814f5f4eb46989242f11c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c62b3a707852ca08b9d89356832535447d4f46ec09ab4a94cb00a86b1c1c5b03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c62b3a707852ca08b9d89356832535447d4f46ec09ab4a94cb00a86b1c1c5b03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb704e899e1b5a0193271209940163a17a1dcb93a2fc5d447beb8a081799897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb704e899e1b5a0193271209940163a17a1dcb93a2fc5d447beb8a081799897b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f42ff97f0a5fb40b52d5445a00eb8f70f8028610b76baf33117b425b821b5ec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f42ff97f0a5fb40b52d5445a00eb8f70f8028610b76baf33117b425b821b5ec2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://471bb77042e72bcac4bb28dd26f6151a7154bea0b3ec92272e405e9b22bc0170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://471bb77042e72bcac4bb28dd26f6151a7154bea0b3ec92272e405e9b22bc0170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20cb8981843cc39e8722f560f7a7e75483c93a30edffde9835533f8dc5b50587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20cb8981843cc39e8722f560f7a7e75483c93a30edffde9835533f8dc5b50587\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79851ed8c84027d1bb9eccb69bed004ae1cdbe934245c66325eafe42b12ea808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79851ed8c84027d1bb9eccb69bed004ae1cdbe934245c66325eafe42b12ea808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5rgfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:08Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:08 crc kubenswrapper[4720]: I1013 17:25:08.310145 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c6ntg" err="failed to patch status 
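
Every status patch in this stretch fails the same way: the pod.network-node-identity.openshift.io webhook's serving certificate expired on 2025-08-24T17:21:41Z, and at 2025-10-13T17:25:08Z the TLS handshake to https://127.0.0.1:9743 rejects it. The check that fails is just a validity-window comparison against NotBefore/NotAfter. A hedged standalone sketch of that comparison; the file path is hypothetical, and in a real cluster this certificate is rotated by its operator rather than inspected from disk like this:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func main() {
    	// Hypothetical path, for illustration only.
    	pemBytes, err := os.ReadFile("webhook-serving.crt")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	block, _ := pem.Decode(pemBytes)
    	if block == nil {
    		fmt.Fprintln(os.Stderr, "no PEM data found")
    		os.Exit(1)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	now := time.Now().UTC()
    	switch {
    	case now.After(cert.NotAfter):
    		// Mirrors "certificate has expired": current time is after NotAfter.
    		fmt.Printf("expired: %s is after %s\n",
    			now.Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
    	case now.Before(cert.NotBefore):
    		fmt.Println("not yet valid")
    	default:
    		fmt.Println("within validity window")
    	}
    }

Until that certificate is renewed, every kubelet status patch routed through the webhook will keep failing with the identical x509 error, which is why the same message recurs for pod after pod below.
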
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxbvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxbvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c6ntg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:08Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:08 crc kubenswrapper[4720]: I1013 17:25:08.342312 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a1f1696-ed7f-4482-a300-ca25b68e09e3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2507a30017fbd9947ca456b84812f0b26baabf572768870f5a34b3c7699443cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc60672e282de4d3d6a2d8fd64306ec0fc0d032ade655f88afb0c0b9e0a8e589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9cf903094dae4e559bcef3e269240670b5d4693344d600779c394ce1e039b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06eea6556daedf913d87b8b0527b337070528b0
7cc026de878b4df332598a2d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b452d113d48729b6d4ab59f80138b15e7cb16cc16eef4ec9916901a067986d07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94a4c50dd473ea159477a3032070031c70d5a9217c054f826e3c0af6ae4df563\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94a4c50dd473ea159477a3032070031c70d5a9217c054f826e3c0af6ae4df563\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41221277cc046beb4578e4f6a0c7030df64342da1be667a920080a429836c657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41221277cc046beb4578e4f6a0c7030df64342da1be667a920080a429836c657\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d443225a5db6a200176bd16ba94dd92c1e81eef85d61cff623cfe7959077ef6\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d443225a5db6a200176bd16ba94dd92c1e81eef85d61cff623cfe7959077ef6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:08Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:08 crc kubenswrapper[4720]: I1013 17:25:08.358750 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:08Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:08 crc kubenswrapper[4720]: I1013 17:25:08.375538 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:08Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:08 crc kubenswrapper[4720]: I1013 17:25:08.379992 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:08 crc kubenswrapper[4720]: I1013 17:25:08.380037 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:08 crc kubenswrapper[4720]: I1013 17:25:08.380052 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:08 crc kubenswrapper[4720]: I1013 17:25:08.380075 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:08 crc kubenswrapper[4720]: I1013 17:25:08.380094 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:08Z","lastTransitionTime":"2025-10-13T17:25:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:08 crc kubenswrapper[4720]: I1013 17:25:08.389145 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dnjrc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e6efca3-dcc8-488e-8fb1-e14ee0396158\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0f539bcc67139c5b3d51ab01a3114afc20fc44cf5e286dce2b4cad0ea0629c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g676g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c792a043326901c5cbcae8c96b168aee71b56f6e91d0edfc7f58abb5da9c9c28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g676g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dnjrc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:08Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:08 crc kubenswrapper[4720]: I1013 17:25:08.408692 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea323a1a-c607-40cf-b61c-b7ff03399c45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88b8a46e2e2c2342bd22a4a9b42cadd0188ec83f23815773f6ffb46be79fdf83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7d875b0d7eb9f13bc697597a33f7196598ae679980b13842e92acf20477526f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e18d0e9df43d366a6e7088cc3d5bdc794882828f60f008eaf29e788e0141ac2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef2ba92cea782dd4fd31a511e1086777b58c00340014dff74ff6615b07bd10c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7977e20d8558810fb9f8f827830d0378de7d15c71340d396a95b506902c0962\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-13T17:24:34Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1013 17:24:28.736714 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1013 17:24:28.738546 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445610647/tls.crt::/tmp/serving-cert-445610647/tls.key\\\\\\\"\\\\nI1013 17:24:34.860768 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1013 17:24:34.864611 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1013 17:24:34.864632 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1013 17:24:34.864654 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1013 17:24:34.864660 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1013 17:24:34.871230 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1013 17:24:34.871248 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1013 17:24:34.871273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 17:24:34.871290 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 17:24:34.871305 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1013 17:24:34.871314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1013 17:24:34.871324 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1013 17:24:34.871335 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1013 17:24:34.872239 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd536f783e3f8f974f25772994c3d00a01c2ba66997a6e06236f1b83b890f8f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00ad2086cbe25df22e52552acfb410e252eb17a610267b47eb7cfc5dced1c9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00ad2086cbe25df22e52552acfb410e252eb17a610267b47eb7cfc5dced1c9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:08Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:08 crc kubenswrapper[4720]: I1013 17:25:08.429477 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lxmjt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b45ec2d-5bea-4007-a49f-224a866f93eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00868f87f1a3020a1db7e50377bbe90203d6e93799c8ec9f3100dd52a6c0cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xsrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lxmjt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:08Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:08 crc kubenswrapper[4720]: I1013 17:25:08.464154 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8064812e-b6aa-4f56-81c9-16154c00abad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c28fde07efa17d20ddecd1c8a0da09a6fa09e98a90b14718565c00aef9c0d25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bbd41d9b0518c8b060b1d01c335b6c9923e7196624fbd49326133faaa2e36ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd8a4dfb390a1643edb4eb4bd01970eaffd8dab0f748effac9e7c691ef4a3ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://013dfade47eccae4339fca0d379d914f63502613f87f1bc621113f7008634475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69d511a66a84fb0fa90f426bd4609c2b3ad915571802c6c024e0370d6e1683a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1aefa
46ee989416f0e4fd05f6f12031da637a354fadd7f6caef75ddc2c74ddf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://696c3d0d1105e26a509a7a8f91eb2ade836194b7de933f28b7a3d380b5964ee3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://696c3d0d1105e26a509a7a8f91eb2ade836194b7de933f28b7a3d380b5964ee3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-13T17:24:51Z\\\",\\\"message\\\":\\\"loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/catalog-operator-metrics\\\\\\\"}\\\\nI1013 17:24:51.349805 6169 services_controller.go:454] Service openshift-machine-config-operator/machine-config-daemon for network=default has 2 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI1013 17:24:51.349817 6169 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1013 17:24:51.349829 6169 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-authentication-operator/metrics\\\\\\\"}\\\\nI1013 17:24:51.349839 6169 services_controller.go:360] Finished syncing service metrics on namespace openshift-authentication-operator for network=default : 675.638µs\\\\nI1013 17:24:51.349848 6169 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nF1013 17:24:51.349740 6169 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller 
initializatio\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-pn6lz_openshift-ovn-kubernetes(8064812e-b6aa-4f56-81c9-16154c00abad)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34b1fffbebfbd3599bed22fe3fe80129f5b704560c54d09e9ab5e9ea8bba4513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:08Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:08 crc kubenswrapper[4720]: I1013 17:25:08.479213 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bvtrx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b35c333b-0d4e-4d9a-a8fe-a1c6f5fbbf75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03411ae3786161605ed0cb08a21661c7c7a4cc93f91320586547b8ffab3e90ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g97k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bvtrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:08Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:08 crc kubenswrapper[4720]: I1013 17:25:08.483482 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:08 crc kubenswrapper[4720]: I1013 17:25:08.483529 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:08 crc kubenswrapper[4720]: I1013 17:25:08.483539 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:08 crc kubenswrapper[4720]: I1013 17:25:08.483555 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:08 crc kubenswrapper[4720]: I1013 17:25:08.483568 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:08Z","lastTransitionTime":"2025-10-13T17:25:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:08 crc kubenswrapper[4720]: I1013 17:25:08.500719 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f856022da70859465022db433585bd8cad12f8cd2aeae7943491d3f7e5049256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:08Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:08 crc kubenswrapper[4720]: I1013 17:25:08.517549 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pmjlm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f85220d6-f940-4538-806a-2b26bacd7b09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6315723f6cd375a392c8c427a737021b7e01415c902514b09ee43407749caa34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7795z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:39Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pmjlm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:08Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:08 crc kubenswrapper[4720]: I1013 17:25:08.537221 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7baf94d8-250c-4568-ab1c-a1429b8caf70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435ba16189d6d19892aaab158512c1a84a72103779e3cc4b2c634de344583c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e566f3b66d404dd8da93bba1745b9e67cd63532330591f5b735f6826c07eae2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ec64662f56c2609ebac1383a78840ba4e0d9fb1fe10e666169285d3852e6b40\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://32d8f079c2674e0c6bea1837755db23bc3205da6a01ac1896d6293226d9c6fb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:08Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:08 crc kubenswrapper[4720]: I1013 17:25:08.560170 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:08Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:08 crc kubenswrapper[4720]: I1013 17:25:08.576793 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://705766b44d481300d0fd8cc654aa2dc9046733e5a71ad3a7ab789f85e737dcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a69b8825692cbbb7cce0a52b1ccb7b137ce6c9f8a8d788fff95bb7228cb40e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:08Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:08 crc kubenswrapper[4720]: I1013 17:25:08.586343 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:08 crc kubenswrapper[4720]: I1013 17:25:08.586390 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:08 crc kubenswrapper[4720]: I1013 17:25:08.586400 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:08 crc kubenswrapper[4720]: I1013 17:25:08.586417 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:08 crc kubenswrapper[4720]: I1013 17:25:08.586429 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:08Z","lastTransitionTime":"2025-10-13T17:25:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:08 crc kubenswrapper[4720]: I1013 17:25:08.594880 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f85e4aab30e7f49ef05e48cc22fc22e0291e32604a8f74ae1cc2ea57b5889e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:08Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:08 crc kubenswrapper[4720]: I1013 17:25:08.688627 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:08 crc kubenswrapper[4720]: I1013 17:25:08.688694 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:08 crc kubenswrapper[4720]: I1013 17:25:08.688717 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:08 crc kubenswrapper[4720]: I1013 17:25:08.688744 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:08 crc kubenswrapper[4720]: I1013 17:25:08.688824 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:08Z","lastTransitionTime":"2025-10-13T17:25:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:08 crc kubenswrapper[4720]: I1013 17:25:08.791495 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:08 crc kubenswrapper[4720]: I1013 17:25:08.791542 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:08 crc kubenswrapper[4720]: I1013 17:25:08.791552 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:08 crc kubenswrapper[4720]: I1013 17:25:08.791572 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:08 crc kubenswrapper[4720]: I1013 17:25:08.791583 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:08Z","lastTransitionTime":"2025-10-13T17:25:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:08 crc kubenswrapper[4720]: I1013 17:25:08.894973 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:08 crc kubenswrapper[4720]: I1013 17:25:08.895071 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:08 crc kubenswrapper[4720]: I1013 17:25:08.895093 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:08 crc kubenswrapper[4720]: I1013 17:25:08.895535 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:08 crc kubenswrapper[4720]: I1013 17:25:08.895797 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:08Z","lastTransitionTime":"2025-10-13T17:25:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:08 crc kubenswrapper[4720]: I1013 17:25:08.999670 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:08 crc kubenswrapper[4720]: I1013 17:25:08.999736 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:08 crc kubenswrapper[4720]: I1013 17:25:08.999757 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:08 crc kubenswrapper[4720]: I1013 17:25:08.999786 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:09 crc kubenswrapper[4720]: I1013 17:25:08.999809 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:08Z","lastTransitionTime":"2025-10-13T17:25:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:09 crc kubenswrapper[4720]: I1013 17:25:09.102502 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:09 crc kubenswrapper[4720]: I1013 17:25:09.102544 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:09 crc kubenswrapper[4720]: I1013 17:25:09.102555 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:09 crc kubenswrapper[4720]: I1013 17:25:09.102571 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:09 crc kubenswrapper[4720]: I1013 17:25:09.102580 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:09Z","lastTransitionTime":"2025-10-13T17:25:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:09 crc kubenswrapper[4720]: I1013 17:25:09.167680 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 17:25:09 crc kubenswrapper[4720]: I1013 17:25:09.167793 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 17:25:09 crc kubenswrapper[4720]: E1013 17:25:09.167884 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 13 17:25:09 crc kubenswrapper[4720]: I1013 17:25:09.167940 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 17:25:09 crc kubenswrapper[4720]: E1013 17:25:09.168099 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 13 17:25:09 crc kubenswrapper[4720]: E1013 17:25:09.168308 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 13 17:25:09 crc kubenswrapper[4720]: I1013 17:25:09.205431 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:09 crc kubenswrapper[4720]: I1013 17:25:09.205491 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:09 crc kubenswrapper[4720]: I1013 17:25:09.205508 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:09 crc kubenswrapper[4720]: I1013 17:25:09.205534 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:09 crc kubenswrapper[4720]: I1013 17:25:09.205550 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:09Z","lastTransitionTime":"2025-10-13T17:25:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:09 crc kubenswrapper[4720]: I1013 17:25:09.308750 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:09 crc kubenswrapper[4720]: I1013 17:25:09.308793 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:09 crc kubenswrapper[4720]: I1013 17:25:09.308807 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:09 crc kubenswrapper[4720]: I1013 17:25:09.308822 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:09 crc kubenswrapper[4720]: I1013 17:25:09.308834 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:09Z","lastTransitionTime":"2025-10-13T17:25:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:09 crc kubenswrapper[4720]: I1013 17:25:09.412729 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:09 crc kubenswrapper[4720]: I1013 17:25:09.412781 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:09 crc kubenswrapper[4720]: I1013 17:25:09.412797 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:09 crc kubenswrapper[4720]: I1013 17:25:09.412820 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:09 crc kubenswrapper[4720]: I1013 17:25:09.412837 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:09Z","lastTransitionTime":"2025-10-13T17:25:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:09 crc kubenswrapper[4720]: I1013 17:25:09.515363 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:09 crc kubenswrapper[4720]: I1013 17:25:09.515437 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:09 crc kubenswrapper[4720]: I1013 17:25:09.515459 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:09 crc kubenswrapper[4720]: I1013 17:25:09.515488 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:09 crc kubenswrapper[4720]: I1013 17:25:09.515511 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:09Z","lastTransitionTime":"2025-10-13T17:25:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:09 crc kubenswrapper[4720]: I1013 17:25:09.618947 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:09 crc kubenswrapper[4720]: I1013 17:25:09.619032 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:09 crc kubenswrapper[4720]: I1013 17:25:09.619055 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:09 crc kubenswrapper[4720]: I1013 17:25:09.619087 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:09 crc kubenswrapper[4720]: I1013 17:25:09.619113 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:09Z","lastTransitionTime":"2025-10-13T17:25:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:09 crc kubenswrapper[4720]: I1013 17:25:09.723004 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:09 crc kubenswrapper[4720]: I1013 17:25:09.723056 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:09 crc kubenswrapper[4720]: I1013 17:25:09.723065 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:09 crc kubenswrapper[4720]: I1013 17:25:09.723081 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:09 crc kubenswrapper[4720]: I1013 17:25:09.723091 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:09Z","lastTransitionTime":"2025-10-13T17:25:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:09 crc kubenswrapper[4720]: I1013 17:25:09.825158 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:09 crc kubenswrapper[4720]: I1013 17:25:09.825222 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:09 crc kubenswrapper[4720]: I1013 17:25:09.825236 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:09 crc kubenswrapper[4720]: I1013 17:25:09.825252 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:09 crc kubenswrapper[4720]: I1013 17:25:09.825265 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:09Z","lastTransitionTime":"2025-10-13T17:25:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:09 crc kubenswrapper[4720]: I1013 17:25:09.928309 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:09 crc kubenswrapper[4720]: I1013 17:25:09.928367 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:09 crc kubenswrapper[4720]: I1013 17:25:09.928387 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:09 crc kubenswrapper[4720]: I1013 17:25:09.928411 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:09 crc kubenswrapper[4720]: I1013 17:25:09.928427 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:09Z","lastTransitionTime":"2025-10-13T17:25:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:10 crc kubenswrapper[4720]: I1013 17:25:10.031964 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:10 crc kubenswrapper[4720]: I1013 17:25:10.032513 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:10 crc kubenswrapper[4720]: I1013 17:25:10.032613 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:10 crc kubenswrapper[4720]: I1013 17:25:10.032705 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:10 crc kubenswrapper[4720]: I1013 17:25:10.032788 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:10Z","lastTransitionTime":"2025-10-13T17:25:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:10 crc kubenswrapper[4720]: I1013 17:25:10.135726 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:10 crc kubenswrapper[4720]: I1013 17:25:10.136233 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:10 crc kubenswrapper[4720]: I1013 17:25:10.136493 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:10 crc kubenswrapper[4720]: I1013 17:25:10.136657 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:10 crc kubenswrapper[4720]: I1013 17:25:10.136799 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:10Z","lastTransitionTime":"2025-10-13T17:25:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:10 crc kubenswrapper[4720]: I1013 17:25:10.162134 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61-metrics-certs\") pod \"network-metrics-daemon-c6ntg\" (UID: \"c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61\") " pod="openshift-multus/network-metrics-daemon-c6ntg" Oct 13 17:25:10 crc kubenswrapper[4720]: E1013 17:25:10.162355 4720 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 13 17:25:10 crc kubenswrapper[4720]: E1013 17:25:10.162436 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61-metrics-certs podName:c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61 nodeName:}" failed. No retries permitted until 2025-10-13 17:25:26.162420537 +0000 UTC m=+71.619670669 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61-metrics-certs") pod "network-metrics-daemon-c6ntg" (UID: "c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 13 17:25:10 crc kubenswrapper[4720]: I1013 17:25:10.168147 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c6ntg" Oct 13 17:25:10 crc kubenswrapper[4720]: E1013 17:25:10.168589 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c6ntg" podUID="c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61" Oct 13 17:25:10 crc kubenswrapper[4720]: I1013 17:25:10.239385 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:10 crc kubenswrapper[4720]: I1013 17:25:10.239445 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:10 crc kubenswrapper[4720]: I1013 17:25:10.239463 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:10 crc kubenswrapper[4720]: I1013 17:25:10.239489 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:10 crc kubenswrapper[4720]: I1013 17:25:10.239512 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:10Z","lastTransitionTime":"2025-10-13T17:25:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:10 crc kubenswrapper[4720]: I1013 17:25:10.341676 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:10 crc kubenswrapper[4720]: I1013 17:25:10.341735 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:10 crc kubenswrapper[4720]: I1013 17:25:10.341753 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:10 crc kubenswrapper[4720]: I1013 17:25:10.341777 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:10 crc kubenswrapper[4720]: I1013 17:25:10.341793 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:10Z","lastTransitionTime":"2025-10-13T17:25:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:10 crc kubenswrapper[4720]: I1013 17:25:10.444507 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:10 crc kubenswrapper[4720]: I1013 17:25:10.444586 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:10 crc kubenswrapper[4720]: I1013 17:25:10.444603 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:10 crc kubenswrapper[4720]: I1013 17:25:10.444630 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:10 crc kubenswrapper[4720]: I1013 17:25:10.444647 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:10Z","lastTransitionTime":"2025-10-13T17:25:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:10 crc kubenswrapper[4720]: I1013 17:25:10.546922 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:10 crc kubenswrapper[4720]: I1013 17:25:10.546977 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:10 crc kubenswrapper[4720]: I1013 17:25:10.546994 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:10 crc kubenswrapper[4720]: I1013 17:25:10.547016 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:10 crc kubenswrapper[4720]: I1013 17:25:10.547032 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:10Z","lastTransitionTime":"2025-10-13T17:25:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:10 crc kubenswrapper[4720]: I1013 17:25:10.650250 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:10 crc kubenswrapper[4720]: I1013 17:25:10.650285 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:10 crc kubenswrapper[4720]: I1013 17:25:10.650293 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:10 crc kubenswrapper[4720]: I1013 17:25:10.650307 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:10 crc kubenswrapper[4720]: I1013 17:25:10.650316 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:10Z","lastTransitionTime":"2025-10-13T17:25:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:10 crc kubenswrapper[4720]: I1013 17:25:10.753855 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:10 crc kubenswrapper[4720]: I1013 17:25:10.753921 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:10 crc kubenswrapper[4720]: I1013 17:25:10.753936 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:10 crc kubenswrapper[4720]: I1013 17:25:10.753960 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:10 crc kubenswrapper[4720]: I1013 17:25:10.753980 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:10Z","lastTransitionTime":"2025-10-13T17:25:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:10 crc kubenswrapper[4720]: I1013 17:25:10.857167 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:10 crc kubenswrapper[4720]: I1013 17:25:10.857251 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:10 crc kubenswrapper[4720]: I1013 17:25:10.857268 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:10 crc kubenswrapper[4720]: I1013 17:25:10.857291 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:10 crc kubenswrapper[4720]: I1013 17:25:10.857309 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:10Z","lastTransitionTime":"2025-10-13T17:25:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:10 crc kubenswrapper[4720]: I1013 17:25:10.960302 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:10 crc kubenswrapper[4720]: I1013 17:25:10.960357 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:10 crc kubenswrapper[4720]: I1013 17:25:10.960376 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:10 crc kubenswrapper[4720]: I1013 17:25:10.960402 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:10 crc kubenswrapper[4720]: I1013 17:25:10.960419 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:10Z","lastTransitionTime":"2025-10-13T17:25:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:11 crc kubenswrapper[4720]: I1013 17:25:11.063688 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:11 crc kubenswrapper[4720]: I1013 17:25:11.063729 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:11 crc kubenswrapper[4720]: I1013 17:25:11.063739 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:11 crc kubenswrapper[4720]: I1013 17:25:11.063755 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:11 crc kubenswrapper[4720]: I1013 17:25:11.064423 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:11Z","lastTransitionTime":"2025-10-13T17:25:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:11 crc kubenswrapper[4720]: I1013 17:25:11.167255 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 17:25:11 crc kubenswrapper[4720]: I1013 17:25:11.167300 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 17:25:11 crc kubenswrapper[4720]: I1013 17:25:11.167375 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 17:25:11 crc kubenswrapper[4720]: I1013 17:25:11.167429 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:11 crc kubenswrapper[4720]: I1013 17:25:11.167448 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:11 crc kubenswrapper[4720]: E1013 17:25:11.167421 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 13 17:25:11 crc kubenswrapper[4720]: I1013 17:25:11.167457 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:11 crc kubenswrapper[4720]: E1013 17:25:11.167516 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 13 17:25:11 crc kubenswrapper[4720]: I1013 17:25:11.167566 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:11 crc kubenswrapper[4720]: I1013 17:25:11.167602 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:11Z","lastTransitionTime":"2025-10-13T17:25:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:11 crc kubenswrapper[4720]: E1013 17:25:11.167640 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 13 17:25:11 crc kubenswrapper[4720]: I1013 17:25:11.270644 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:11 crc kubenswrapper[4720]: I1013 17:25:11.270902 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:11 crc kubenswrapper[4720]: I1013 17:25:11.271057 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:11 crc kubenswrapper[4720]: I1013 17:25:11.271281 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:11 crc kubenswrapper[4720]: I1013 17:25:11.271444 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:11Z","lastTransitionTime":"2025-10-13T17:25:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:11 crc kubenswrapper[4720]: I1013 17:25:11.374883 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:11 crc kubenswrapper[4720]: I1013 17:25:11.374926 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:11 crc kubenswrapper[4720]: I1013 17:25:11.374935 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:11 crc kubenswrapper[4720]: I1013 17:25:11.374949 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:11 crc kubenswrapper[4720]: I1013 17:25:11.374959 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:11Z","lastTransitionTime":"2025-10-13T17:25:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:11 crc kubenswrapper[4720]: I1013 17:25:11.478115 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:11 crc kubenswrapper[4720]: I1013 17:25:11.478172 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:11 crc kubenswrapper[4720]: I1013 17:25:11.478218 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:11 crc kubenswrapper[4720]: I1013 17:25:11.478242 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:11 crc kubenswrapper[4720]: I1013 17:25:11.478260 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:11Z","lastTransitionTime":"2025-10-13T17:25:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:11 crc kubenswrapper[4720]: I1013 17:25:11.581318 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:11 crc kubenswrapper[4720]: I1013 17:25:11.581613 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:11 crc kubenswrapper[4720]: I1013 17:25:11.581625 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:11 crc kubenswrapper[4720]: I1013 17:25:11.581643 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:11 crc kubenswrapper[4720]: I1013 17:25:11.581661 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:11Z","lastTransitionTime":"2025-10-13T17:25:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:11 crc kubenswrapper[4720]: I1013 17:25:11.684301 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:11 crc kubenswrapper[4720]: I1013 17:25:11.684363 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:11 crc kubenswrapper[4720]: I1013 17:25:11.684381 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:11 crc kubenswrapper[4720]: I1013 17:25:11.684409 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:11 crc kubenswrapper[4720]: I1013 17:25:11.684425 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:11Z","lastTransitionTime":"2025-10-13T17:25:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:11 crc kubenswrapper[4720]: I1013 17:25:11.787550 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:11 crc kubenswrapper[4720]: I1013 17:25:11.787605 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:11 crc kubenswrapper[4720]: I1013 17:25:11.787615 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:11 crc kubenswrapper[4720]: I1013 17:25:11.787633 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:11 crc kubenswrapper[4720]: I1013 17:25:11.787645 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:11Z","lastTransitionTime":"2025-10-13T17:25:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:11 crc kubenswrapper[4720]: I1013 17:25:11.890008 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:11 crc kubenswrapper[4720]: I1013 17:25:11.890278 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:11 crc kubenswrapper[4720]: I1013 17:25:11.890362 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:11 crc kubenswrapper[4720]: I1013 17:25:11.890430 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:11 crc kubenswrapper[4720]: I1013 17:25:11.890494 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:11Z","lastTransitionTime":"2025-10-13T17:25:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:11 crc kubenswrapper[4720]: I1013 17:25:11.993560 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:11 crc kubenswrapper[4720]: I1013 17:25:11.993603 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:11 crc kubenswrapper[4720]: I1013 17:25:11.993612 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:11 crc kubenswrapper[4720]: I1013 17:25:11.993625 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:11 crc kubenswrapper[4720]: I1013 17:25:11.993633 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:11Z","lastTransitionTime":"2025-10-13T17:25:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:12 crc kubenswrapper[4720]: I1013 17:25:12.078818 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:12 crc kubenswrapper[4720]: I1013 17:25:12.078863 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:12 crc kubenswrapper[4720]: I1013 17:25:12.078875 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:12 crc kubenswrapper[4720]: I1013 17:25:12.078891 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:12 crc kubenswrapper[4720]: I1013 17:25:12.078903 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:12Z","lastTransitionTime":"2025-10-13T17:25:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:12 crc kubenswrapper[4720]: E1013 17:25:12.090647 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a530de0f-daad-4050-9522-69c64a451e75\\\",\\\"systemUUID\\\":\\\"00bb7c43-79d6-45b5-bd02-4b71a0ba6837\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:12Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:12 crc kubenswrapper[4720]: I1013 17:25:12.095339 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:12 crc kubenswrapper[4720]: I1013 17:25:12.095375 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 13 17:25:12 crc kubenswrapper[4720]: I1013 17:25:12.095386 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:12 crc kubenswrapper[4720]: I1013 17:25:12.095401 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:12 crc kubenswrapper[4720]: I1013 17:25:12.095411 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:12Z","lastTransitionTime":"2025-10-13T17:25:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:12 crc kubenswrapper[4720]: E1013 17:25:12.106807 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a530de0f-daad-4050-9522-69c64a451e75\\\",\\\"systemUUID\\\":\\\"00bb7c43-79d6-45b5-bd02-4b71a0ba6837\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:12Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:12 crc kubenswrapper[4720]: I1013 17:25:12.110072 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:12 crc kubenswrapper[4720]: I1013 17:25:12.110097 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 13 17:25:12 crc kubenswrapper[4720]: I1013 17:25:12.110105 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:12 crc kubenswrapper[4720]: I1013 17:25:12.110119 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:12 crc kubenswrapper[4720]: I1013 17:25:12.110127 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:12Z","lastTransitionTime":"2025-10-13T17:25:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:12 crc kubenswrapper[4720]: E1013 17:25:12.122184 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a530de0f-daad-4050-9522-69c64a451e75\\\",\\\"systemUUID\\\":\\\"00bb7c43-79d6-45b5-bd02-4b71a0ba6837\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:12Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:12 crc kubenswrapper[4720]: I1013 17:25:12.126546 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:12 crc kubenswrapper[4720]: I1013 17:25:12.126650 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
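Every one of the node-status patch attempts above fails the same way: the kubelet cannot call the node.network-node-identity.openshift.io webhook because the serving certificate at https://127.0.0.1:9743 expired on 2025-08-24T17:21:41Z, long before the node's current time of 2025-10-13T17:25:12Z. A minimal diagnostic sketch (not part of the log; it assumes it runs on the node, that the endpoint completes a TLS handshake without requiring a client certificate, and that the third-party cryptography package is installed) to read the validity dates off the certificate the webhook presents:

    import ssl
    from datetime import datetime, timezone

    from cryptography import x509  # third-party: pip install cryptography

    HOST, PORT = "127.0.0.1", 9743  # webhook endpoint named in the errors above

    # ssl.get_server_certificate performs no chain validation by default,
    # so it still returns the PEM even though the certificate is expired.
    pem = ssl.get_server_certificate((HOST, PORT))
    cert = x509.load_der_x509_certificate(ssl.PEM_cert_to_DER_cert(pem))

    not_after = cert.not_valid_after.replace(tzinfo=timezone.utc)
    print("subject: ", cert.subject.rfc4514_string())
    print("notAfter:", not_after.isoformat())
    print("expired: ", not_after < datetime.now(timezone.utc))

The same two dates can also be read with openssl s_client -connect 127.0.0.1:9743 piped into openssl x509 -noout -dates.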
event="NodeHasNoDiskPressure" Oct 13 17:25:12 crc kubenswrapper[4720]: I1013 17:25:12.126673 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:12 crc kubenswrapper[4720]: I1013 17:25:12.126700 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:12 crc kubenswrapper[4720]: I1013 17:25:12.126722 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:12Z","lastTransitionTime":"2025-10-13T17:25:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:12 crc kubenswrapper[4720]: I1013 17:25:12.168184 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c6ntg" Oct 13 17:25:12 crc kubenswrapper[4720]: E1013 17:25:12.168416 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c6ntg" podUID="c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61" Oct 13 17:25:12 crc kubenswrapper[4720]: I1013 17:25:12.169491 4720 scope.go:117] "RemoveContainer" containerID="696c3d0d1105e26a509a7a8f91eb2ade836194b7de933f28b7a3d380b5964ee3" Oct 13 17:25:12 crc kubenswrapper[4720]: E1013 17:25:12.173657 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a530de0f-daad-4050-9522-69c64a451e75\\\",\\\"systemUUID\\\":\\\"00bb7c43-79d6-45b5-bd02-4b71a0ba6837\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:12Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:12 crc kubenswrapper[4720]: I1013 17:25:12.181813 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:12 crc kubenswrapper[4720]: I1013 17:25:12.181863 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
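Interleaved with the webhook failures, the kubelet keeps flipping the node to NotReady because the container runtime reports NetworkReady=false: no CNI configuration file exists in /etc/kubernetes/cni/net.d/ yet, and on this cluster it is ovn-kubernetes (the ovnkube-node-pn6lz pod seen starting further down) that writes it. A small sketch (hypothetical, not from the log; assumes it runs on the node) that checks the directory the message names for the *.conf/*.conflist/*.json files the runtime watches for:

    from pathlib import Path

    # Directory named in the NetworkReady=false message above.
    cni_dir = Path("/etc/kubernetes/cni/net.d")

    found = sorted(
        p for pattern in ("*.conf", "*.conflist", "*.json") for p in cni_dir.glob(pattern)
    ) if cni_dir.is_dir() else []

    if found:
        for path in found:
            print("CNI config present:", path)
    else:
        print(f"no CNI configuration file in {cni_dir} - network plugin has not written one yet")

Once ovnkube-node gets far enough to drop its config there, NetworkReady should flip to true and the NodeNotReady condition above should clear on the next sync.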
event="NodeHasNoDiskPressure" Oct 13 17:25:12 crc kubenswrapper[4720]: I1013 17:25:12.181876 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:12 crc kubenswrapper[4720]: I1013 17:25:12.181899 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:12 crc kubenswrapper[4720]: I1013 17:25:12.181914 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:12Z","lastTransitionTime":"2025-10-13T17:25:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:12 crc kubenswrapper[4720]: E1013 17:25:12.214527 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a530de0f-daad-4050-9522-69c64a451e75\\\",\\\"systemUUID\\\":\\\"00bb7c43-79d6-45b5-bd02-4b71a0ba6837\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:12Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:12 crc kubenswrapper[4720]: E1013 17:25:12.215288 4720 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 13 17:25:12 crc kubenswrapper[4720]: I1013 17:25:12.218322 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Oct 13 17:25:12 crc kubenswrapper[4720]: I1013 17:25:12.218401 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:12 crc kubenswrapper[4720]: I1013 17:25:12.218422 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:12 crc kubenswrapper[4720]: I1013 17:25:12.218444 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:12 crc kubenswrapper[4720]: I1013 17:25:12.218461 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:12Z","lastTransitionTime":"2025-10-13T17:25:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:12 crc kubenswrapper[4720]: I1013 17:25:12.320594 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:12 crc kubenswrapper[4720]: I1013 17:25:12.320625 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:12 crc kubenswrapper[4720]: I1013 17:25:12.320637 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:12 crc kubenswrapper[4720]: I1013 17:25:12.320653 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:12 crc kubenswrapper[4720]: I1013 17:25:12.320664 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:12Z","lastTransitionTime":"2025-10-13T17:25:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:12 crc kubenswrapper[4720]: I1013 17:25:12.422476 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:12 crc kubenswrapper[4720]: I1013 17:25:12.422512 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:12 crc kubenswrapper[4720]: I1013 17:25:12.422520 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:12 crc kubenswrapper[4720]: I1013 17:25:12.422533 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:12 crc kubenswrapper[4720]: I1013 17:25:12.422544 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:12Z","lastTransitionTime":"2025-10-13T17:25:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Oct 13 17:25:12 crc kubenswrapper[4720]: I1013 17:25:12.522012 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pn6lz_8064812e-b6aa-4f56-81c9-16154c00abad/ovnkube-controller/1.log" Oct 13 17:25:12 crc kubenswrapper[4720]: I1013 17:25:12.524606 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:12 crc kubenswrapper[4720]: I1013 17:25:12.524632 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:12 crc kubenswrapper[4720]: I1013 17:25:12.524639 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:12 crc kubenswrapper[4720]: I1013 17:25:12.524651 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:12 crc kubenswrapper[4720]: I1013 17:25:12.524659 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:12Z","lastTransitionTime":"2025-10-13T17:25:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:12 crc kubenswrapper[4720]: I1013 17:25:12.524714 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" event={"ID":"8064812e-b6aa-4f56-81c9-16154c00abad","Type":"ContainerStarted","Data":"9825f520cb9cff0f39359c2b7f6d8ef1e3163ec1c94f80c037ea4afb4300c5d3"} Oct 13 17:25:12 crc kubenswrapper[4720]: I1013 17:25:12.526371 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" Oct 13 17:25:12 crc kubenswrapper[4720]: I1013 17:25:12.601048 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd8dd666-aa42-4598-bb52-c0cd9345384d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e96fe80c0a88f19c6c5705c7fd945c3282a104b7767db5c217b6364c045d649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f9d9053346b85f12f6cf781f1699dbf8aea670b7e1cc5f4fdc1ffac2c969712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d586c8b48f5d2ca87e3a758dd265611f11678f6fa8dd1118a859923c331c4f67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19c59ec68cb6306af37b90cd4583f5a1a266e93f77566d756223eecf7cdadb76\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19c59ec68cb6306af37b90cd4583f5a1a266e93f77566d756223eecf7cdadb76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:12Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:12 crc kubenswrapper[4720]: I1013 17:25:12.615729 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:12Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:12 crc kubenswrapper[4720]: I1013 17:25:12.627630 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:12 crc kubenswrapper[4720]: I1013 17:25:12.627846 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:12 crc kubenswrapper[4720]: I1013 17:25:12.627917 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:12 crc kubenswrapper[4720]: I1013 17:25:12.628005 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:12 crc kubenswrapper[4720]: I1013 17:25:12.628081 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:12Z","lastTransitionTime":"2025-10-13T17:25:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:12 crc kubenswrapper[4720]: I1013 17:25:12.629510 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://705766b44d481300d0fd8cc654aa2dc9046733e5a71ad3a7ab789f85e737dcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a69b8825692cbbb7cce0a52b1ccb7b137ce6c9f8a8d788fff95bb7228cb40e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:12Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:12 crc kubenswrapper[4720]: I1013 17:25:12.647708 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f85e4aab30e7f49ef05e48cc22fc22e0291e32604a8f74ae1cc2ea57b5889e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:12Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:12 crc kubenswrapper[4720]: I1013 17:25:12.662203 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f856022da70859465022db433585bd8cad12f8cd2aeae7943491d3f7e5049256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:12Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:12 crc kubenswrapper[4720]: I1013 17:25:12.684770 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pmjlm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f85220d6-f940-4538-806a-2b26bacd7b09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6315723f6cd375a392c8c427a737021b7e01415c902514b09ee43407749caa34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7795z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:39Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pmjlm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:12Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:12 crc kubenswrapper[4720]: I1013 17:25:12.699278 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7baf94d8-250c-4568-ab1c-a1429b8caf70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435ba16189d6d19892aaab158512c1a84a72103779e3cc4b2c634de344583c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e566f3b66d404dd8da93bba1745b9e67cd63532330591f5b735f6826c07eae2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ec64662f56c2609ebac1383a78840ba4e0d9fb1fe10e666169285d3852e6b40\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://32d8f079c2674e0c6bea1837755db23bc3205da6a01ac1896d6293226d9c6fb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:12Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:12 crc kubenswrapper[4720]: I1013 17:25:12.721146 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5rgfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d7408e5-b529-4396-92b3-2fed275c3250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://439dea1ed9724a215fa1aa1f4a4bd84a4bb998f2f86814f5f4eb46989242f11c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c62b3a707852ca08b9d89356832535447d4f46ec09ab4a94cb00
a86b1c1c5b03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c62b3a707852ca08b9d89356832535447d4f46ec09ab4a94cb00a86b1c1c5b03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb704e899e1b5a0193271209940163a17a1dcb93a2fc5d447beb8a081799897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb704e899e1b5a0193271209940163a17a1dcb93a2fc5d447beb8a081799897b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f42ff97f0a5fb40b52d5445a00eb8f70f8028610b76baf33117b425b821b5ec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f42ff97f0a5fb40b52d5445a00eb8f70f8028610b76baf33117b425b821b5ec2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-b
inary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://471bb77042e72bcac4bb28dd26f6151a7154bea0b3ec92272e405e9b22bc0170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://471bb77042e72bcac4bb28dd26f6151a7154bea0b3ec92272e405e9b22bc0170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20cb8981843cc39e8722f560f7a7e75483c93a30edffde9835533f8dc5b50587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20cb8981843cc39e8722f560f7a7e75483c93a30edffde9835533f8dc5b50587\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79851ed8c84027d1bb9eccb69bed004ae1cdbe934245c66325eafe42b12ea808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://79851ed8c84027d1bb9eccb69bed004ae1cdbe934245c66325eafe42b12ea808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5rgfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:12Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:12 crc kubenswrapper[4720]: I1013 17:25:12.731100 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:12 crc kubenswrapper[4720]: I1013 17:25:12.731302 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:12 crc kubenswrapper[4720]: I1013 17:25:12.731387 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:12 crc kubenswrapper[4720]: I1013 17:25:12.731468 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:12 crc kubenswrapper[4720]: I1013 17:25:12.731547 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:12Z","lastTransitionTime":"2025-10-13T17:25:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:12 crc kubenswrapper[4720]: I1013 17:25:12.737487 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c6ntg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxbvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxbvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c6ntg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:12Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:12 crc kubenswrapper[4720]: I1013 17:25:12.747408 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce442c80-fcde-4b79-b6f9-f8f25771dfd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10e7a946185d6502e37a1ab60ad1eca36cce991f230188d22b2d6e2ea554f920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4cjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://864e330267c8cec8487e1dcf4a50bbb2ba37d81d80361f0873e6bf69de1ed721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4cjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-htwnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:12Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:12 crc kubenswrapper[4720]: I1013 17:25:12.757603 4720 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:12Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:12 crc kubenswrapper[4720]: I1013 17:25:12.769097 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:12Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:12 crc kubenswrapper[4720]: I1013 17:25:12.780942 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dnjrc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e6efca3-dcc8-488e-8fb1-e14ee0396158\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0f539bcc67139c5b3d51ab01a3114afc20fc44cf5e286dce2b4cad0ea0629c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g676g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c792a043326901c5cbcae8c96b168aee71b56f6e91d0edfc7f58abb5da9c9c28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g676g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dnjrc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:12Z is after 2025-08-24T17:21:41Z" Oct 13 
17:25:12 crc kubenswrapper[4720]: I1013 17:25:12.800874 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a1f1696-ed7f-4482-a300-ca25b68e09e3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2507a30017fbd9947ca456b84812f0b26baabf572768870f5a34b3c7699443cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc60672e282de4d3d6a2d8fd64306ec0fc0d032ade655f88afb0c0b9e0a8e589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9cf903094dae4e559bcef3e269240670b5d4693344d600779c394ce1e039b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06eea6556daedf913d87b8b0527b337070528b07cc026de878b4df332598a2d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b452d113d48729b6d4ab59f80138b15e7cb16cc16eef4ec9916901a067986d07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94a4c50dd473ea159477a3032070031c70d5a9217c054f826e3c0af6ae4df563\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94a4c50dd473ea159477a3032070031c70d5a9217c054f826e3c0af6ae4df563\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41221277cc046beb4578e4f6a0c7030df64342da1be667a920080a429836c657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41221277cc046beb4578e4f6a0c7030df64342da1be667a920080a429836c657\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d443225a5db6a200176bd16ba94dd92c1e81eef85d61cff623cfe7959077ef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d443225a5db6a200176bd16ba94dd92c1e81eef85d61cff623cfe7959077ef6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:12Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:12 crc kubenswrapper[4720]: I1013 17:25:12.815264 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lxmjt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b45ec2d-5bea-4007-a49f-224a866f93eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00868f87f1a3020a1db7e50377bbe90203d6e93799c8ec9f3100dd52a6c0cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xsrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lxmjt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:12Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:12 crc kubenswrapper[4720]: I1013 17:25:12.834139 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:12 crc kubenswrapper[4720]: I1013 17:25:12.834382 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:12 crc kubenswrapper[4720]: I1013 17:25:12.834477 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:12 crc kubenswrapper[4720]: I1013 17:25:12.834564 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:12 crc kubenswrapper[4720]: I1013 17:25:12.834641 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:12Z","lastTransitionTime":"2025-10-13T17:25:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:12 crc kubenswrapper[4720]: I1013 17:25:12.837410 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8064812e-b6aa-4f56-81c9-16154c00abad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c28fde07efa17d20ddecd1c8a0da09a6fa09e98a90b14718565c00aef9c0d25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bbd41d9b0518c8b060b1d01c335b6c9923e7196624fbd49326133faaa2e36ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd8a4dfb390a1643edb4eb4bd01970eaffd8dab0f748effac9e7c691ef4a3ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://013dfade47eccae4339fca0d379d914f63502613f87f1bc621113f7008634475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69d511a66a84fb0fa90f426bd4609c2b3ad915571802c6c024e0370d6e1683a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1aefa46ee989416f0e4fd05f6f12031da637a354fadd7f6caef75ddc2c74ddf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9825f520cb9cff0f39359c2b7f6d8ef1e3163ec1
c94f80c037ea4afb4300c5d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://696c3d0d1105e26a509a7a8f91eb2ade836194b7de933f28b7a3d380b5964ee3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-13T17:24:51Z\\\",\\\"message\\\":\\\"loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/catalog-operator-metrics\\\\\\\"}\\\\nI1013 17:24:51.349805 6169 services_controller.go:454] Service openshift-machine-config-operator/machine-config-daemon for network=default has 2 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI1013 17:24:51.349817 6169 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1013 17:24:51.349829 6169 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-authentication-operator/metrics\\\\\\\"}\\\\nI1013 17:24:51.349839 6169 services_controller.go:360] Finished syncing service metrics on namespace openshift-authentication-operator for network=default : 675.638µs\\\\nI1013 17:24:51.349848 6169 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nF1013 17:24:51.349740 6169 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller 
initializatio\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:25:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34b1fffbebfbd3599bed22fe3fe80129f5b704560c54d09e9ab5e9ea8bba4513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\
":[{\\\"containerID\\\":\\\"cri-o://be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:12Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:12 crc kubenswrapper[4720]: I1013 17:25:12.855299 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bvtrx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b35c333b-0d4e-4d9a-a8fe-a1c6f5fbbf75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03411ae3786161605ed0cb08a21661c7c7a4cc93f91320586547b8ffab3e90ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g97k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bvtrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:12Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:12 crc kubenswrapper[4720]: I1013 17:25:12.871912 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea323a1a-c607-40cf-b61c-b7ff03399c45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88b8a46e2e2c2342bd22a4a9b42cadd0188ec83f23815773f6ffb46be79fdf83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7d875b0d7eb9f13bc697597a33f7196598ae679980b13842e92acf20477526f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e18d0e9df43d366a6e7088cc3d5bdc794882828f60f008eaf29e788e0141ac2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef2ba92cea782dd4fd31a511e1086777b58c00340014dff74ff6615b07bd10c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7977e20d8558810fb9f8f827830d0378de7d15c71340d396a95b506902c0962\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-13T17:24:34Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1013 17:24:28.736714 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1013 17:24:28.738546 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445610647/tls.crt::/tmp/serving-cert-445610647/tls.key\\\\\\\"\\\\nI1013 17:24:34.860768 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1013 17:24:34.864611 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1013 17:24:34.864632 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1013 17:24:34.864654 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1013 17:24:34.864660 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1013 17:24:34.871230 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1013 17:24:34.871248 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1013 17:24:34.871273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 17:24:34.871290 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 17:24:34.871305 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1013 17:24:34.871314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1013 17:24:34.871324 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1013 17:24:34.871335 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1013 17:24:34.872239 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd536f783e3f8f974f25772994c3d00a01c2ba66997a6e06236f1b83b890f8f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00ad2086cbe25df22e52552acfb410e252eb17a610267b47eb7cfc5dced1c9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00ad2086cbe25df22e52552acfb410e252eb17a610267b47eb7cfc5dced1c9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:12Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:12 crc kubenswrapper[4720]: I1013 17:25:12.936681 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:12 crc kubenswrapper[4720]: I1013 17:25:12.936714 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:12 crc kubenswrapper[4720]: I1013 17:25:12.936726 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:12 crc kubenswrapper[4720]: I1013 17:25:12.936741 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:12 crc kubenswrapper[4720]: I1013 17:25:12.936752 4720 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:12Z","lastTransitionTime":"2025-10-13T17:25:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:13 crc kubenswrapper[4720]: I1013 17:25:13.039336 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:13 crc kubenswrapper[4720]: I1013 17:25:13.039365 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:13 crc kubenswrapper[4720]: I1013 17:25:13.039374 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:13 crc kubenswrapper[4720]: I1013 17:25:13.039387 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:13 crc kubenswrapper[4720]: I1013 17:25:13.039398 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:13Z","lastTransitionTime":"2025-10-13T17:25:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:13 crc kubenswrapper[4720]: I1013 17:25:13.142216 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:13 crc kubenswrapper[4720]: I1013 17:25:13.142249 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:13 crc kubenswrapper[4720]: I1013 17:25:13.142259 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:13 crc kubenswrapper[4720]: I1013 17:25:13.142273 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:13 crc kubenswrapper[4720]: I1013 17:25:13.142283 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:13Z","lastTransitionTime":"2025-10-13T17:25:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:13 crc kubenswrapper[4720]: I1013 17:25:13.168073 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 17:25:13 crc kubenswrapper[4720]: I1013 17:25:13.168082 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 17:25:13 crc kubenswrapper[4720]: E1013 17:25:13.168421 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 13 17:25:13 crc kubenswrapper[4720]: E1013 17:25:13.168446 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 13 17:25:13 crc kubenswrapper[4720]: I1013 17:25:13.168158 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 17:25:13 crc kubenswrapper[4720]: E1013 17:25:13.168833 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 13 17:25:13 crc kubenswrapper[4720]: I1013 17:25:13.244238 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:13 crc kubenswrapper[4720]: I1013 17:25:13.244299 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:13 crc kubenswrapper[4720]: I1013 17:25:13.244322 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:13 crc kubenswrapper[4720]: I1013 17:25:13.244349 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:13 crc kubenswrapper[4720]: I1013 17:25:13.244370 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:13Z","lastTransitionTime":"2025-10-13T17:25:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:13 crc kubenswrapper[4720]: I1013 17:25:13.347273 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:13 crc kubenswrapper[4720]: I1013 17:25:13.347336 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:13 crc kubenswrapper[4720]: I1013 17:25:13.347359 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:13 crc kubenswrapper[4720]: I1013 17:25:13.347386 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:13 crc kubenswrapper[4720]: I1013 17:25:13.347406 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:13Z","lastTransitionTime":"2025-10-13T17:25:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:13 crc kubenswrapper[4720]: I1013 17:25:13.450711 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:13 crc kubenswrapper[4720]: I1013 17:25:13.450765 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:13 crc kubenswrapper[4720]: I1013 17:25:13.450786 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:13 crc kubenswrapper[4720]: I1013 17:25:13.450814 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:13 crc kubenswrapper[4720]: I1013 17:25:13.450836 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:13Z","lastTransitionTime":"2025-10-13T17:25:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:13 crc kubenswrapper[4720]: I1013 17:25:13.536875 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pn6lz_8064812e-b6aa-4f56-81c9-16154c00abad/ovnkube-controller/2.log" Oct 13 17:25:13 crc kubenswrapper[4720]: I1013 17:25:13.538685 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pn6lz_8064812e-b6aa-4f56-81c9-16154c00abad/ovnkube-controller/1.log" Oct 13 17:25:13 crc kubenswrapper[4720]: I1013 17:25:13.542402 4720 generic.go:334] "Generic (PLEG): container finished" podID="8064812e-b6aa-4f56-81c9-16154c00abad" containerID="9825f520cb9cff0f39359c2b7f6d8ef1e3163ec1c94f80c037ea4afb4300c5d3" exitCode=1 Oct 13 17:25:13 crc kubenswrapper[4720]: I1013 17:25:13.542537 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" event={"ID":"8064812e-b6aa-4f56-81c9-16154c00abad","Type":"ContainerDied","Data":"9825f520cb9cff0f39359c2b7f6d8ef1e3163ec1c94f80c037ea4afb4300c5d3"} Oct 13 17:25:13 crc kubenswrapper[4720]: I1013 17:25:13.542842 4720 scope.go:117] "RemoveContainer" containerID="696c3d0d1105e26a509a7a8f91eb2ade836194b7de933f28b7a3d380b5964ee3" Oct 13 17:25:13 crc kubenswrapper[4720]: I1013 17:25:13.543456 4720 scope.go:117] "RemoveContainer" containerID="9825f520cb9cff0f39359c2b7f6d8ef1e3163ec1c94f80c037ea4afb4300c5d3" Oct 13 17:25:13 crc kubenswrapper[4720]: E1013 17:25:13.543732 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-pn6lz_openshift-ovn-kubernetes(8064812e-b6aa-4f56-81c9-16154c00abad)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" podUID="8064812e-b6aa-4f56-81c9-16154c00abad" Oct 13 17:25:13 crc kubenswrapper[4720]: I1013 17:25:13.553470 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:13 crc kubenswrapper[4720]: I1013 17:25:13.553520 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:13 crc kubenswrapper[4720]: I1013 17:25:13.553537 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:13 crc kubenswrapper[4720]: I1013 17:25:13.553560 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:13 crc kubenswrapper[4720]: I1013 17:25:13.553578 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:13Z","lastTransitionTime":"2025-10-13T17:25:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:13 crc kubenswrapper[4720]: I1013 17:25:13.578261 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a1f1696-ed7f-4482-a300-ca25b68e09e3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2507a30017fbd9947ca456b84812f0b26baabf572768870f5a34b3c7699443cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc60672e282de4d3d6a2d8fd64306ec0fc0d032ade655f88afb0c0b9e0a8e589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9cf903094dae4e559bcef3e269240670b5d4693344d600779c394ce1e039b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06eea6556daedf913d87b8b0527b337070528b07cc026de878b4df332598a2d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b452d113d48729b6d4ab59f80138b15e7cb16cc16eef4ec9916901a067986d07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94a4c50dd473ea159477a3032070031c70d5a9217c054f826e3c0af6ae4df563\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94a4c50dd473ea159477a3032070031c70d5a9217c054f826e3c0af6ae4df563\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41221277cc046beb4578e4f6a0c7030df64342da1be667a920080a429836c657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41221277cc046beb4578e4f6a0c7030df64342da1be667a920080a429836c657\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d443225a5db6a200176bd16ba94dd92c1e81eef85d61cff623cfe7959077ef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d443225a5db6a200176bd16ba94dd92c1e81eef85d61cff623cfe7959077ef6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:13Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:13 crc kubenswrapper[4720]: I1013 17:25:13.595473 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:13Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:13 crc kubenswrapper[4720]: I1013 17:25:13.613230 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:13Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:13 crc kubenswrapper[4720]: I1013 17:25:13.629651 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dnjrc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e6efca3-dcc8-488e-8fb1-e14ee0396158\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0f539bcc67139c5b3d51ab01a3114afc20fc44cf5e286dce2b4cad0ea0629c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g676g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c792a043326901c5cbcae8c96b168aee71b56f6e91d0edfc7f58abb5da9c9c28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2
099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g676g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dnjrc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:13Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:13 crc kubenswrapper[4720]: I1013 17:25:13.649686 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea323a1a-c607-40cf-b61c-b7ff03399c45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88b8a46e2e2c2342bd22a4a9b42cadd0188ec83f23815773f6ffb46be79fdf83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7d875b0d7eb9f13bc697597a33f7196598ae679980b13842e92acf20477526f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift
-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e18d0e9df43d366a6e7088cc3d5bdc794882828f60f008eaf29e788e0141ac2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef2ba92cea782dd4fd31a511e1086777b58c00340014dff74ff6615b07bd10c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7977e20d8558810fb9f8f827830d0378de7d15c71340d396a95b506902c0962\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-13T17:24:34Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1013 17:24:28.736714 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1013 17:24:28.738546 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445610647/tls.crt::/tmp/serving-cert-445610647/tls.key\\\\\\\"\\\\nI1013 17:24:34.860768 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1013 17:24:34.864611 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1013 17:24:34.864632 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1013 17:24:34.864654 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1013 17:24:34.864660 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1013 17:24:34.871230 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1013 17:24:34.871248 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1013 17:24:34.871273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 17:24:34.871290 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 17:24:34.871305 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1013 17:24:34.871314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1013 17:24:34.871324 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1013 17:24:34.871335 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1013 17:24:34.872239 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd536f783e3f8f974f25772994c3d00a01c2ba66997a6e06236f1b83b890f8f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00ad2086cbe25df22e52552acfb410e252eb17a610267b47eb7cfc5dced1c9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00ad2086cbe25df22e52552acfb410e252eb17a610267b47eb7cfc5dced1c9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:13Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:13 crc kubenswrapper[4720]: I1013 17:25:13.655983 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:13 crc kubenswrapper[4720]: I1013 17:25:13.656036 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 13 17:25:13 crc kubenswrapper[4720]: I1013 17:25:13.656052 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:13 crc kubenswrapper[4720]: I1013 17:25:13.656075 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:13 crc kubenswrapper[4720]: I1013 17:25:13.656094 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:13Z","lastTransitionTime":"2025-10-13T17:25:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:13 crc kubenswrapper[4720]: I1013 17:25:13.668662 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lxmjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b45ec2d-5bea-4007-a49f-224a866f93eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00868f87f1a3020a1db7e50377bbe90203d6e93799c8ec9f3100dd52a6c0cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubel
et\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xsrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lxmjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:13Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:13 crc kubenswrapper[4720]: I1013 17:25:13.699898 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8064812e-b6aa-4f56-81c9-16154c00abad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c28fde07efa17d20ddecd1c8a0da09a6fa09e98a90b14718565c00aef9c0d25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bbd41d9b0518c8b060b1d01c335b6c9923e7196624fbd49326133faaa2e36ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd8a4dfb390a1643edb4eb4bd01970eaffd8dab0f748effac9e7c691ef4a3ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://013dfade47eccae4339fca0d379d914f63502613f87f1bc621113f7008634475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69d511a66a84fb0fa90f426bd4609c2b3ad915571802c6c024e0370d6e1683a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1aefa46ee989416f0e4fd05f6f12031da637a354fadd7f6caef75ddc2c74ddf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9825f520cb9cff0f39359c2b7f6d8ef1e3163ec1
c94f80c037ea4afb4300c5d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://696c3d0d1105e26a509a7a8f91eb2ade836194b7de933f28b7a3d380b5964ee3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-13T17:24:51Z\\\",\\\"message\\\":\\\"loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/catalog-operator-metrics\\\\\\\"}\\\\nI1013 17:24:51.349805 6169 services_controller.go:454] Service openshift-machine-config-operator/machine-config-daemon for network=default has 2 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI1013 17:24:51.349817 6169 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1013 17:24:51.349829 6169 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-authentication-operator/metrics\\\\\\\"}\\\\nI1013 17:24:51.349839 6169 services_controller.go:360] Finished syncing service metrics on namespace openshift-authentication-operator for network=default : 675.638µs\\\\nI1013 17:24:51.349848 6169 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nF1013 17:24:51.349740 6169 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initializatio\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9825f520cb9cff0f39359c2b7f6d8ef1e3163ec1c94f80c037ea4afb4300c5d3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-13T17:25:13Z\\\",\\\"message\\\":\\\"_controller.go:776] Recording success event on pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI1013 17:25:13.141326 6449 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 2.698682ms)\\\\nI1013 17:25:13.141338 6449 ovnkube_controller.go:900] Cache entry expected pod with UID \\\\\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\\\\\" but failed to find it\\\\nI1013 17:25:13.141346 6449 ovnkube_controller.go:804] Add Logical Switch Port event expected pod with UID \\\\\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\\\\\" in cache\\\\nI1013 17:25:13.141945 6449 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1013 17:25:13.142064 6449 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1013 17:25:13.142100 6449 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1013 17:25:13.142115 6449 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1013 17:25:13.142133 6449 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1013 17:25:13.142159 6449 factory.go:656] Stopping watch factory\\\\nI1013 17:25:13.142173 6449 ovnkube.go:599] 
Stopped ovnkube\\\\nI1013 17:25:13.142239 6449 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1013 17:25:13.142339 6449 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T17:25:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34b1fffbebfbd3599bed22fe3fe80129f5b704560c54d09e9ab5e9ea8bba4513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"19
2.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:13Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:13 crc kubenswrapper[4720]: I1013 17:25:13.715152 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bvtrx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b35c333b-0d4e-4d9a-a8fe-a1c6f5fbbf75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03411ae3786161605ed0cb08a21661c7c7a4cc93f91320586547b8ffab3e90ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g97k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bvtrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:13Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:13 crc kubenswrapper[4720]: I1013 17:25:13.729058 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pmjlm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f85220d6-f940-4538-806a-2b26bacd7b09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6315723f6cd375a392c8c427a737021b7e01415c902514b09ee43407749caa34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7795z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:39Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pmjlm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:13Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:13 crc kubenswrapper[4720]: I1013 17:25:13.747529 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7baf94d8-250c-4568-ab1c-a1429b8caf70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435ba16189d6d19892aaab158512c1a84a72103779e3cc4b2c634de344583c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e566f3b66d404dd8da93bba1745b9e67cd63532330591f5b735f6826c07eae2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ec64662f56c2609ebac1383a78840ba4e0d9fb1fe10e666169285d3852e6b40\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://32d8f079c2674e0c6bea1837755db23bc3205da6a01ac1896d6293226d9c6fb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:13Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:13 crc kubenswrapper[4720]: I1013 17:25:13.759696 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:13 crc kubenswrapper[4720]: I1013 17:25:13.759745 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:13 crc kubenswrapper[4720]: I1013 17:25:13.759761 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:13 crc kubenswrapper[4720]: I1013 17:25:13.759784 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:13 crc kubenswrapper[4720]: I1013 17:25:13.759801 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:13Z","lastTransitionTime":"2025-10-13T17:25:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:13 crc kubenswrapper[4720]: I1013 17:25:13.761656 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd8dd666-aa42-4598-bb52-c0cd9345384d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e96fe80c0a88f19c6c5705c7fd945c3282a104b7767db5c217b6364c045d649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f9d9053346b85f12f6cf781f1699dbf8aea670b7e1cc5f4fdc1ffac2c969712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d586c8b48f5d2ca87e3a758dd265611f11678f6fa8dd1118a859923c331c4f67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19c59ec68cb6306af37b90cd4583f5a1a266e93f77566d756223eecf7cdadb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19c59ec68cb6306af37b90cd4583f5a1a266e93f77566d756223eecf7cdadb76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:13Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:13 crc kubenswrapper[4720]: I1013 17:25:13.777640 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:13Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:13 crc kubenswrapper[4720]: I1013 17:25:13.791410 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://705766b44d481300d0fd8cc654aa2dc9046733e5a71ad3a7ab789f85e737dcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a69b8825692cbbb7cce0a52b1ccb7b137ce6c9f8a8d788fff95bb7228cb40e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:13Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:13 crc kubenswrapper[4720]: I1013 17:25:13.804029 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f85e4aab30e7f49ef05e48cc22fc22e0291e32604a8f74ae1cc2ea57b5889e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:13Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:13 crc kubenswrapper[4720]: I1013 17:25:13.816468 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f856022da70859465022db433585bd8cad12f8cd2aeae7943491d3f7e5049256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:13Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:13 crc kubenswrapper[4720]: I1013 17:25:13.825958 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce442c80-fcde-4b79-b6f9-f8f25771dfd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10e7a946185d6502e37a1ab60ad1eca36cce991f230188d22b2d6e2ea554f920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4cjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://864e330267c8cec8487e1dcf4a50bbb2ba37d81d80361f0873e6bf69de1ed721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4cjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-htwnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:13Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:13 crc kubenswrapper[4720]: I1013 17:25:13.843897 4720 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5rgfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d7408e5-b529-4396-92b3-2fed275c3250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://439dea1ed9724a215fa1aa1f4a4bd84a4bb998f2f86814f5f4eb46989242f11c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c62b3a707852ca08b9d89356832535447d4f46ec09ab4a94cb00a86b1c1c5b03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c62b3a707852ca08b9d89356832535447d4f46ec09ab4a94cb00a86b1c1c5b03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb704e899e1b5a0193271209940163a17a1dcb93a2fc5d447beb8a081799897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb704e899e1b5a0193271209940163a17a1dcb93a2fc5d447beb8a081799897b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f42ff97f0a5fb40b52d5445a00eb8f70f8028610b76baf33117b425b821b5ec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f42ff97f0a5fb40b52d5445a00eb8f70f8028610b76baf33117b425b821b5ec2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://471bb77042e72bcac4bb28dd26f6151a7154bea0b3ec92272e405e9b22bc0170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://471bb77042e72bcac4bb28dd26f6151a7154bea0b3ec92272e405e9b22bc0170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20cb8981843cc39e8722f560f7a7e75483c93a30edffde9835533f8dc5b50587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20cb8981843cc39e8722f560f7a7e75483c93a30edffde9835533f8dc5b50587\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79851ed8c84027d1bb9eccb69bed004ae1cdbe934245c66325eafe42b12ea808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79851ed8c84027d1bb9eccb69bed004ae1cdbe934245c66325eafe42b12ea808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5rgfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:13Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:13 crc kubenswrapper[4720]: I1013 17:25:13.854326 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c6ntg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxbvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxbvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c6ntg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:13Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:13 crc kubenswrapper[4720]: I1013 17:25:13.862038 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:13 crc kubenswrapper[4720]: I1013 17:25:13.862088 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:13 crc kubenswrapper[4720]: I1013 17:25:13.862105 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Oct 13 17:25:13 crc kubenswrapper[4720]: I1013 17:25:13.862127 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:13 crc kubenswrapper[4720]: I1013 17:25:13.862143 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:13Z","lastTransitionTime":"2025-10-13T17:25:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:13 crc kubenswrapper[4720]: I1013 17:25:13.964376 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:13 crc kubenswrapper[4720]: I1013 17:25:13.964433 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:13 crc kubenswrapper[4720]: I1013 17:25:13.964449 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:13 crc kubenswrapper[4720]: I1013 17:25:13.964473 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:13 crc kubenswrapper[4720]: I1013 17:25:13.964491 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:13Z","lastTransitionTime":"2025-10-13T17:25:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:14 crc kubenswrapper[4720]: I1013 17:25:14.067488 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:14 crc kubenswrapper[4720]: I1013 17:25:14.067558 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:14 crc kubenswrapper[4720]: I1013 17:25:14.067581 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:14 crc kubenswrapper[4720]: I1013 17:25:14.067612 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:14 crc kubenswrapper[4720]: I1013 17:25:14.067635 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:14Z","lastTransitionTime":"2025-10-13T17:25:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:14 crc kubenswrapper[4720]: I1013 17:25:14.167280 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-c6ntg" Oct 13 17:25:14 crc kubenswrapper[4720]: E1013 17:25:14.167459 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c6ntg" podUID="c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61" Oct 13 17:25:14 crc kubenswrapper[4720]: I1013 17:25:14.169798 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:14 crc kubenswrapper[4720]: I1013 17:25:14.169868 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:14 crc kubenswrapper[4720]: I1013 17:25:14.169890 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:14 crc kubenswrapper[4720]: I1013 17:25:14.169917 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:14 crc kubenswrapper[4720]: I1013 17:25:14.169938 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:14Z","lastTransitionTime":"2025-10-13T17:25:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:14 crc kubenswrapper[4720]: I1013 17:25:14.273251 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:14 crc kubenswrapper[4720]: I1013 17:25:14.273313 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:14 crc kubenswrapper[4720]: I1013 17:25:14.273330 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:14 crc kubenswrapper[4720]: I1013 17:25:14.273354 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:14 crc kubenswrapper[4720]: I1013 17:25:14.273371 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:14Z","lastTransitionTime":"2025-10-13T17:25:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:14 crc kubenswrapper[4720]: I1013 17:25:14.376263 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:14 crc kubenswrapper[4720]: I1013 17:25:14.376326 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:14 crc kubenswrapper[4720]: I1013 17:25:14.376351 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:14 crc kubenswrapper[4720]: I1013 17:25:14.376381 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:14 crc kubenswrapper[4720]: I1013 17:25:14.376403 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:14Z","lastTransitionTime":"2025-10-13T17:25:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:14 crc kubenswrapper[4720]: I1013 17:25:14.479237 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:14 crc kubenswrapper[4720]: I1013 17:25:14.479310 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:14 crc kubenswrapper[4720]: I1013 17:25:14.479333 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:14 crc kubenswrapper[4720]: I1013 17:25:14.479361 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:14 crc kubenswrapper[4720]: I1013 17:25:14.479378 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:14Z","lastTransitionTime":"2025-10-13T17:25:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:14 crc kubenswrapper[4720]: I1013 17:25:14.548711 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pn6lz_8064812e-b6aa-4f56-81c9-16154c00abad/ovnkube-controller/2.log" Oct 13 17:25:14 crc kubenswrapper[4720]: I1013 17:25:14.553185 4720 scope.go:117] "RemoveContainer" containerID="9825f520cb9cff0f39359c2b7f6d8ef1e3163ec1c94f80c037ea4afb4300c5d3" Oct 13 17:25:14 crc kubenswrapper[4720]: E1013 17:25:14.553498 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-pn6lz_openshift-ovn-kubernetes(8064812e-b6aa-4f56-81c9-16154c00abad)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" podUID="8064812e-b6aa-4f56-81c9-16154c00abad" Oct 13 17:25:14 crc kubenswrapper[4720]: I1013 17:25:14.572169 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea323a1a-c607-40cf-b61c-b7ff03399c45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88b8a46e2e2c2342bd22a4a9b42cadd0188ec83f23815773f6ffb46be79fdf83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7d875b0d7eb9f13bc697597a33f7196598ae679980b13842e92acf20477526f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e18d0e9df43d366a6e7088cc3d5bdc794882828f60f008eaf29e788e0141ac2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef2ba92cea782dd4fd31a511e1086777b58c00340014dff74ff6615b07bd10c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7977e20d8558810fb9f8f827830d0378de7d15c71340d396a95b506902c0962\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-13T17:24:34Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1013 17:24:28.736714 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1013 17:24:28.738546 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445610647/tls.crt::/tmp/serving-cert-445610647/tls.key\\\\\\\"\\\\nI1013 17:24:34.860768 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1013 17:24:34.864611 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1013 17:24:34.864632 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1013 17:24:34.864654 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1013 17:24:34.864660 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1013 17:24:34.871230 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1013 17:24:34.871248 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1013 17:24:34.871273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 17:24:34.871290 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 17:24:34.871305 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1013 17:24:34.871314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1013 17:24:34.871324 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1013 17:24:34.871335 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1013 17:24:34.872239 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd536f783e3f8f974f25772994c3d00a01c2ba66997a6e06236f1b83b890f8f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00ad2086cbe25df22e52552acfb410e252eb17a610267b47eb7cfc5dced1c9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00ad2086cbe25df22e52552acfb410e252eb17a610267b47eb7cfc5dced1c9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:14Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:14 crc kubenswrapper[4720]: I1013 17:25:14.581891 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:14 crc kubenswrapper[4720]: I1013 17:25:14.581965 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:14 crc kubenswrapper[4720]: I1013 17:25:14.581982 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:14 crc kubenswrapper[4720]: I1013 17:25:14.582005 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:14 crc 
kubenswrapper[4720]: I1013 17:25:14.582021 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:14Z","lastTransitionTime":"2025-10-13T17:25:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:14 crc kubenswrapper[4720]: I1013 17:25:14.586864 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lxmjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b45ec2d-5bea-4007-a49f-224a866f93eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00868f87f1a3020a1db7e50377bbe90203d6e93799c8ec9f3100dd52a6c0cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":
\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xsrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lxmjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:14Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:14 crc kubenswrapper[4720]: I1013 17:25:14.610875 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8064812e-b6aa-4f56-81c9-16154c00abad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c28fde07efa17d20ddecd1c8a0da09a6fa09e98a90b14718565c00aef9c0d25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bbd41d9b0518c8b060b1d01c335b6c9923e7196624fbd49326133faaa2e36ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd8a4dfb390a1643edb4eb4bd01970eaffd8dab0f748effac9e7c691ef4a3ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://013dfade47eccae4339fca0d379d914f63502613f87f1bc621113f7008634475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69d511a66a84fb0fa90f426bd4609c2b3ad915571802c6c024e0370d6e1683a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1aefa46ee989416f0e4fd05f6f12031da637a354fadd7f6caef75ddc2c74ddf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9825f520cb9cff0f39359c2b7f6d8ef1e3163ec1
c94f80c037ea4afb4300c5d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9825f520cb9cff0f39359c2b7f6d8ef1e3163ec1c94f80c037ea4afb4300c5d3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-13T17:25:13Z\\\",\\\"message\\\":\\\"_controller.go:776] Recording success event on pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI1013 17:25:13.141326 6449 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 2.698682ms)\\\\nI1013 17:25:13.141338 6449 ovnkube_controller.go:900] Cache entry expected pod with UID \\\\\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\\\\\" but failed to find it\\\\nI1013 17:25:13.141346 6449 ovnkube_controller.go:804] Add Logical Switch Port event expected pod with UID \\\\\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\\\\\" in cache\\\\nI1013 17:25:13.141945 6449 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1013 17:25:13.142064 6449 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1013 17:25:13.142100 6449 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1013 17:25:13.142115 6449 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1013 17:25:13.142133 6449 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1013 17:25:13.142159 6449 factory.go:656] Stopping watch factory\\\\nI1013 17:25:13.142173 6449 ovnkube.go:599] Stopped ovnkube\\\\nI1013 17:25:13.142239 6449 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1013 17:25:13.142339 6449 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T17:25:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pn6lz_openshift-ovn-kubernetes(8064812e-b6aa-4f56-81c9-16154c00abad)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34b1fffbebfbd3599bed22fe3fe80129f5b704560c54d09e9ab5e9ea8bba4513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:14Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:14 crc kubenswrapper[4720]: I1013 17:25:14.622412 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bvtrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b35c333b-0d4e-4d9a-a8fe-a1c6f5fbbf75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03411ae3786161605ed0cb08a21661c7c7a4cc93f91320586547b8ffab3e90ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g97k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bvtrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:14Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:14 crc kubenswrapper[4720]: I1013 17:25:14.637377 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7baf94d8-250c-4568-ab1c-a1429b8caf70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435ba16189d6d19892aaab158512c1a84a72103779e3cc4b2c634de344583c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e566f3b66d404dd8da93bba1745b9e67cd63532330591f5b735f6826c07eae2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ec64662f56c2609ebac1383a78840ba4e0d9fb1fe10e666169285d3852e6b40\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-
manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://32d8f079c2674e0c6bea1837755db23bc3205da6a01ac1896d6293226d9c6fb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:14Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:14 crc kubenswrapper[4720]: I1013 17:25:14.648760 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd8dd666-aa42-4598-bb52-c0cd9345384d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e96fe80c0a88f19c6c5705c7fd945c3282a104b7767db5c217b6364c045d649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f9d9053346b85f12f6cf781f1699dbf8aea670b7e1cc5f4fdc1ffac2c969712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d586c8b48f5d2ca87e3a758dd265611f11678f6fa8dd1118a859923c331c4f67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19c59ec68cb6306af37b90cd4583f5a1a266e93f77566d756223eecf7cdadb76\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19c59ec68cb6306af37b90cd4583f5a1a266e93f77566d756223eecf7cdadb76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:14Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:14 crc kubenswrapper[4720]: I1013 17:25:14.661877 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:14Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:14 crc kubenswrapper[4720]: I1013 17:25:14.682780 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://705766b44d481300d0fd8cc654aa2dc9046733e5a71ad3a7ab789f85e737dcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a69b8825692cbbb7cce0a52b1ccb7b137ce6c9f8a8d788fff95bb7228cb40e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:14Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:14 crc kubenswrapper[4720]: I1013 17:25:14.684378 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:14 crc kubenswrapper[4720]: I1013 17:25:14.684410 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:14 crc kubenswrapper[4720]: I1013 17:25:14.684420 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:14 crc kubenswrapper[4720]: I1013 17:25:14.684437 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:14 crc kubenswrapper[4720]: I1013 17:25:14.684449 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:14Z","lastTransitionTime":"2025-10-13T17:25:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:14 crc kubenswrapper[4720]: I1013 17:25:14.705826 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f85e4aab30e7f49ef05e48cc22fc22e0291e32604a8f74ae1cc2ea57b5889e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:14Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:14 crc kubenswrapper[4720]: I1013 17:25:14.721164 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f856022da70859465022db433585bd8cad12f8cd2aeae7943491d3f7e5049256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:14Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:14 crc kubenswrapper[4720]: I1013 17:25:14.736819 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pmjlm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f85220d6-f940-4538-806a-2b26bacd7b09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6315723f6cd375a392c8c427a737021b7e01415c902514b09ee43407749caa34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7795z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:39Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pmjlm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:14Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:14 crc kubenswrapper[4720]: I1013 17:25:14.749216 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce442c80-fcde-4b79-b6f9-f8f25771dfd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10e7a946185d6502e37a1ab60ad1eca36cce991f230188d22b2d6e2ea554f920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4cjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://864e330267c8cec8487e1dcf4a50bbb2ba37d81d80361f0873e6bf69de1ed721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4cjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-htwnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:14Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:14 crc kubenswrapper[4720]: I1013 17:25:14.769406 4720 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5rgfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d7408e5-b529-4396-92b3-2fed275c3250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://439dea1ed9724a215fa1aa1f4a4bd84a4bb998f2f86814f5f4eb46989242f11c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c62b3a707852ca08b9d89356832535447d4f46ec09ab4a94cb00a86b1c1c5b03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c62b3a707852ca08b9d89356832535447d4f46ec09ab4a94cb00a86b1c1c5b03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb704e899e1b5a0193271209940163a17a1dcb93a2fc5d447beb8a081799897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb704e899e1b5a0193271209940163a17a1dcb93a2fc5d447beb8a081799897b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f42ff97f0a5fb40b52d5445a00eb8f70f8028610b76baf33117b425b821b5ec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f42ff97f0a5fb40b52d5445a00eb8f70f8028610b76baf33117b425b821b5ec2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://471bb77042e72bcac4bb28dd26f6151a7154bea0b3ec92272e405e9b22bc0170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://471bb77042e72bcac4bb28dd26f6151a7154bea0b3ec92272e405e9b22bc0170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20cb8981843cc39e8722f560f7a7e75483c93a30edffde9835533f8dc5b50587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20cb8981843cc39e8722f560f7a7e75483c93a30edffde9835533f8dc5b50587\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79851ed8c84027d1bb9eccb69bed004ae1cdbe934245c66325eafe42b12ea808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79851ed8c84027d1bb9eccb69bed004ae1cdbe934245c66325eafe42b12ea808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5rgfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:14Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:14 crc kubenswrapper[4720]: I1013 17:25:14.783778 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c6ntg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxbvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxbvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c6ntg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:14Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:14 crc kubenswrapper[4720]: I1013 17:25:14.786183 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:14 crc kubenswrapper[4720]: I1013 17:25:14.786253 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:14 crc kubenswrapper[4720]: I1013 17:25:14.786268 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Oct 13 17:25:14 crc kubenswrapper[4720]: I1013 17:25:14.786312 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:14 crc kubenswrapper[4720]: I1013 17:25:14.786323 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:14Z","lastTransitionTime":"2025-10-13T17:25:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:14 crc kubenswrapper[4720]: I1013 17:25:14.811879 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a1f1696-ed7f-4482-a300-ca25b68e09e3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2507a30017fbd9947ca456b84812f0b26baabf572768870f5a34b3c7699443cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc60672e282de4d3d6a2d8fd64306ec0fc0d032ade655f88afb0c0b9e0a8e589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\
\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9cf903094dae4e559bcef3e269240670b5d4693344d600779c394ce1e039b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06eea6556daedf913d87b8b0527b337070528b07cc026de878b4df332598a2d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b452d113d48729b6d4ab59f80138b15e7cb16cc16eef4ec9916901a067986d07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94a4c50dd473ea159477a3032070031c70d5a9217c054f826e3c0af6ae4df563\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94a4c50dd473ea159477a3032070031c70d5a9217c054f826e3c0af6ae4df563\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":
\\\"cri-o://41221277cc046beb4578e4f6a0c7030df64342da1be667a920080a429836c657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41221277cc046beb4578e4f6a0c7030df64342da1be667a920080a429836c657\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d443225a5db6a200176bd16ba94dd92c1e81eef85d61cff623cfe7959077ef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d443225a5db6a200176bd16ba94dd92c1e81eef85d61cff623cfe7959077ef6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:14Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:14 crc kubenswrapper[4720]: I1013 17:25:14.828779 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:14Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:14 crc kubenswrapper[4720]: I1013 17:25:14.845682 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:14Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:14 crc kubenswrapper[4720]: I1013 17:25:14.859382 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dnjrc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e6efca3-dcc8-488e-8fb1-e14ee0396158\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0f539bcc67139c5b3d51ab01a3114afc20fc44cf5e286dce2b4cad0ea0629c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g676g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c792a043326901c5cbcae8c96b168aee71b56f6e91d0edfc7f58abb5da9c9c28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2
099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g676g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dnjrc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:14Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:14 crc kubenswrapper[4720]: I1013 17:25:14.888685 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:14 crc kubenswrapper[4720]: I1013 17:25:14.888731 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:14 crc kubenswrapper[4720]: I1013 17:25:14.888743 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:14 crc kubenswrapper[4720]: I1013 17:25:14.888759 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:14 crc kubenswrapper[4720]: I1013 17:25:14.888767 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:14Z","lastTransitionTime":"2025-10-13T17:25:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:14 crc kubenswrapper[4720]: I1013 17:25:14.991261 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:14 crc kubenswrapper[4720]: I1013 17:25:14.991294 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:14 crc kubenswrapper[4720]: I1013 17:25:14.991303 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:14 crc kubenswrapper[4720]: I1013 17:25:14.991315 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:14 crc kubenswrapper[4720]: I1013 17:25:14.991325 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:14Z","lastTransitionTime":"2025-10-13T17:25:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:15 crc kubenswrapper[4720]: I1013 17:25:15.093251 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:15 crc kubenswrapper[4720]: I1013 17:25:15.093283 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:15 crc kubenswrapper[4720]: I1013 17:25:15.093291 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:15 crc kubenswrapper[4720]: I1013 17:25:15.093306 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:15 crc kubenswrapper[4720]: I1013 17:25:15.093317 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:15Z","lastTransitionTime":"2025-10-13T17:25:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:15 crc kubenswrapper[4720]: I1013 17:25:15.167383 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 17:25:15 crc kubenswrapper[4720]: I1013 17:25:15.167403 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 17:25:15 crc kubenswrapper[4720]: E1013 17:25:15.167586 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 13 17:25:15 crc kubenswrapper[4720]: E1013 17:25:15.167734 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 13 17:25:15 crc kubenswrapper[4720]: I1013 17:25:15.168079 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 17:25:15 crc kubenswrapper[4720]: E1013 17:25:15.168269 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 13 17:25:15 crc kubenswrapper[4720]: I1013 17:25:15.186294 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7baf94d8-250c-4568-ab1c-a1429b8caf70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435ba16189d6d19892aaab158512c1a84a72103779e3cc4b2c634de344583c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e566f3b66d404dd8da93bba1745b9e67cd63532330591f5b735f6826c07eae2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89
c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ec64662f56c2609ebac1383a78840ba4e0d9fb1fe10e666169285d3852e6b40\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://32d8f079c2674e0c6bea1837755db23bc3205da6a01ac1896d6293226d9c6fb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:15Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:15 crc kubenswrapper[4720]: I1013 17:25:15.196424 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:15 crc kubenswrapper[4720]: I1013 17:25:15.196480 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:15 crc kubenswrapper[4720]: I1013 17:25:15.196498 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:15 crc kubenswrapper[4720]: I1013 17:25:15.196521 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 
17:25:15 crc kubenswrapper[4720]: I1013 17:25:15.196538 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:15Z","lastTransitionTime":"2025-10-13T17:25:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:15 crc kubenswrapper[4720]: I1013 17:25:15.203417 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd8dd666-aa42-4598-bb52-c0cd9345384d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e96fe80c0a88f19c6c5705c7fd945c3282a104b7767db5c217b6364c045d649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f9d9053346b85f12f6cf781f1699dbf8aea670b7e1cc5f4fdc1ffac2c969712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d586c8b48f5d2ca87e3a758dd265611f11678f6fa8dd1118a859923c331c4f67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd
789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19c59ec68cb6306af37b90cd4583f5a1a266e93f77566d756223eecf7cdadb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19c59ec68cb6306af37b90cd4583f5a1a266e93f77566d756223eecf7cdadb76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:15Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:15 crc kubenswrapper[4720]: I1013 17:25:15.221723 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:15Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:15 crc kubenswrapper[4720]: I1013 17:25:15.241380 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://705766b44d481300d0fd8cc654aa2dc9046733e5a71ad3a7ab789f85e737dcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a69b8825692cbbb7cce0a52b1ccb7b137ce6c9f8a8d788fff95bb7228cb40e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:15Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:15 crc kubenswrapper[4720]: I1013 17:25:15.258714 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f85e4aab30e7f49ef05e48cc22fc22e0291e32604a8f74ae1cc2ea57b5889e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:15Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:15 crc kubenswrapper[4720]: I1013 17:25:15.276492 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f856022da70859465022db433585bd8cad12f8cd2aeae7943491d3f7e5049256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:15Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:15 crc kubenswrapper[4720]: I1013 17:25:15.289386 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pmjlm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f85220d6-f940-4538-806a-2b26bacd7b09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6315723f6cd375a392c8c427a737021b7e01415c902514b09ee43407749caa34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7795z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:39Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pmjlm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:15Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:15 crc kubenswrapper[4720]: I1013 17:25:15.298641 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:15 crc kubenswrapper[4720]: I1013 17:25:15.298697 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:15 crc kubenswrapper[4720]: I1013 17:25:15.298714 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:15 crc kubenswrapper[4720]: I1013 17:25:15.298740 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:15 crc kubenswrapper[4720]: I1013 17:25:15.298757 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:15Z","lastTransitionTime":"2025-10-13T17:25:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:15 crc kubenswrapper[4720]: I1013 17:25:15.306110 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce442c80-fcde-4b79-b6f9-f8f25771dfd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10e7a946185d6502e37a1ab60ad1eca36cce991f230188d22b2d6e2ea554f920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4cjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://864e330267c8cec8487e1dcf4a50bbb2ba37d81d80361f0873e6bf69de1ed721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4cjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-htwnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:15Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:15 crc kubenswrapper[4720]: I1013 17:25:15.328124 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5rgfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d7408e5-b529-4396-92b3-2fed275c3250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://439dea1ed9724a215fa1aa1f4a4bd84a4bb998f2f86814f5f4eb46989242f11c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c62b3a707852ca08b9d89356832535447d4f46ec09ab4a94cb00a86b1c1c5b03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c62b3a707852ca08b9d89356832535447d4f46ec09ab4a94cb00a86b1c1c5b03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb704e899e1b5a0193271209940163a17a1dcb93a2fc5d447beb8a081799897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb704e899e1b5a0193271209940163a17a1dcb93a2fc5d447beb8a081799897b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f42ff97f0a5fb40b52d5445a00eb8f70f8028610b76baf33117b425b821b5ec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f42ff97f0a5fb40b52d5445a00eb8f70f8028610b76baf33117b425b821b5ec2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://471bb77042e72bcac4bb28dd26f6151a7154bea0b3ec92272e405e9b22bc0170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://471bb77042e72bcac4bb28dd26f6151a7154bea0b3ec92272e405e9b22bc0170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"reaso
n\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20cb8981843cc39e8722f560f7a7e75483c93a30edffde9835533f8dc5b50587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20cb8981843cc39e8722f560f7a7e75483c93a30edffde9835533f8dc5b50587\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79851ed8c84027d1bb9eccb69bed004ae1cdbe934245c66325eafe42b12ea808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79851ed8c84027d1bb9eccb69bed004ae1cdbe934245c66325eafe42b12ea808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5rgfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-10-13T17:25:15Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:15 crc kubenswrapper[4720]: I1013 17:25:15.344915 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c6ntg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxbvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxbvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c6ntg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:15Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:15 crc kubenswrapper[4720]: I1013 17:25:15.375282 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a1f1696-ed7f-4482-a300-ca25b68e09e3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2507a30017fbd9947ca456b84812f0b26baabf572768870f5a34b3c7699443cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc60672e282de4d3d6a2d8fd64306ec0fc0d032ade655f88afb0c0b9e0a8e589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9cf903094dae4e559bcef3e269240670b5d4693344d600779c394ce1e039b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06eea6556daedf913d87b8b0527b337070528b0
7cc026de878b4df332598a2d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b452d113d48729b6d4ab59f80138b15e7cb16cc16eef4ec9916901a067986d07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94a4c50dd473ea159477a3032070031c70d5a9217c054f826e3c0af6ae4df563\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94a4c50dd473ea159477a3032070031c70d5a9217c054f826e3c0af6ae4df563\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41221277cc046beb4578e4f6a0c7030df64342da1be667a920080a429836c657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41221277cc046beb4578e4f6a0c7030df64342da1be667a920080a429836c657\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d443225a5db6a200176bd16ba94dd92c1e81eef85d61cff623cfe7959077ef6\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d443225a5db6a200176bd16ba94dd92c1e81eef85d61cff623cfe7959077ef6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:15Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:15 crc kubenswrapper[4720]: I1013 17:25:15.393773 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:15Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:15 crc kubenswrapper[4720]: I1013 17:25:15.400936 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:15 crc kubenswrapper[4720]: I1013 17:25:15.400990 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:15 crc kubenswrapper[4720]: I1013 17:25:15.401007 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:15 crc kubenswrapper[4720]: I1013 17:25:15.401029 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:15 crc kubenswrapper[4720]: I1013 17:25:15.401046 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:15Z","lastTransitionTime":"2025-10-13T17:25:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:15 crc kubenswrapper[4720]: I1013 17:25:15.413383 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:15Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:15 crc kubenswrapper[4720]: I1013 17:25:15.428411 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dnjrc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e6efca3-dcc8-488e-8fb1-e14ee0396158\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0f539bcc67139c5b3d51ab01a3114afc20fc44cf5e286dce2b4cad0ea0629c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g676g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c792a043326901c5cbcae8c96b168aee71b56f6e91d0edfc7f58abb5da9c9c28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g676g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dnjrc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:15Z is after 2025-08-24T17:21:41Z" Oct 13 
17:25:15 crc kubenswrapper[4720]: I1013 17:25:15.447953 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea323a1a-c607-40cf-b61c-b7ff03399c45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88b8a46e2e2c2342bd22a4a9b42cadd0188ec83f23815773f6ffb46be79fdf83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7d875b0d7eb9f13bc697597a33f7196598ae679980b13842e92acf20477526f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e18d0e9df43d366a6e7088cc3d5bdc794882828f60f008eaf29e788e0141ac2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef2ba92cea782dd4fd31a511e1086777b58c00340014dff74ff6615b07bd10c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7977e20d8558810fb9f8f827830d0378de7d15c71340d396a95b506902c0962\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-13T17:24:34Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1013 17:24:28.736714 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1013 17:24:28.738546 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445610647/tls.crt::/tmp/serving-cert-445610647/tls.key\\\\\\\"\\\\nI1013 17:24:34.860768 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1013 17:24:34.864611 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1013 17:24:34.864632 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1013 17:24:34.864654 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1013 17:24:34.864660 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1013 17:24:34.871230 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1013 17:24:34.871248 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1013 17:24:34.871273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 17:24:34.871290 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 17:24:34.871305 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1013 17:24:34.871314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1013 17:24:34.871324 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1013 17:24:34.871335 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1013 17:24:34.872239 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd536f783e3f8f974f25772994c3d00a01c2ba66997a6e06236f1b83b890f8f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00ad2086cbe25df22e52552acfb410e252eb17a610267b47eb7cfc5dced1c9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00ad2086cbe25df22e52552acfb410e252eb17a610267b47eb7cfc5dced1c9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:15Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:15 crc kubenswrapper[4720]: I1013 17:25:15.468118 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lxmjt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b45ec2d-5bea-4007-a49f-224a866f93eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00868f87f1a3020a1db7e50377bbe90203d6e93799c8ec9f3100dd52a6c0cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xsrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lxmjt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:15Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:15 crc kubenswrapper[4720]: I1013 17:25:15.496930 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8064812e-b6aa-4f56-81c9-16154c00abad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c28fde07efa17d20ddecd1c8a0da09a6fa09e98a90b14718565c00aef9c0d25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bbd41d9b0518c8b060b1d01c335b6c9923e7196624fbd49326133faaa2e36ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd8a4dfb390a1643edb4eb4bd01970eaffd8dab0f748effac9e7c691ef4a3ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://013dfade47eccae4339fca0d379d914f63502613f87f1bc621113f7008634475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69d511a66a84fb0fa90f426bd4609c2b3ad915571802c6c024e0370d6e1683a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1aefa
46ee989416f0e4fd05f6f12031da637a354fadd7f6caef75ddc2c74ddf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9825f520cb9cff0f39359c2b7f6d8ef1e3163ec1c94f80c037ea4afb4300c5d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9825f520cb9cff0f39359c2b7f6d8ef1e3163ec1c94f80c037ea4afb4300c5d3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-13T17:25:13Z\\\",\\\"message\\\":\\\"_controller.go:776] Recording success event on pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI1013 17:25:13.141326 6449 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 2.698682ms)\\\\nI1013 17:25:13.141338 6449 ovnkube_controller.go:900] Cache entry expected pod with UID \\\\\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\\\\\" but failed to find it\\\\nI1013 17:25:13.141346 6449 ovnkube_controller.go:804] Add Logical Switch Port event expected pod with UID \\\\\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\\\\\" in cache\\\\nI1013 17:25:13.141945 6449 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1013 17:25:13.142064 6449 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1013 17:25:13.142100 6449 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1013 17:25:13.142115 6449 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1013 17:25:13.142133 6449 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1013 17:25:13.142159 6449 factory.go:656] Stopping watch factory\\\\nI1013 17:25:13.142173 6449 ovnkube.go:599] Stopped ovnkube\\\\nI1013 17:25:13.142239 6449 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1013 17:25:13.142339 6449 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T17:25:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-pn6lz_openshift-ovn-kubernetes(8064812e-b6aa-4f56-81c9-16154c00abad)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34b1fffbebfbd3599bed22fe3fe80129f5b704560c54d09e9ab5e9ea8bba4513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:15Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:15 crc kubenswrapper[4720]: I1013 17:25:15.503648 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:15 crc kubenswrapper[4720]: I1013 17:25:15.503689 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:15 crc kubenswrapper[4720]: I1013 17:25:15.503706 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:15 crc kubenswrapper[4720]: I1013 17:25:15.503728 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:15 crc kubenswrapper[4720]: I1013 17:25:15.503745 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:15Z","lastTransitionTime":"2025-10-13T17:25:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:15 crc kubenswrapper[4720]: I1013 17:25:15.512760 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bvtrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b35c333b-0d4e-4d9a-a8fe-a1c6f5fbbf75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03411ae3786161605ed0cb08a21661c7c7a4cc93f91320586547b8ffab3e90ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g97k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bvtrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:15Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:15 crc kubenswrapper[4720]: I1013 17:25:15.606489 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:15 crc kubenswrapper[4720]: I1013 17:25:15.606558 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:15 crc kubenswrapper[4720]: I1013 17:25:15.606582 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:15 crc kubenswrapper[4720]: I1013 17:25:15.606611 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:15 crc kubenswrapper[4720]: I1013 17:25:15.606633 4720 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:15Z","lastTransitionTime":"2025-10-13T17:25:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:15 crc kubenswrapper[4720]: I1013 17:25:15.709869 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:15 crc kubenswrapper[4720]: I1013 17:25:15.709924 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:15 crc kubenswrapper[4720]: I1013 17:25:15.709939 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:15 crc kubenswrapper[4720]: I1013 17:25:15.709963 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:15 crc kubenswrapper[4720]: I1013 17:25:15.709981 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:15Z","lastTransitionTime":"2025-10-13T17:25:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:15 crc kubenswrapper[4720]: I1013 17:25:15.812839 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:15 crc kubenswrapper[4720]: I1013 17:25:15.812953 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:15 crc kubenswrapper[4720]: I1013 17:25:15.812970 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:15 crc kubenswrapper[4720]: I1013 17:25:15.812996 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:15 crc kubenswrapper[4720]: I1013 17:25:15.813015 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:15Z","lastTransitionTime":"2025-10-13T17:25:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:15 crc kubenswrapper[4720]: I1013 17:25:15.915446 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:15 crc kubenswrapper[4720]: I1013 17:25:15.915568 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:15 crc kubenswrapper[4720]: I1013 17:25:15.915586 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:15 crc kubenswrapper[4720]: I1013 17:25:15.915608 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:15 crc kubenswrapper[4720]: I1013 17:25:15.915623 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:15Z","lastTransitionTime":"2025-10-13T17:25:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:16 crc kubenswrapper[4720]: I1013 17:25:16.018153 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:16 crc kubenswrapper[4720]: I1013 17:25:16.018285 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:16 crc kubenswrapper[4720]: I1013 17:25:16.018306 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:16 crc kubenswrapper[4720]: I1013 17:25:16.018329 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:16 crc kubenswrapper[4720]: I1013 17:25:16.018345 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:16Z","lastTransitionTime":"2025-10-13T17:25:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:16 crc kubenswrapper[4720]: I1013 17:25:16.120898 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:16 crc kubenswrapper[4720]: I1013 17:25:16.120955 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:16 crc kubenswrapper[4720]: I1013 17:25:16.120973 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:16 crc kubenswrapper[4720]: I1013 17:25:16.121000 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:16 crc kubenswrapper[4720]: I1013 17:25:16.121016 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:16Z","lastTransitionTime":"2025-10-13T17:25:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:16 crc kubenswrapper[4720]: I1013 17:25:16.167471 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c6ntg" Oct 13 17:25:16 crc kubenswrapper[4720]: E1013 17:25:16.167622 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c6ntg" podUID="c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61" Oct 13 17:25:16 crc kubenswrapper[4720]: I1013 17:25:16.223678 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:16 crc kubenswrapper[4720]: I1013 17:25:16.223712 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:16 crc kubenswrapper[4720]: I1013 17:25:16.223721 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:16 crc kubenswrapper[4720]: I1013 17:25:16.223734 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:16 crc kubenswrapper[4720]: I1013 17:25:16.223743 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:16Z","lastTransitionTime":"2025-10-13T17:25:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:16 crc kubenswrapper[4720]: I1013 17:25:16.326743 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:16 crc kubenswrapper[4720]: I1013 17:25:16.326832 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:16 crc kubenswrapper[4720]: I1013 17:25:16.326850 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:16 crc kubenswrapper[4720]: I1013 17:25:16.326986 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:16 crc kubenswrapper[4720]: I1013 17:25:16.327013 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:16Z","lastTransitionTime":"2025-10-13T17:25:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:16 crc kubenswrapper[4720]: I1013 17:25:16.429969 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:16 crc kubenswrapper[4720]: I1013 17:25:16.430035 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:16 crc kubenswrapper[4720]: I1013 17:25:16.430047 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:16 crc kubenswrapper[4720]: I1013 17:25:16.430078 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:16 crc kubenswrapper[4720]: I1013 17:25:16.430114 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:16Z","lastTransitionTime":"2025-10-13T17:25:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:16 crc kubenswrapper[4720]: I1013 17:25:16.544863 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:16 crc kubenswrapper[4720]: I1013 17:25:16.544932 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:16 crc kubenswrapper[4720]: I1013 17:25:16.544949 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:16 crc kubenswrapper[4720]: I1013 17:25:16.544975 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:16 crc kubenswrapper[4720]: I1013 17:25:16.544993 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:16Z","lastTransitionTime":"2025-10-13T17:25:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:16 crc kubenswrapper[4720]: I1013 17:25:16.648004 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:16 crc kubenswrapper[4720]: I1013 17:25:16.648065 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:16 crc kubenswrapper[4720]: I1013 17:25:16.648081 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:16 crc kubenswrapper[4720]: I1013 17:25:16.648105 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:16 crc kubenswrapper[4720]: I1013 17:25:16.648121 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:16Z","lastTransitionTime":"2025-10-13T17:25:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:16 crc kubenswrapper[4720]: I1013 17:25:16.751230 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:16 crc kubenswrapper[4720]: I1013 17:25:16.751302 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:16 crc kubenswrapper[4720]: I1013 17:25:16.751326 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:16 crc kubenswrapper[4720]: I1013 17:25:16.751356 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:16 crc kubenswrapper[4720]: I1013 17:25:16.751378 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:16Z","lastTransitionTime":"2025-10-13T17:25:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:16 crc kubenswrapper[4720]: I1013 17:25:16.854300 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:16 crc kubenswrapper[4720]: I1013 17:25:16.854355 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:16 crc kubenswrapper[4720]: I1013 17:25:16.854372 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:16 crc kubenswrapper[4720]: I1013 17:25:16.854393 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:16 crc kubenswrapper[4720]: I1013 17:25:16.854410 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:16Z","lastTransitionTime":"2025-10-13T17:25:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:16 crc kubenswrapper[4720]: I1013 17:25:16.957373 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:16 crc kubenswrapper[4720]: I1013 17:25:16.957439 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:16 crc kubenswrapper[4720]: I1013 17:25:16.957458 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:16 crc kubenswrapper[4720]: I1013 17:25:16.957482 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:16 crc kubenswrapper[4720]: I1013 17:25:16.957497 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:16Z","lastTransitionTime":"2025-10-13T17:25:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
[... the five-entry block repeats at 17:25:17.059 and .162 ...]
Oct 13 17:25:17 crc kubenswrapper[4720]: I1013 17:25:17.167966 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 13 17:25:17 crc kubenswrapper[4720]: E1013 17:25:17.168060 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 13 17:25:17 crc kubenswrapper[4720]: I1013 17:25:17.168107 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 13 17:25:17 crc kubenswrapper[4720]: I1013 17:25:17.168134 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 13 17:25:17 crc kubenswrapper[4720]: E1013 17:25:17.168303 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 13 17:25:17 crc kubenswrapper[4720]: E1013 17:25:17.168462 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
[... the five-entry block repeats at 17:25:17.265 and .367 ...]
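The sync failures just above all have the same root cause the node condition keeps reporting: the container runtime finds no CNI configuration file in /etc/kubernetes/cni/net.d/, so no pod sandbox can be given a network. A quick check of what the runtime sees is to look for config files in that directory. The extension set below mirrors what CNI config loaders conventionally accept and is an assumption, as is running the sketch on the node itself (for example from a debug shell such as `oc debug node/crc`):

from pathlib import Path

# Path taken verbatim from the kubelet error message.
CNI_DIR = Path('/etc/kubernetes/cni/net.d')
# Extensions CNI config loaders conventionally accept (assumption).
ACCEPTED = {'.conf', '.conflist', '.json'}

def cni_configs(d: Path):
    """Return the network config files the runtime could load from d."""
    if not d.is_dir():
        return []
    return sorted(p for p in d.iterdir() if p.suffix in ACCEPTED)

if __name__ == '__main__':
    found = cni_configs(CNI_DIR)
    if found:
        for p in found:
            print(f'found CNI config: {p}')
    else:
        print(f'no CNI configuration file in {CNI_DIR} '
              '(matches the kubelet error: the network plugin has not '
              'written its configuration yet)')

An empty result is consistent with this log; once the network plugin or operator writes its config there, the NetworkReady=false condition should clear on its own.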
[... the five-entry block repeats at 17:25:17.470, .572, .675, .777, .880, .983, and 17:25:18.086 ...]
Oct 13 17:25:18 crc kubenswrapper[4720]: I1013 17:25:18.167992 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c6ntg"
Oct 13 17:25:18 crc kubenswrapper[4720]: E1013 17:25:18.168171 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c6ntg" podUID="c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61"
[... the five-entry block repeats at 17:25:18.189 ...]
[... the five-entry block repeats at 17:25:18.293, .396, .499, .602, .705, .808, .911, and 17:25:19.015, .118 ...]
Oct 13 17:25:19 crc kubenswrapper[4720]: I1013 17:25:19.167547 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 13 17:25:19 crc kubenswrapper[4720]: E1013 17:25:19.167706 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
[... the same "No sandbox for pod can be found. Need to start a new one" / "Error syncing pod, skipping" pair follows at 17:25:19.167949/.168001 for pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" and at 17:25:19.168108/.168157 for pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" ...]
[... the five-entry block repeats at 17:25:19.221 ...]
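At this point the same workloads are cycling through the "No sandbox for pod can be found" / "Error syncing pod" pair on every retry, and nothing will change until the CNI config appears. From outside the node it is usually easier to watch the node's Ready condition flip than to tail the journal. A sketch using the kubernetes Python client; the `kubernetes` package, a working kubeconfig, and the node name "crc" (taken from these lines) are the assumptions:

import time
from kubernetes import client, config

def ready_condition(v1, name):
    """Return the node's Ready condition object, or None if absent."""
    node = v1.read_node(name)
    for cond in node.status.conditions or []:
        if cond.type == 'Ready':
            return cond
    return None

if __name__ == '__main__':
    config.load_kube_config()              # use the current kubeconfig context
    v1 = client.CoreV1Api()
    while True:
        cond = ready_condition(v1, 'crc')  # node name from the log
        if cond is None:
            raise SystemExit('node "crc" reports no Ready condition')
        print(f'Ready={cond.status} reason={cond.reason}: {cond.message}')
        if cond.status == 'True':
            break                          # kubelet became ready; stop polling
        time.sleep(5)

While the log above is being emitted, this loop would keep printing Ready=False with reason KubeletNotReady and the same CNI message carried by the condition.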
[... the five-entry block repeats at 17:25:19.324, .427, .530, .632, .734, .838, .940, and 17:25:20.044, .146 ...]
Oct 13 17:25:20 crc kubenswrapper[4720]: I1013 17:25:20.167828 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c6ntg"
Oct 13 17:25:20 crc kubenswrapper[4720]: E1013 17:25:20.167941 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c6ntg" podUID="c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61"
[... the five-entry block repeats at 17:25:20.248 and .350 ...]
[... the five-entry block repeats at 17:25:20.453, .555, .657, .759, .862, .964, and 17:25:21.066 ...]
Oct 13 17:25:21 crc kubenswrapper[4720]: I1013 17:25:21.168167 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 13 17:25:21 crc kubenswrapper[4720]: I1013 17:25:21.168224 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 13 17:25:21 crc kubenswrapper[4720]: E1013 17:25:21.168449 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 13 17:25:21 crc kubenswrapper[4720]: I1013 17:25:21.168529 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 13 17:25:21 crc kubenswrapper[4720]: E1013 17:25:21.168666 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 13 17:25:21 crc kubenswrapper[4720]: E1013 17:25:21.168819 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
[... the five-entry block repeats at 17:25:21.169 and .271 ...]
Has your network provider started?"} Oct 13 17:25:21 crc kubenswrapper[4720]: I1013 17:25:21.373878 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:21 crc kubenswrapper[4720]: I1013 17:25:21.373925 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:21 crc kubenswrapper[4720]: I1013 17:25:21.373934 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:21 crc kubenswrapper[4720]: I1013 17:25:21.373951 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:21 crc kubenswrapper[4720]: I1013 17:25:21.373962 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:21Z","lastTransitionTime":"2025-10-13T17:25:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:21 crc kubenswrapper[4720]: I1013 17:25:21.476003 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:21 crc kubenswrapper[4720]: I1013 17:25:21.476062 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:21 crc kubenswrapper[4720]: I1013 17:25:21.476079 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:21 crc kubenswrapper[4720]: I1013 17:25:21.476101 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:21 crc kubenswrapper[4720]: I1013 17:25:21.476118 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:21Z","lastTransitionTime":"2025-10-13T17:25:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:21 crc kubenswrapper[4720]: I1013 17:25:21.581383 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:21 crc kubenswrapper[4720]: I1013 17:25:21.581527 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:21 crc kubenswrapper[4720]: I1013 17:25:21.581596 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:21 crc kubenswrapper[4720]: I1013 17:25:21.581627 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:21 crc kubenswrapper[4720]: I1013 17:25:21.581653 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:21Z","lastTransitionTime":"2025-10-13T17:25:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:21 crc kubenswrapper[4720]: I1013 17:25:21.684861 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:21 crc kubenswrapper[4720]: I1013 17:25:21.684943 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:21 crc kubenswrapper[4720]: I1013 17:25:21.684983 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:21 crc kubenswrapper[4720]: I1013 17:25:21.685008 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:21 crc kubenswrapper[4720]: I1013 17:25:21.685024 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:21Z","lastTransitionTime":"2025-10-13T17:25:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:21 crc kubenswrapper[4720]: I1013 17:25:21.787709 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:21 crc kubenswrapper[4720]: I1013 17:25:21.787744 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:21 crc kubenswrapper[4720]: I1013 17:25:21.787752 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:21 crc kubenswrapper[4720]: I1013 17:25:21.787765 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:21 crc kubenswrapper[4720]: I1013 17:25:21.787774 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:21Z","lastTransitionTime":"2025-10-13T17:25:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:21 crc kubenswrapper[4720]: I1013 17:25:21.889912 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:21 crc kubenswrapper[4720]: I1013 17:25:21.889945 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:21 crc kubenswrapper[4720]: I1013 17:25:21.889955 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:21 crc kubenswrapper[4720]: I1013 17:25:21.889969 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:21 crc kubenswrapper[4720]: I1013 17:25:21.889978 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:21Z","lastTransitionTime":"2025-10-13T17:25:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:21 crc kubenswrapper[4720]: I1013 17:25:21.992152 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:21 crc kubenswrapper[4720]: I1013 17:25:21.992211 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:21 crc kubenswrapper[4720]: I1013 17:25:21.992224 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:21 crc kubenswrapper[4720]: I1013 17:25:21.992241 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:21 crc kubenswrapper[4720]: I1013 17:25:21.992252 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:21Z","lastTransitionTime":"2025-10-13T17:25:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:22 crc kubenswrapper[4720]: I1013 17:25:22.094496 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:22 crc kubenswrapper[4720]: I1013 17:25:22.094535 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:22 crc kubenswrapper[4720]: I1013 17:25:22.094545 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:22 crc kubenswrapper[4720]: I1013 17:25:22.094559 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:22 crc kubenswrapper[4720]: I1013 17:25:22.094569 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:22Z","lastTransitionTime":"2025-10-13T17:25:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:22 crc kubenswrapper[4720]: I1013 17:25:22.167648 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c6ntg" Oct 13 17:25:22 crc kubenswrapper[4720]: E1013 17:25:22.167878 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-c6ntg" podUID="c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61" Oct 13 17:25:22 crc kubenswrapper[4720]: I1013 17:25:22.182175 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Oct 13 17:25:22 crc kubenswrapper[4720]: I1013 17:25:22.196871 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:22 crc kubenswrapper[4720]: I1013 17:25:22.196912 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:22 crc kubenswrapper[4720]: I1013 17:25:22.196921 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:22 crc kubenswrapper[4720]: I1013 17:25:22.196934 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:22 crc kubenswrapper[4720]: I1013 17:25:22.196943 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:22Z","lastTransitionTime":"2025-10-13T17:25:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:22 crc kubenswrapper[4720]: I1013 17:25:22.298926 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:22 crc kubenswrapper[4720]: I1013 17:25:22.298952 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:22 crc kubenswrapper[4720]: I1013 17:25:22.298961 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:22 crc kubenswrapper[4720]: I1013 17:25:22.298976 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:22 crc kubenswrapper[4720]: I1013 17:25:22.298988 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:22Z","lastTransitionTime":"2025-10-13T17:25:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:22 crc kubenswrapper[4720]: I1013 17:25:22.367813 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:22 crc kubenswrapper[4720]: I1013 17:25:22.367845 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:22 crc kubenswrapper[4720]: I1013 17:25:22.367857 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:22 crc kubenswrapper[4720]: I1013 17:25:22.367870 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:22 crc kubenswrapper[4720]: I1013 17:25:22.367879 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:22Z","lastTransitionTime":"2025-10-13T17:25:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:22 crc kubenswrapper[4720]: E1013 17:25:22.384073 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a530de0f-daad-4050-9522-69c64a451e75\\\",\\\"systemUUID\\\":\\\"00bb7c43-79d6-45b5-bd02-4b71a0ba6837\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:22Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:22 crc kubenswrapper[4720]: I1013 17:25:22.387326 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:22 crc kubenswrapper[4720]: I1013 17:25:22.387382 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
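The patch never reaches the Node object: the API server must first call the node.network-node-identity.openshift.io admission webhook at 127.0.0.1:9743, and that endpoint serves a certificate that expired on 2025-08-24, well before the current clock of 2025-10-13. A short Go sketch for inspecting the served certificate's validity window — the address is taken from the log, and InsecureSkipVerify is deliberate so the handshake completes far enough to read the expired leaf instead of failing exactly as the kubelet's call did:

```go
// certcheck.go — connect to the webhook endpoint named in the error above and
// report the served certificate's validity window. InsecureSkipVerify is used
// on purpose: normal verification would fail just as the kubelet's did.
package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	addr := "127.0.0.1:9743" // endpoint from the failed webhook Post in the log
	conn, err := tls.Dial("tcp", addr, &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Printf("dial %s: %v\n", addr, err)
		return
	}
	defer conn.Close()

	state := conn.ConnectionState()
	if len(state.PeerCertificates) == 0 {
		fmt.Println("no peer certificate presented")
		return
	}
	cert := state.PeerCertificates[0]
	fmt.Printf("subject:   %s\n", cert.Subject)
	fmt.Printf("notBefore: %s\n", cert.NotBefore.Format(time.RFC3339))
	fmt.Printf("notAfter:  %s\n", cert.NotAfter.Format(time.RFC3339))
	if time.Now().UTC().After(cert.NotAfter) {
		// Matches the log: "current time ... is after 2025-08-24T17:21:41Z".
		fmt.Println("certificate has expired")
	}
}
```

This pattern is typical of a CRC image started long after its internal certificates were minted; rotation usually catches up once the cluster settles, but until then status patches that pass through this webhook keep failing.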
event="NodeHasNoDiskPressure" Oct 13 17:25:22 crc kubenswrapper[4720]: I1013 17:25:22.387399 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:22 crc kubenswrapper[4720]: I1013 17:25:22.387419 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:22 crc kubenswrapper[4720]: I1013 17:25:22.387446 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:22Z","lastTransitionTime":"2025-10-13T17:25:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:22 crc kubenswrapper[4720]: E1013 17:25:22.399772 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a530de0f-daad-4050-9522-69c64a451e75\\\",\\\"systemUUID\\\":\\\"00bb7c43-79d6-45b5-bd02-4b71a0ba6837\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:22Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:22 crc kubenswrapper[4720]: I1013 17:25:22.403270 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:22 crc kubenswrapper[4720]: I1013 17:25:22.403313 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
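Each heartbeat in the loop re-records the same Ready condition object that setters.go prints above. For reference, decoding that condition from the exact JSON shown in the log takes only a small struct whose field names mirror the entries — a sketch, not the kubelet's own types:

```go
// condition.go — decode the Ready condition JSON that setters.go logs above.
// The raw string below is copied verbatim from one of the log entries.
package main

import (
	"encoding/json"
	"fmt"
)

// nodeCondition mirrors the fields visible in the log entries.
type nodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:22Z","lastTransitionTime":"2025-10-13T17:25:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`
	var c nodeCondition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		fmt.Println("decode:", err)
		return
	}
	fmt.Printf("%s=%s (%s): %s\n", c.Type, c.Status, c.Reason, c.Message)
}
```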
event="NodeHasNoDiskPressure" Oct 13 17:25:22 crc kubenswrapper[4720]: I1013 17:25:22.403325 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:22 crc kubenswrapper[4720]: I1013 17:25:22.403345 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:22 crc kubenswrapper[4720]: I1013 17:25:22.403355 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:22Z","lastTransitionTime":"2025-10-13T17:25:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:22 crc kubenswrapper[4720]: E1013 17:25:22.417924 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a530de0f-daad-4050-9522-69c64a451e75\\\",\\\"systemUUID\\\":\\\"00bb7c43-79d6-45b5-bd02-4b71a0ba6837\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:22Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:22 crc kubenswrapper[4720]: I1013 17:25:22.422419 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:22 crc kubenswrapper[4720]: I1013 17:25:22.422502 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
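By this point the log shows back-to-back failed attempts at 17:25:22.384, .399, and .417, each tagged "will retry": the kubelet wraps the status patch in a small bounded retry loop per sync pass (in upstream kubelet this budget is the nodeStatusUpdateRetry constant, 5 in recent releases — stated from memory of the upstream source, so treat the exact number as an assumption). A generic sketch of that pattern, with a placeholder patch function rather than kubelet's actual code:

```go
// retry.go — generic sketch of the bounded-retry pattern behind the repeated
// "Error updating node status, will retry" entries. The attempt budget and
// the patch function are placeholders, not kubelet's actual implementation.
package main

import (
	"errors"
	"fmt"
)

const nodeStatusUpdateRetry = 5 // per-sync attempt budget (assumed from upstream kubelet)

func patchNodeStatus() error {
	// Placeholder: in the log, this step fails with the x509 webhook error.
	return errors.New("failed calling webhook: certificate has expired")
}

func main() {
	var err error
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err = patchNodeStatus(); err == nil {
			fmt.Println("node status updated")
			return
		}
		fmt.Printf("attempt %d: %v (will retry)\n", i+1, err)
	}
	fmt.Printf("giving up after %d attempts: %v\n", nodeStatusUpdateRetry, err)
}
```

When every attempt in the budget fails, the kubelet gives up for that sync pass and the whole sequence repeats on the next one — which is why the identical patch payload recurs throughout this stretch of the log.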
event="NodeHasNoDiskPressure" Oct 13 17:25:22 crc kubenswrapper[4720]: I1013 17:25:22.422523 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:22 crc kubenswrapper[4720]: I1013 17:25:22.422546 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:22 crc kubenswrapper[4720]: I1013 17:25:22.422593 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:22Z","lastTransitionTime":"2025-10-13T17:25:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:22 crc kubenswrapper[4720]: E1013 17:25:22.438649 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a530de0f-daad-4050-9522-69c64a451e75\\\",\\\"systemUUID\\\":\\\"00bb7c43-79d6-45b5-bd02-4b71a0ba6837\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:22Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:22 crc kubenswrapper[4720]: I1013 17:25:22.442135 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:22 crc kubenswrapper[4720]: I1013 17:25:22.442183 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 13 17:25:22 crc kubenswrapper[4720]: I1013 17:25:22.442224 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:22 crc kubenswrapper[4720]: I1013 17:25:22.442239 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:22 crc kubenswrapper[4720]: I1013 17:25:22.442252 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:22Z","lastTransitionTime":"2025-10-13T17:25:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:22 crc kubenswrapper[4720]: E1013 17:25:22.460871 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:22Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a530de0f-daad-4050-9522-69c64a451e75\\\",\\\"systemUUID\\\":\\\"00bb7c43-79d6-45b5-bd02-4b71a0ba6837\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:22Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:22 crc kubenswrapper[4720]: E1013 17:25:22.461028 4720 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 13 17:25:22 crc kubenswrapper[4720]: I1013 17:25:22.462694 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Oct 13 17:25:22 crc kubenswrapper[4720]: I1013 17:25:22.462733 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:22 crc kubenswrapper[4720]: I1013 17:25:22.462750 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:22 crc kubenswrapper[4720]: I1013 17:25:22.462775 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:22 crc kubenswrapper[4720]: I1013 17:25:22.462791 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:22Z","lastTransitionTime":"2025-10-13T17:25:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:22 crc kubenswrapper[4720]: I1013 17:25:22.564757 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:22 crc kubenswrapper[4720]: I1013 17:25:22.564797 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:22 crc kubenswrapper[4720]: I1013 17:25:22.564813 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:22 crc kubenswrapper[4720]: I1013 17:25:22.564829 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:22 crc kubenswrapper[4720]: I1013 17:25:22.564843 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:22Z","lastTransitionTime":"2025-10-13T17:25:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:22 crc kubenswrapper[4720]: I1013 17:25:22.667007 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:22 crc kubenswrapper[4720]: I1013 17:25:22.667064 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:22 crc kubenswrapper[4720]: I1013 17:25:22.667082 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:22 crc kubenswrapper[4720]: I1013 17:25:22.667106 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:22 crc kubenswrapper[4720]: I1013 17:25:22.667125 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:22Z","lastTransitionTime":"2025-10-13T17:25:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:22 crc kubenswrapper[4720]: I1013 17:25:22.769705 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:22 crc kubenswrapper[4720]: I1013 17:25:22.769749 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:22 crc kubenswrapper[4720]: I1013 17:25:22.769760 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:22 crc kubenswrapper[4720]: I1013 17:25:22.769775 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:22 crc kubenswrapper[4720]: I1013 17:25:22.769787 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:22Z","lastTransitionTime":"2025-10-13T17:25:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:22 crc kubenswrapper[4720]: I1013 17:25:22.872638 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:22 crc kubenswrapper[4720]: I1013 17:25:22.872720 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:22 crc kubenswrapper[4720]: I1013 17:25:22.872797 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:22 crc kubenswrapper[4720]: I1013 17:25:22.872821 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:22 crc kubenswrapper[4720]: I1013 17:25:22.872838 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:22Z","lastTransitionTime":"2025-10-13T17:25:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:22 crc kubenswrapper[4720]: I1013 17:25:22.975168 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:22 crc kubenswrapper[4720]: I1013 17:25:22.975217 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:22 crc kubenswrapper[4720]: I1013 17:25:22.975227 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:22 crc kubenswrapper[4720]: I1013 17:25:22.975243 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:22 crc kubenswrapper[4720]: I1013 17:25:22.975254 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:22Z","lastTransitionTime":"2025-10-13T17:25:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:23 crc kubenswrapper[4720]: I1013 17:25:23.080577 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:23 crc kubenswrapper[4720]: I1013 17:25:23.080631 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:23 crc kubenswrapper[4720]: I1013 17:25:23.080643 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:23 crc kubenswrapper[4720]: I1013 17:25:23.080660 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:23 crc kubenswrapper[4720]: I1013 17:25:23.080672 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:23Z","lastTransitionTime":"2025-10-13T17:25:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:23 crc kubenswrapper[4720]: I1013 17:25:23.167396 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 17:25:23 crc kubenswrapper[4720]: I1013 17:25:23.167469 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 17:25:23 crc kubenswrapper[4720]: I1013 17:25:23.167486 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 17:25:23 crc kubenswrapper[4720]: E1013 17:25:23.167569 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 13 17:25:23 crc kubenswrapper[4720]: E1013 17:25:23.167686 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 13 17:25:23 crc kubenswrapper[4720]: E1013 17:25:23.167812 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 13 17:25:23 crc kubenswrapper[4720]: I1013 17:25:23.182557 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:23 crc kubenswrapper[4720]: I1013 17:25:23.182619 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:23 crc kubenswrapper[4720]: I1013 17:25:23.182643 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:23 crc kubenswrapper[4720]: I1013 17:25:23.182671 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:23 crc kubenswrapper[4720]: I1013 17:25:23.182693 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:23Z","lastTransitionTime":"2025-10-13T17:25:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:23 crc kubenswrapper[4720]: I1013 17:25:23.284891 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:23 crc kubenswrapper[4720]: I1013 17:25:23.284932 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:23 crc kubenswrapper[4720]: I1013 17:25:23.284945 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:23 crc kubenswrapper[4720]: I1013 17:25:23.284960 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:23 crc kubenswrapper[4720]: I1013 17:25:23.284969 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:23Z","lastTransitionTime":"2025-10-13T17:25:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:23 crc kubenswrapper[4720]: I1013 17:25:23.387576 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:23 crc kubenswrapper[4720]: I1013 17:25:23.387619 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:23 crc kubenswrapper[4720]: I1013 17:25:23.387628 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:23 crc kubenswrapper[4720]: I1013 17:25:23.387643 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:23 crc kubenswrapper[4720]: I1013 17:25:23.387652 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:23Z","lastTransitionTime":"2025-10-13T17:25:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:23 crc kubenswrapper[4720]: I1013 17:25:23.490550 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:23 crc kubenswrapper[4720]: I1013 17:25:23.490585 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:23 crc kubenswrapper[4720]: I1013 17:25:23.490594 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:23 crc kubenswrapper[4720]: I1013 17:25:23.490608 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:23 crc kubenswrapper[4720]: I1013 17:25:23.490619 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:23Z","lastTransitionTime":"2025-10-13T17:25:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:23 crc kubenswrapper[4720]: I1013 17:25:23.592961 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:23 crc kubenswrapper[4720]: I1013 17:25:23.593060 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:23 crc kubenswrapper[4720]: I1013 17:25:23.593120 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:23 crc kubenswrapper[4720]: I1013 17:25:23.593143 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:23 crc kubenswrapper[4720]: I1013 17:25:23.593216 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:23Z","lastTransitionTime":"2025-10-13T17:25:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:23 crc kubenswrapper[4720]: I1013 17:25:23.695757 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:23 crc kubenswrapper[4720]: I1013 17:25:23.695819 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:23 crc kubenswrapper[4720]: I1013 17:25:23.695831 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:23 crc kubenswrapper[4720]: I1013 17:25:23.695849 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:23 crc kubenswrapper[4720]: I1013 17:25:23.695860 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:23Z","lastTransitionTime":"2025-10-13T17:25:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:23 crc kubenswrapper[4720]: I1013 17:25:23.798010 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:23 crc kubenswrapper[4720]: I1013 17:25:23.798053 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:23 crc kubenswrapper[4720]: I1013 17:25:23.798062 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:23 crc kubenswrapper[4720]: I1013 17:25:23.798077 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:23 crc kubenswrapper[4720]: I1013 17:25:23.798089 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:23Z","lastTransitionTime":"2025-10-13T17:25:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:23 crc kubenswrapper[4720]: I1013 17:25:23.900747 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:23 crc kubenswrapper[4720]: I1013 17:25:23.900791 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:23 crc kubenswrapper[4720]: I1013 17:25:23.900800 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:23 crc kubenswrapper[4720]: I1013 17:25:23.900815 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:23 crc kubenswrapper[4720]: I1013 17:25:23.900824 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:23Z","lastTransitionTime":"2025-10-13T17:25:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:24 crc kubenswrapper[4720]: I1013 17:25:24.003474 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:24 crc kubenswrapper[4720]: I1013 17:25:24.003526 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:24 crc kubenswrapper[4720]: I1013 17:25:24.003542 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:24 crc kubenswrapper[4720]: I1013 17:25:24.003566 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:24 crc kubenswrapper[4720]: I1013 17:25:24.003581 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:24Z","lastTransitionTime":"2025-10-13T17:25:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:24 crc kubenswrapper[4720]: I1013 17:25:24.106068 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:24 crc kubenswrapper[4720]: I1013 17:25:24.106121 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:24 crc kubenswrapper[4720]: I1013 17:25:24.106137 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:24 crc kubenswrapper[4720]: I1013 17:25:24.106161 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:24 crc kubenswrapper[4720]: I1013 17:25:24.106177 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:24Z","lastTransitionTime":"2025-10-13T17:25:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:24 crc kubenswrapper[4720]: I1013 17:25:24.167292 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c6ntg" Oct 13 17:25:24 crc kubenswrapper[4720]: E1013 17:25:24.167562 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-c6ntg" podUID="c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61" Oct 13 17:25:24 crc kubenswrapper[4720]: I1013 17:25:24.207722 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:24 crc kubenswrapper[4720]: I1013 17:25:24.207786 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:24 crc kubenswrapper[4720]: I1013 17:25:24.207808 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:24 crc kubenswrapper[4720]: I1013 17:25:24.207836 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:24 crc kubenswrapper[4720]: I1013 17:25:24.207859 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:24Z","lastTransitionTime":"2025-10-13T17:25:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:24 crc kubenswrapper[4720]: I1013 17:25:24.310371 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:24 crc kubenswrapper[4720]: I1013 17:25:24.310413 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:24 crc kubenswrapper[4720]: I1013 17:25:24.310423 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:24 crc kubenswrapper[4720]: I1013 17:25:24.310439 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:24 crc kubenswrapper[4720]: I1013 17:25:24.310449 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:24Z","lastTransitionTime":"2025-10-13T17:25:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:24 crc kubenswrapper[4720]: I1013 17:25:24.412891 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:24 crc kubenswrapper[4720]: I1013 17:25:24.412930 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:24 crc kubenswrapper[4720]: I1013 17:25:24.412939 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:24 crc kubenswrapper[4720]: I1013 17:25:24.412953 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:24 crc kubenswrapper[4720]: I1013 17:25:24.412962 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:24Z","lastTransitionTime":"2025-10-13T17:25:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:24 crc kubenswrapper[4720]: I1013 17:25:24.515313 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:24 crc kubenswrapper[4720]: I1013 17:25:24.515343 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:24 crc kubenswrapper[4720]: I1013 17:25:24.515351 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:24 crc kubenswrapper[4720]: I1013 17:25:24.515363 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:24 crc kubenswrapper[4720]: I1013 17:25:24.515370 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:24Z","lastTransitionTime":"2025-10-13T17:25:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:24 crc kubenswrapper[4720]: I1013 17:25:24.618224 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:24 crc kubenswrapper[4720]: I1013 17:25:24.618282 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:24 crc kubenswrapper[4720]: I1013 17:25:24.618331 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:24 crc kubenswrapper[4720]: I1013 17:25:24.618357 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:24 crc kubenswrapper[4720]: I1013 17:25:24.618373 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:24Z","lastTransitionTime":"2025-10-13T17:25:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:24 crc kubenswrapper[4720]: I1013 17:25:24.720399 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:24 crc kubenswrapper[4720]: I1013 17:25:24.720452 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:24 crc kubenswrapper[4720]: I1013 17:25:24.720468 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:24 crc kubenswrapper[4720]: I1013 17:25:24.720488 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:24 crc kubenswrapper[4720]: I1013 17:25:24.720504 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:24Z","lastTransitionTime":"2025-10-13T17:25:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:24 crc kubenswrapper[4720]: I1013 17:25:24.823348 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:24 crc kubenswrapper[4720]: I1013 17:25:24.823404 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:24 crc kubenswrapper[4720]: I1013 17:25:24.823424 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:24 crc kubenswrapper[4720]: I1013 17:25:24.823450 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:24 crc kubenswrapper[4720]: I1013 17:25:24.823467 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:24Z","lastTransitionTime":"2025-10-13T17:25:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:24 crc kubenswrapper[4720]: I1013 17:25:24.925548 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:24 crc kubenswrapper[4720]: I1013 17:25:24.925606 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:24 crc kubenswrapper[4720]: I1013 17:25:24.925622 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:24 crc kubenswrapper[4720]: I1013 17:25:24.925648 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:24 crc kubenswrapper[4720]: I1013 17:25:24.925665 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:24Z","lastTransitionTime":"2025-10-13T17:25:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:25 crc kubenswrapper[4720]: I1013 17:25:25.028345 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:25 crc kubenswrapper[4720]: I1013 17:25:25.028386 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:25 crc kubenswrapper[4720]: I1013 17:25:25.028394 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:25 crc kubenswrapper[4720]: I1013 17:25:25.028409 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:25 crc kubenswrapper[4720]: I1013 17:25:25.028418 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:25Z","lastTransitionTime":"2025-10-13T17:25:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:25 crc kubenswrapper[4720]: I1013 17:25:25.130793 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:25 crc kubenswrapper[4720]: I1013 17:25:25.130834 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:25 crc kubenswrapper[4720]: I1013 17:25:25.130847 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:25 crc kubenswrapper[4720]: I1013 17:25:25.130864 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:25 crc kubenswrapper[4720]: I1013 17:25:25.130875 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:25Z","lastTransitionTime":"2025-10-13T17:25:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:25 crc kubenswrapper[4720]: I1013 17:25:25.168007 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 17:25:25 crc kubenswrapper[4720]: E1013 17:25:25.168153 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 13 17:25:25 crc kubenswrapper[4720]: I1013 17:25:25.168510 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 17:25:25 crc kubenswrapper[4720]: I1013 17:25:25.168528 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 17:25:25 crc kubenswrapper[4720]: E1013 17:25:25.168630 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 13 17:25:25 crc kubenswrapper[4720]: E1013 17:25:25.168744 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 13 17:25:25 crc kubenswrapper[4720]: I1013 17:25:25.182851 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f856022da70859465022db433585bd8cad12f8cd2aeae7943491d3f7e5049256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:25Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:25 crc kubenswrapper[4720]: I1013 17:25:25.192762 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pmjlm" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f85220d6-f940-4538-806a-2b26bacd7b09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6315723f6cd375a392c8c427a737021b7e01415c902514b09ee43407749caa34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7795z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:39Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pmjlm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:25Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:25 crc kubenswrapper[4720]: I1013 17:25:25.205484 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7baf94d8-250c-4568-ab1c-a1429b8caf70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435ba16189d6d19892aaab158512c1a84a72103779e3cc4b2c634de344583c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e566f3b66d404dd8da93bba1745b9e67cd63532330591f5b735f6826c07eae2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ec64662f56c2609ebac1383a78840ba4e0d9fb1fe10e666169285d3852e6b40\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://32d8f079c2674e0c6bea1837755db23bc3205da6a01ac1896d6293226d9c6fb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:25Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:25 crc kubenswrapper[4720]: I1013 17:25:25.216853 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd8dd666-aa42-4598-bb52-c0cd9345384d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e96fe80c0a88f19c6c5705c7fd945c3282a104b7767db5c217b6364c045d649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f9d9053346b85f12f6cf781f1699dbf8aea670b7e1cc5f4fdc1ffac2c969712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d586c8b48f5d2ca87e3a758dd265611f11678f6fa8dd1118a859923c331c4f67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19c59ec68cb6306af37b90cd4583f5a1a266e93f77566d756223eecf7cdadb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19c59ec68cb6306af37b90cd4583f5a1a266e93f77566d756223eecf7cdadb76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:25Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:25 crc kubenswrapper[4720]: I1013 17:25:25.232100 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:25Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:25 crc kubenswrapper[4720]: I1013 17:25:25.233399 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:25 crc kubenswrapper[4720]: I1013 17:25:25.234203 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:25 crc kubenswrapper[4720]: I1013 17:25:25.234224 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:25 crc kubenswrapper[4720]: I1013 17:25:25.234242 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:25 crc kubenswrapper[4720]: I1013 17:25:25.234253 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:25Z","lastTransitionTime":"2025-10-13T17:25:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:25 crc kubenswrapper[4720]: I1013 17:25:25.250144 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://705766b44d481300d0fd8cc654aa2dc9046733e5a71ad3a7ab789f85e737dcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a69b8825692cbbb7cce0a52b1ccb7b137ce6c9f8a8d788fff95bb7228cb40e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:25Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:25 crc kubenswrapper[4720]: I1013 17:25:25.263776 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f85e4aab30e7f49ef05e48cc22fc22e0291e32604a8f74ae1cc2ea57b5889e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:25Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:25 crc kubenswrapper[4720]: I1013 17:25:25.274723 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce442c80-fcde-4b79-b6f9-f8f25771dfd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10e7a946185d6502e37a1ab60ad1eca36cce991f230188d22b2d6e2ea554f920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4cjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://864e330267c8cec8487e1dcf4a50bbb2ba37d81d80361f0873e6bf69de1ed721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4cjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-htwnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:25Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:25 crc kubenswrapper[4720]: I1013 17:25:25.293351 4720 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5rgfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d7408e5-b529-4396-92b3-2fed275c3250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://439dea1ed9724a215fa1aa1f4a4bd84a4bb998f2f86814f5f4eb46989242f11c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c62b3a707852ca08b9d89356832535447d4f46ec09ab4a94cb00a86b1c1c5b03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c62b3a707852ca08b9d89356832535447d4f46ec09ab4a94cb00a86b1c1c5b03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb704e899e1b5a0193271209940163a17a1dcb93a2fc5d447beb8a081799897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb704e899e1b5a0193271209940163a17a1dcb93a2fc5d447beb8a081799897b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f42ff97f0a5fb40b52d5445a00eb8f70f8028610b76baf33117b425b821b5ec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f42ff97f0a5fb40b52d5445a00eb8f70f8028610b76baf33117b425b821b5ec2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://471bb77042e72bcac4bb28dd26f6151a7154bea0b3ec92272e405e9b22bc0170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://471bb77042e72bcac4bb28dd26f6151a7154bea0b3ec92272e405e9b22bc0170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20cb8981843cc39e8722f560f7a7e75483c93a30edffde9835533f8dc5b50587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20cb8981843cc39e8722f560f7a7e75483c93a30edffde9835533f8dc5b50587\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79851ed8c84027d1bb9eccb69bed004ae1cdbe934245c66325eafe42b12ea808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79851ed8c84027d1bb9eccb69bed004ae1cdbe934245c66325eafe42b12ea808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5rgfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:25Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:25 crc kubenswrapper[4720]: I1013 17:25:25.302837 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c6ntg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxbvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxbvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c6ntg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:25Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:25 crc kubenswrapper[4720]: I1013 17:25:25.328906 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a1f1696-ed7f-4482-a300-ca25b68e09e3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2507a30017fbd9947ca456b84812f0b26baabf572768870f5a34b3c7699443cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc60672e282de4d3d6a2d8fd64306ec0fc0d032ade655f88afb0c0b9e0a8e589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9cf903094dae4e559bcef3e269240670b5d4693344d600779c394ce1e039b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06eea6556daedf913d87b8b0527b337070528b0
7cc026de878b4df332598a2d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b452d113d48729b6d4ab59f80138b15e7cb16cc16eef4ec9916901a067986d07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94a4c50dd473ea159477a3032070031c70d5a9217c054f826e3c0af6ae4df563\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94a4c50dd473ea159477a3032070031c70d5a9217c054f826e3c0af6ae4df563\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41221277cc046beb4578e4f6a0c7030df64342da1be667a920080a429836c657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41221277cc046beb4578e4f6a0c7030df64342da1be667a920080a429836c657\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d443225a5db6a200176bd16ba94dd92c1e81eef85d61cff623cfe7959077ef6\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d443225a5db6a200176bd16ba94dd92c1e81eef85d61cff623cfe7959077ef6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:25Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:25 crc kubenswrapper[4720]: I1013 17:25:25.339084 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:25 crc kubenswrapper[4720]: I1013 17:25:25.339123 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:25 crc kubenswrapper[4720]: I1013 17:25:25.339133 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:25 crc kubenswrapper[4720]: I1013 17:25:25.339148 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:25 crc kubenswrapper[4720]: I1013 17:25:25.339160 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:25Z","lastTransitionTime":"2025-10-13T17:25:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:25 crc kubenswrapper[4720]: I1013 17:25:25.342292 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00fc1050-d713-484e-899e-9bd4e5d7b250\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://591ebf9a13f38e2f458aa34584be8fabc9115335a04af437a2291e7980a903ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18ddea8ad4714addb2f8431d0802d32a36f3d823bb123d4546bc6de1a3a017a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18ddea8ad4714addb2f8431d0802d32a36f3d823bb123d4546bc6de1a3a017a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:25Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:25 crc kubenswrapper[4720]: I1013 17:25:25.352598 4720 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:25Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:25 crc kubenswrapper[4720]: I1013 17:25:25.362933 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:25Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:25 crc kubenswrapper[4720]: I1013 17:25:25.372937 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dnjrc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e6efca3-dcc8-488e-8fb1-e14ee0396158\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0f539bcc67139c5b3d51ab01a3114afc20fc44cf5e286dce2b4cad0ea0629c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g676g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c792a043326901c5cbcae8c96b168aee71b56f6e91d0edfc7f58abb5da9c9c28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g676g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dnjrc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:25Z is after 2025-08-24T17:21:41Z" Oct 13 
17:25:25 crc kubenswrapper[4720]: I1013 17:25:25.389705 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea323a1a-c607-40cf-b61c-b7ff03399c45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88b8a46e2e2c2342bd22a4a9b42cadd0188ec83f23815773f6ffb46be79fdf83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7d875b0d7eb9f13bc697597a33f7196598ae679980b13842e92acf20477526f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e18d0e9df43d366a6e7088cc3d5bdc794882828f60f008eaf29e788e0141ac2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef2ba92cea782dd4fd31a511e1086777b58c00340014dff74ff6615b07bd10c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7977e20d8558810fb9f8f827830d0378de7d15c71340d396a95b506902c0962\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-13T17:24:34Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1013 17:24:28.736714 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1013 17:24:28.738546 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445610647/tls.crt::/tmp/serving-cert-445610647/tls.key\\\\\\\"\\\\nI1013 17:24:34.860768 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1013 17:24:34.864611 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1013 17:24:34.864632 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1013 17:24:34.864654 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1013 17:24:34.864660 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1013 17:24:34.871230 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1013 17:24:34.871248 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1013 17:24:34.871273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 17:24:34.871290 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 17:24:34.871305 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1013 17:24:34.871314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1013 17:24:34.871324 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1013 17:24:34.871335 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1013 17:24:34.872239 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd536f783e3f8f974f25772994c3d00a01c2ba66997a6e06236f1b83b890f8f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00ad2086cbe25df22e52552acfb410e252eb17a610267b47eb7cfc5dced1c9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00ad2086cbe25df22e52552acfb410e252eb17a610267b47eb7cfc5dced1c9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:25Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:25 crc kubenswrapper[4720]: I1013 17:25:25.401894 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lxmjt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b45ec2d-5bea-4007-a49f-224a866f93eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00868f87f1a3020a1db7e50377bbe90203d6e93799c8ec9f3100dd52a6c0cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xsrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lxmjt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:25Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:25 crc kubenswrapper[4720]: I1013 17:25:25.420375 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8064812e-b6aa-4f56-81c9-16154c00abad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c28fde07efa17d20ddecd1c8a0da09a6fa09e98a90b14718565c00aef9c0d25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bbd41d9b0518c8b060b1d01c335b6c9923e7196624fbd49326133faaa2e36ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd8a4dfb390a1643edb4eb4bd01970eaffd8dab0f748effac9e7c691ef4a3ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://013dfade47eccae4339fca0d379d914f63502613f87f1bc621113f7008634475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69d511a66a84fb0fa90f426bd4609c2b3ad915571802c6c024e0370d6e1683a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1aefa
46ee989416f0e4fd05f6f12031da637a354fadd7f6caef75ddc2c74ddf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9825f520cb9cff0f39359c2b7f6d8ef1e3163ec1c94f80c037ea4afb4300c5d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9825f520cb9cff0f39359c2b7f6d8ef1e3163ec1c94f80c037ea4afb4300c5d3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-13T17:25:13Z\\\",\\\"message\\\":\\\"_controller.go:776] Recording success event on pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI1013 17:25:13.141326 6449 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 2.698682ms)\\\\nI1013 17:25:13.141338 6449 ovnkube_controller.go:900] Cache entry expected pod with UID \\\\\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\\\\\" but failed to find it\\\\nI1013 17:25:13.141346 6449 ovnkube_controller.go:804] Add Logical Switch Port event expected pod with UID \\\\\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\\\\\" in cache\\\\nI1013 17:25:13.141945 6449 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1013 17:25:13.142064 6449 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1013 17:25:13.142100 6449 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1013 17:25:13.142115 6449 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1013 17:25:13.142133 6449 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1013 17:25:13.142159 6449 factory.go:656] Stopping watch factory\\\\nI1013 17:25:13.142173 6449 ovnkube.go:599] Stopped ovnkube\\\\nI1013 17:25:13.142239 6449 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1013 17:25:13.142339 6449 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T17:25:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-pn6lz_openshift-ovn-kubernetes(8064812e-b6aa-4f56-81c9-16154c00abad)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34b1fffbebfbd3599bed22fe3fe80129f5b704560c54d09e9ab5e9ea8bba4513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:25Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:25 crc kubenswrapper[4720]: I1013 17:25:25.432170 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bvtrx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b35c333b-0d4e-4d9a-a8fe-a1c6f5fbbf75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03411ae3786161605ed0cb08a21661c7c7a4cc93f91320586547b8ffab3e90ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g97k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bvtrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:25Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:25 crc kubenswrapper[4720]: I1013 17:25:25.441811 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:25 crc kubenswrapper[4720]: I1013 17:25:25.441863 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:25 crc kubenswrapper[4720]: I1013 17:25:25.441884 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:25 crc kubenswrapper[4720]: I1013 17:25:25.441912 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:25 crc kubenswrapper[4720]: I1013 17:25:25.441934 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:25Z","lastTransitionTime":"2025-10-13T17:25:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:25 crc kubenswrapper[4720]: I1013 17:25:25.545136 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:25 crc kubenswrapper[4720]: I1013 17:25:25.545162 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:25 crc kubenswrapper[4720]: I1013 17:25:25.545171 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:25 crc kubenswrapper[4720]: I1013 17:25:25.545184 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:25 crc kubenswrapper[4720]: I1013 17:25:25.545206 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:25Z","lastTransitionTime":"2025-10-13T17:25:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:25 crc kubenswrapper[4720]: I1013 17:25:25.647260 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:25 crc kubenswrapper[4720]: I1013 17:25:25.647450 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:25 crc kubenswrapper[4720]: I1013 17:25:25.647526 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:25 crc kubenswrapper[4720]: I1013 17:25:25.647602 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:25 crc kubenswrapper[4720]: I1013 17:25:25.647683 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:25Z","lastTransitionTime":"2025-10-13T17:25:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:25 crc kubenswrapper[4720]: I1013 17:25:25.749737 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:25 crc kubenswrapper[4720]: I1013 17:25:25.749774 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:25 crc kubenswrapper[4720]: I1013 17:25:25.749785 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:25 crc kubenswrapper[4720]: I1013 17:25:25.749802 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:25 crc kubenswrapper[4720]: I1013 17:25:25.749814 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:25Z","lastTransitionTime":"2025-10-13T17:25:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:25 crc kubenswrapper[4720]: I1013 17:25:25.852003 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:25 crc kubenswrapper[4720]: I1013 17:25:25.852036 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:25 crc kubenswrapper[4720]: I1013 17:25:25.852047 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:25 crc kubenswrapper[4720]: I1013 17:25:25.852063 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:25 crc kubenswrapper[4720]: I1013 17:25:25.852074 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:25Z","lastTransitionTime":"2025-10-13T17:25:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:25 crc kubenswrapper[4720]: I1013 17:25:25.955311 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:25 crc kubenswrapper[4720]: I1013 17:25:25.955368 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:25 crc kubenswrapper[4720]: I1013 17:25:25.955384 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:25 crc kubenswrapper[4720]: I1013 17:25:25.955409 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:25 crc kubenswrapper[4720]: I1013 17:25:25.955425 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:25Z","lastTransitionTime":"2025-10-13T17:25:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:26 crc kubenswrapper[4720]: I1013 17:25:26.057653 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:26 crc kubenswrapper[4720]: I1013 17:25:26.057701 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:26 crc kubenswrapper[4720]: I1013 17:25:26.057714 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:26 crc kubenswrapper[4720]: I1013 17:25:26.057731 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:26 crc kubenswrapper[4720]: I1013 17:25:26.057743 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:26Z","lastTransitionTime":"2025-10-13T17:25:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:26 crc kubenswrapper[4720]: I1013 17:25:26.160308 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:26 crc kubenswrapper[4720]: I1013 17:25:26.160346 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:26 crc kubenswrapper[4720]: I1013 17:25:26.160358 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:26 crc kubenswrapper[4720]: I1013 17:25:26.160374 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:26 crc kubenswrapper[4720]: I1013 17:25:26.160385 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:26Z","lastTransitionTime":"2025-10-13T17:25:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:26 crc kubenswrapper[4720]: I1013 17:25:26.167557 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c6ntg" Oct 13 17:25:26 crc kubenswrapper[4720]: E1013 17:25:26.167682 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-c6ntg" podUID="c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61" Oct 13 17:25:26 crc kubenswrapper[4720]: I1013 17:25:26.222395 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61-metrics-certs\") pod \"network-metrics-daemon-c6ntg\" (UID: \"c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61\") " pod="openshift-multus/network-metrics-daemon-c6ntg" Oct 13 17:25:26 crc kubenswrapper[4720]: E1013 17:25:26.222521 4720 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 13 17:25:26 crc kubenswrapper[4720]: E1013 17:25:26.222580 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61-metrics-certs podName:c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61 nodeName:}" failed. No retries permitted until 2025-10-13 17:25:58.22256323 +0000 UTC m=+103.679813362 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61-metrics-certs") pod "network-metrics-daemon-c6ntg" (UID: "c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 13 17:25:26 crc kubenswrapper[4720]: I1013 17:25:26.262643 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:26 crc kubenswrapper[4720]: I1013 17:25:26.262677 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:26 crc kubenswrapper[4720]: I1013 17:25:26.262688 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:26 crc kubenswrapper[4720]: I1013 17:25:26.262704 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:26 crc kubenswrapper[4720]: I1013 17:25:26.262716 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:26Z","lastTransitionTime":"2025-10-13T17:25:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:26 crc kubenswrapper[4720]: I1013 17:25:26.364410 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:26 crc kubenswrapper[4720]: I1013 17:25:26.364454 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:26 crc kubenswrapper[4720]: I1013 17:25:26.364466 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:26 crc kubenswrapper[4720]: I1013 17:25:26.364485 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:26 crc kubenswrapper[4720]: I1013 17:25:26.364499 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:26Z","lastTransitionTime":"2025-10-13T17:25:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:26 crc kubenswrapper[4720]: I1013 17:25:26.466414 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:26 crc kubenswrapper[4720]: I1013 17:25:26.466478 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:26 crc kubenswrapper[4720]: I1013 17:25:26.466488 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:26 crc kubenswrapper[4720]: I1013 17:25:26.466507 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:26 crc kubenswrapper[4720]: I1013 17:25:26.466518 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:26Z","lastTransitionTime":"2025-10-13T17:25:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:26 crc kubenswrapper[4720]: I1013 17:25:26.569439 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:26 crc kubenswrapper[4720]: I1013 17:25:26.569485 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:26 crc kubenswrapper[4720]: I1013 17:25:26.569501 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:26 crc kubenswrapper[4720]: I1013 17:25:26.569541 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:26 crc kubenswrapper[4720]: I1013 17:25:26.569558 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:26Z","lastTransitionTime":"2025-10-13T17:25:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:26 crc kubenswrapper[4720]: I1013 17:25:26.678650 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:26 crc kubenswrapper[4720]: I1013 17:25:26.678706 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:26 crc kubenswrapper[4720]: I1013 17:25:26.678715 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:26 crc kubenswrapper[4720]: I1013 17:25:26.678746 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:26 crc kubenswrapper[4720]: I1013 17:25:26.678755 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:26Z","lastTransitionTime":"2025-10-13T17:25:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:26 crc kubenswrapper[4720]: I1013 17:25:26.781532 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:26 crc kubenswrapper[4720]: I1013 17:25:26.781600 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:26 crc kubenswrapper[4720]: I1013 17:25:26.781617 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:26 crc kubenswrapper[4720]: I1013 17:25:26.781644 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:26 crc kubenswrapper[4720]: I1013 17:25:26.781663 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:26Z","lastTransitionTime":"2025-10-13T17:25:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:26 crc kubenswrapper[4720]: I1013 17:25:26.884007 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:26 crc kubenswrapper[4720]: I1013 17:25:26.884117 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:26 crc kubenswrapper[4720]: I1013 17:25:26.884129 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:26 crc kubenswrapper[4720]: I1013 17:25:26.884143 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:26 crc kubenswrapper[4720]: I1013 17:25:26.884153 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:26Z","lastTransitionTime":"2025-10-13T17:25:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:26 crc kubenswrapper[4720]: I1013 17:25:26.987072 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:26 crc kubenswrapper[4720]: I1013 17:25:26.987100 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:26 crc kubenswrapper[4720]: I1013 17:25:26.987108 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:26 crc kubenswrapper[4720]: I1013 17:25:26.987119 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:26 crc kubenswrapper[4720]: I1013 17:25:26.987127 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:26Z","lastTransitionTime":"2025-10-13T17:25:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:27 crc kubenswrapper[4720]: I1013 17:25:27.089528 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:27 crc kubenswrapper[4720]: I1013 17:25:27.089591 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:27 crc kubenswrapper[4720]: I1013 17:25:27.089608 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:27 crc kubenswrapper[4720]: I1013 17:25:27.089660 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:27 crc kubenswrapper[4720]: I1013 17:25:27.089701 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:27Z","lastTransitionTime":"2025-10-13T17:25:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:27 crc kubenswrapper[4720]: I1013 17:25:27.168263 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 17:25:27 crc kubenswrapper[4720]: I1013 17:25:27.168367 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 17:25:27 crc kubenswrapper[4720]: E1013 17:25:27.168529 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 13 17:25:27 crc kubenswrapper[4720]: E1013 17:25:27.168766 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 13 17:25:27 crc kubenswrapper[4720]: I1013 17:25:27.168880 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 17:25:27 crc kubenswrapper[4720]: I1013 17:25:27.169902 4720 scope.go:117] "RemoveContainer" containerID="9825f520cb9cff0f39359c2b7f6d8ef1e3163ec1c94f80c037ea4afb4300c5d3" Oct 13 17:25:27 crc kubenswrapper[4720]: E1013 17:25:27.170284 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-pn6lz_openshift-ovn-kubernetes(8064812e-b6aa-4f56-81c9-16154c00abad)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" podUID="8064812e-b6aa-4f56-81c9-16154c00abad" Oct 13 17:25:27 crc kubenswrapper[4720]: E1013 17:25:27.170466 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 13 17:25:27 crc kubenswrapper[4720]: I1013 17:25:27.191574 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:27 crc kubenswrapper[4720]: I1013 17:25:27.191627 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:27 crc kubenswrapper[4720]: I1013 17:25:27.191645 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:27 crc kubenswrapper[4720]: I1013 17:25:27.191669 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:27 crc kubenswrapper[4720]: I1013 17:25:27.191687 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:27Z","lastTransitionTime":"2025-10-13T17:25:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
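The "back-off 20s restarting failed container=ovnkube-controller" message above reflects kubelet's exponential restart backoff for crash-looping containers, which (per upstream kubelet defaults, stated here as an assumption rather than read from this node) starts around 10s, doubles per failed restart, and caps at 5 minutes. A sketch of that schedule:

```python
# Kubelet restarts a crash-looping container on an exponential schedule:
# roughly 10s, 20s, 40s, ... capped at 5 minutes (values per upstream kubelet
# defaults; treat them as an assumption, not configuration read from this node).
def crashloop_backoffs(base: float = 10.0, cap: float = 300.0, retries: int = 8):
    delay = base
    for _ in range(retries):
        yield min(delay, cap)
        delay *= 2

print(list(crashloop_backoffs()))
# [10.0, 20.0, 40.0, 80.0, 160.0, 300.0, 300.0, 300.0]
# The "back-off 20s" above is therefore the second restart attempt.
```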
Has your network provider started?"} Oct 13 17:25:27 crc kubenswrapper[4720]: I1013 17:25:27.293521 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:27 crc kubenswrapper[4720]: I1013 17:25:27.293561 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:27 crc kubenswrapper[4720]: I1013 17:25:27.293574 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:27 crc kubenswrapper[4720]: I1013 17:25:27.293589 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:27 crc kubenswrapper[4720]: I1013 17:25:27.293602 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:27Z","lastTransitionTime":"2025-10-13T17:25:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:27 crc kubenswrapper[4720]: I1013 17:25:27.395824 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:27 crc kubenswrapper[4720]: I1013 17:25:27.395875 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:27 crc kubenswrapper[4720]: I1013 17:25:27.395892 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:27 crc kubenswrapper[4720]: I1013 17:25:27.395968 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:27 crc kubenswrapper[4720]: I1013 17:25:27.395991 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:27Z","lastTransitionTime":"2025-10-13T17:25:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:27 crc kubenswrapper[4720]: I1013 17:25:27.498471 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:27 crc kubenswrapper[4720]: I1013 17:25:27.498520 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:27 crc kubenswrapper[4720]: I1013 17:25:27.498532 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:27 crc kubenswrapper[4720]: I1013 17:25:27.498550 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:27 crc kubenswrapper[4720]: I1013 17:25:27.498565 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:27Z","lastTransitionTime":"2025-10-13T17:25:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:27 crc kubenswrapper[4720]: I1013 17:25:27.593017 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lxmjt_7b45ec2d-5bea-4007-a49f-224a866f93eb/kube-multus/0.log" Oct 13 17:25:27 crc kubenswrapper[4720]: I1013 17:25:27.593107 4720 generic.go:334] "Generic (PLEG): container finished" podID="7b45ec2d-5bea-4007-a49f-224a866f93eb" containerID="c00868f87f1a3020a1db7e50377bbe90203d6e93799c8ec9f3100dd52a6c0cf6" exitCode=1 Oct 13 17:25:27 crc kubenswrapper[4720]: I1013 17:25:27.593151 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lxmjt" event={"ID":"7b45ec2d-5bea-4007-a49f-224a866f93eb","Type":"ContainerDied","Data":"c00868f87f1a3020a1db7e50377bbe90203d6e93799c8ec9f3100dd52a6c0cf6"} Oct 13 17:25:27 crc kubenswrapper[4720]: I1013 17:25:27.593757 4720 scope.go:117] "RemoveContainer" containerID="c00868f87f1a3020a1db7e50377bbe90203d6e93799c8ec9f3100dd52a6c0cf6" Oct 13 17:25:27 crc kubenswrapper[4720]: I1013 17:25:27.605833 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:27 crc kubenswrapper[4720]: I1013 17:25:27.605857 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:27 crc kubenswrapper[4720]: I1013 17:25:27.605864 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:27 crc kubenswrapper[4720]: I1013 17:25:27.605876 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:27 crc kubenswrapper[4720]: I1013 17:25:27.605885 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:27Z","lastTransitionTime":"2025-10-13T17:25:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:27 crc kubenswrapper[4720]: I1013 17:25:27.608384 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce442c80-fcde-4b79-b6f9-f8f25771dfd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10e7a946185d6502e37a1ab60ad1eca36cce991f230188d22b2d6e2ea554f920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4cjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://864e330267c8cec8487e1dcf4a50bbb2ba37d81d80361f0873e6bf69de1ed721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4cjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-htwnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:27Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:27 crc kubenswrapper[4720]: I1013 17:25:27.631097 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5rgfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d7408e5-b529-4396-92b3-2fed275c3250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://439dea1ed9724a215fa1aa1f4a4bd84a4bb998f2f86814f5f4eb46989242f11c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c62b3a707852ca08b9d89356832535447d4f46ec09ab4a94cb00a86b1c1c5b03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c62b3a707852ca08b9d89356832535447d4f46ec09ab4a94cb00a86b1c1c5b03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb704e899e1b5a0193271209940163a17a1dcb93a2fc5d447beb8a081799897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb704e899e1b5a0193271209940163a17a1dcb93a2fc5d447beb8a081799897b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f42ff97f0a5fb40b52d5445a00eb8f70f8028610b76baf33117b425b821b5ec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f42ff97f0a5fb40b52d5445a00eb8f70f8028610b76baf33117b425b821b5ec2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://471bb77042e72bcac4bb28dd26f6151a7154bea0b3ec92272e405e9b22bc0170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://471bb77042e72bcac4bb28dd26f6151a7154bea0b3ec92272e405e9b22bc0170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20cb8981843cc39e8722f560f7a7e75483c93a30edffde9835533f8dc5b50587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20cb8981843cc39e8722f560f7a7e75483c93a30edffde9835533f8dc5b50587\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79851ed8c84027d1bb9eccb69bed004ae1cdbe934245c66325eafe42b12ea808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79851ed8c84027d1bb9eccb69bed004ae1cdbe934245c66325eafe42b12ea808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5rgfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:27Z is after 
2025-08-24T17:21:41Z" Oct 13 17:25:27 crc kubenswrapper[4720]: I1013 17:25:27.652090 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c6ntg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxbvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxbvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c6ntg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:27Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:27 crc kubenswrapper[4720]: I1013 17:25:27.682781 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
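Every status patch in this stretch fails the same way: the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 presents a certificate that expired on 2025-08-24T17:21:41Z, weeks before the current time in the log. A diagnostic sketch to confirm the expiry from the node itself; it assumes the third-party cryptography package is installed, and the unverified fetch is deliberate since verification is exactly what is failing for the kubelet:

```python
import ssl
from datetime import datetime

from cryptography import x509  # third-party: pip install cryptography

# Endpoint taken from the failing webhook URL in the entries above.
HOST, PORT = "127.0.0.1", 9743

# get_server_certificate() performs an unverified handshake, so it retrieves
# the certificate even though normal verification would reject it as expired.
pem = ssl.get_server_certificate((HOST, PORT))
cert = x509.load_pem_x509_certificate(pem.encode())

print("subject:  ", cert.subject.rfc4514_string())
print("notAfter: ", cert.not_valid_after.isoformat())   # naive UTC datetime
print("expired:  ", cert.not_valid_after < datetime.utcnow())
```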
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a1f1696-ed7f-4482-a300-ca25b68e09e3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2507a30017fbd9947ca456b84812f0b26baabf572768870f5a34b3c7699443cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc60672e282de4d3d6a2d8fd64306ec0fc0d032ade655f88afb0c0b9e0a8e589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9cf903094dae4e559bcef3e269240670b5d4693344d600779c394ce1e039b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06eea6556daedf913d87b8b0527b337070528b0
7cc026de878b4df332598a2d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b452d113d48729b6d4ab59f80138b15e7cb16cc16eef4ec9916901a067986d07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94a4c50dd473ea159477a3032070031c70d5a9217c054f826e3c0af6ae4df563\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94a4c50dd473ea159477a3032070031c70d5a9217c054f826e3c0af6ae4df563\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41221277cc046beb4578e4f6a0c7030df64342da1be667a920080a429836c657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41221277cc046beb4578e4f6a0c7030df64342da1be667a920080a429836c657\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d443225a5db6a200176bd16ba94dd92c1e81eef85d61cff623cfe7959077ef6\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d443225a5db6a200176bd16ba94dd92c1e81eef85d61cff623cfe7959077ef6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:27Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:27 crc kubenswrapper[4720]: I1013 17:25:27.695273 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00fc1050-d713-484e-899e-9bd4e5d7b250\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://591ebf9a13f38e2f458aa34584be8fabc9115335a04af437a2291e7980a903ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18dd
ea8ad4714addb2f8431d0802d32a36f3d823bb123d4546bc6de1a3a017a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18ddea8ad4714addb2f8431d0802d32a36f3d823bb123d4546bc6de1a3a017a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:27Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:27 crc kubenswrapper[4720]: I1013 17:25:27.707885 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:27Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:27 crc kubenswrapper[4720]: I1013 17:25:27.708381 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:27 crc kubenswrapper[4720]: I1013 17:25:27.708415 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:27 crc kubenswrapper[4720]: I1013 17:25:27.708428 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:27 crc kubenswrapper[4720]: I1013 17:25:27.708445 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:27 crc kubenswrapper[4720]: I1013 17:25:27.708458 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:27Z","lastTransitionTime":"2025-10-13T17:25:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:27 crc kubenswrapper[4720]: I1013 17:25:27.718356 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:27Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:27 crc kubenswrapper[4720]: I1013 17:25:27.727994 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dnjrc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e6efca3-dcc8-488e-8fb1-e14ee0396158\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0f539bcc67139c5b3d51ab01a3114afc20fc44cf5e286dce2b4cad0ea0629c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g676g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c792a043326901c5cbcae8c96b168aee71b56f6e91d0edfc7f58abb5da9c9c28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g676g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dnjrc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:27Z is after 2025-08-24T17:21:41Z" Oct 13 
17:25:27 crc kubenswrapper[4720]: I1013 17:25:27.747053 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea323a1a-c607-40cf-b61c-b7ff03399c45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88b8a46e2e2c2342bd22a4a9b42cadd0188ec83f23815773f6ffb46be79fdf83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7d875b0d7eb9f13bc697597a33f7196598ae679980b13842e92acf20477526f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e18d0e9df43d366a6e7088cc3d5bdc794882828f60f008eaf29e788e0141ac2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef2ba92cea782dd4fd31a511e1086777b58c00340014dff74ff6615b07bd10c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7977e20d8558810fb9f8f827830d0378de7d15c71340d396a95b506902c0962\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-13T17:24:34Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1013 17:24:28.736714 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1013 17:24:28.738546 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445610647/tls.crt::/tmp/serving-cert-445610647/tls.key\\\\\\\"\\\\nI1013 17:24:34.860768 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1013 17:24:34.864611 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1013 17:24:34.864632 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1013 17:24:34.864654 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1013 17:24:34.864660 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1013 17:24:34.871230 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1013 17:24:34.871248 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1013 17:24:34.871273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 17:24:34.871290 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 17:24:34.871305 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1013 17:24:34.871314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1013 17:24:34.871324 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1013 17:24:34.871335 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1013 17:24:34.872239 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd536f783e3f8f974f25772994c3d00a01c2ba66997a6e06236f1b83b890f8f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00ad2086cbe25df22e52552acfb410e252eb17a610267b47eb7cfc5dced1c9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00ad2086cbe25df22e52552acfb410e252eb17a610267b47eb7cfc5dced1c9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:27Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:27 crc kubenswrapper[4720]: I1013 17:25:27.767088 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lxmjt" err="failed to patch status 
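The failed patches are logged as backslash-escaped JSON, because each patch travels inside the quoted err= field and the journal rendering adds its own layer of quoting. A sketch of recovering the structure from such a string; the excerpt is trimmed from the multus-lxmjt patch below, and lines with deeper nesting may need the unescaping applied more than once:

```python
import json

# Trimmed excerpt of one escaped patch string as it appears in the journal
# (backslash-escaped because the patch is embedded in a quoted err= field).
escaped = (
    '{\\"metadata\\":{\\"uid\\":\\"7b45ec2d-5bea-4007-a49f-224a866f93eb\\"},'
    '\\"status\\":{\\"phase\\":\\"Running\\"}}'
)

# One round of unescaping turns \" sequences back into plain JSON quotes.
patch = json.loads(escaped.replace('\\"', '"'))
print(patch["metadata"]["uid"])      # 7b45ec2d-5bea-4007-a49f-224a866f93eb
print(list(patch["status"].keys()))  # ['phase']
```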
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b45ec2d-5bea-4007-a49f-224a866f93eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c00868f87f1a3020a1db7e50377bbe90203d6e93799c8ec9f3100dd52a6c0cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c00868f87f1a3020a1db7e50377bbe90203d6e93799c8ec9f3100dd52a6c0cf6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-13T17:25:27Z\\\",\\\"message\\\":\\\"2025-10-13T17:24:42+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d76d90a0-c554-4f54-9b32-3c7f604179f6\\\\n2025-10-13T17:24:42+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d76d90a0-c554-4f54-9b32-3c7f604179f6 to /host/opt/cni/bin/\\\\n2025-10-13T17:24:42Z [verbose] multus-daemon started\\\\n2025-10-13T17:24:42Z [verbose] Readiness Indicator file check\\\\n2025-10-13T17:25:27Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xsrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lxmjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:27Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:27 crc kubenswrapper[4720]: I1013 17:25:27.792369 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8064812e-b6aa-4f56-81c9-16154c00abad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c28fde07efa17d20ddecd1c8a0da09a6fa09e98a90b14718565c00aef9c0d25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bbd41d9b0518c8b060b1d01c335b6c9923e7196624fbd49326133faaa2e36ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd8a4dfb390a1643edb4eb4bd01970eaffd8dab0f748effac9e7c691ef4a3ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://013dfade47eccae4339fca0d379d914f63502613f87f1bc621113f7008634475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69d511a66a84fb0fa90f426bd4609c2b3ad915571802c6c024e0370d6e1683a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1aefa46ee989416f0e4fd05f6f12031da637a354fadd7f6caef75ddc2c74ddf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9825f520cb9cff0f39359c2b7f6d8ef1e3163ec1c94f80c037ea4afb4300c5d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9825f520cb9cff0f39359c2b7f6d8ef1e3163ec1c94f80c037ea4afb4300c5d3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-13T17:25:13Z\\\",\\\"message\\\":\\\"_controller.go:776] Recording success event on pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI1013 17:25:13.141326 6449 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 2.698682ms)\\\\nI1013 17:25:13.141338 6449 ovnkube_controller.go:900] Cache entry expected pod with UID \\\\\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\\\\\" but failed to find it\\\\nI1013 17:25:13.141346 6449 ovnkube_controller.go:804] Add Logical Switch Port event expected pod with UID \\\\\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\\\\\" in cache\\\\nI1013 17:25:13.141945 6449 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1013 17:25:13.142064 6449 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1013 17:25:13.142100 6449 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1013 17:25:13.142115 6449 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1013 17:25:13.142133 6449 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1013 17:25:13.142159 6449 factory.go:656] Stopping watch factory\\\\nI1013 17:25:13.142173 6449 ovnkube.go:599] Stopped ovnkube\\\\nI1013 17:25:13.142239 6449 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1013 17:25:13.142339 6449 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T17:25:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pn6lz_openshift-ovn-kubernetes(8064812e-b6aa-4f56-81c9-16154c00abad)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34b1fffbebfbd3599bed22fe3fe80129f5b704560c54d09e9ab5e9ea8bba4513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:27Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:27 crc kubenswrapper[4720]: I1013 17:25:27.802626 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bvtrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b35c333b-0d4e-4d9a-a8fe-a1c6f5fbbf75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03411ae3786161605ed0cb08a21661c7c7a4cc93f91320586547b8ffab3e90ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g97k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bvtrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:27Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:27 crc kubenswrapper[4720]: I1013 17:25:27.811109 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:27 crc kubenswrapper[4720]: I1013 17:25:27.811150 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:27 crc kubenswrapper[4720]: I1013 17:25:27.811168 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:27 crc kubenswrapper[4720]: I1013 17:25:27.811220 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:27 crc kubenswrapper[4720]: I1013 17:25:27.811239 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:27Z","lastTransitionTime":"2025-10-13T17:25:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:27 crc kubenswrapper[4720]: I1013 17:25:27.815009 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7baf94d8-250c-4568-ab1c-a1429b8caf70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435ba16189d6d19892aaab158512c1a84a72103779e3cc4b2c634de344583c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e566f3b66d404dd8da93bba1745b9e67cd63532330591f5b735f6826c07eae2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ec64662f56c2609ebac1383a78840ba4e0d9fb1fe10e666169285d3852e6b40\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://32d8f079c2674e0c6bea1837755db23bc3205da6a01ac1896d6293226d9c6fb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:27Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:27 crc kubenswrapper[4720]: I1013 17:25:27.828059 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd8dd666-aa42-4598-bb52-c0cd9345384d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e96fe80c0a88f19c6c5705c7fd945c3282a104b7767db5c217b6364c045d649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f9d9053346b85f12f6cf781f1699dbf8aea670b7e1cc5f4fdc1ffac2c969712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d586c8b48f5d2ca87e3a758dd265611f11678f6fa8dd1118a859923c331c4f67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19c59ec68cb6306af37b90cd4583f5a1a266e93f77566d756223eecf7cdadb76\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19c59ec68cb6306af37b90cd4583f5a1a266e93f77566d756223eecf7cdadb76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:27Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:27 crc kubenswrapper[4720]: I1013 17:25:27.841628 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:27Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:27 crc kubenswrapper[4720]: I1013 17:25:27.854025 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://705766b44d481300d0fd8cc654aa2dc9046733e5a71ad3a7ab789f85e737dcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a69b8825692cbbb7cce0a52b1ccb7b137ce6c9f8a8d788fff95bb7228cb40e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:27Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:27 crc kubenswrapper[4720]: I1013 17:25:27.864449 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f85e4aab30e7f49ef05e48cc22fc22e0291e32604a8f74ae1cc2ea57b5889e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:27Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:27 crc kubenswrapper[4720]: I1013 17:25:27.877046 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f856022da70859465022db433585bd8cad12f8cd2aeae7943491d3f7e5049256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:27Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:27 crc kubenswrapper[4720]: I1013 17:25:27.888049 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pmjlm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f85220d6-f940-4538-806a-2b26bacd7b09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6315723f6cd375a392c8c427a737021b7e01415c902514b09ee43407749caa34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7795z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:39Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pmjlm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:27Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:27 crc kubenswrapper[4720]: I1013 17:25:27.913107 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:27 crc kubenswrapper[4720]: I1013 17:25:27.913145 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:27 crc kubenswrapper[4720]: I1013 17:25:27.913158 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:27 crc kubenswrapper[4720]: I1013 17:25:27.913176 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:27 crc kubenswrapper[4720]: I1013 17:25:27.913205 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:27Z","lastTransitionTime":"2025-10-13T17:25:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:28 crc kubenswrapper[4720]: I1013 17:25:28.015913 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:28 crc kubenswrapper[4720]: I1013 17:25:28.015974 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:28 crc kubenswrapper[4720]: I1013 17:25:28.015987 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:28 crc kubenswrapper[4720]: I1013 17:25:28.016022 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:28 crc kubenswrapper[4720]: I1013 17:25:28.016036 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:28Z","lastTransitionTime":"2025-10-13T17:25:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:28 crc kubenswrapper[4720]: I1013 17:25:28.118130 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:28 crc kubenswrapper[4720]: I1013 17:25:28.118169 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:28 crc kubenswrapper[4720]: I1013 17:25:28.118223 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:28 crc kubenswrapper[4720]: I1013 17:25:28.118272 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:28 crc kubenswrapper[4720]: I1013 17:25:28.118314 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:28Z","lastTransitionTime":"2025-10-13T17:25:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:28 crc kubenswrapper[4720]: I1013 17:25:28.168083 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c6ntg" Oct 13 17:25:28 crc kubenswrapper[4720]: E1013 17:25:28.168285 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
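
This is the condition driving every NodeNotReady flip in this section: the container runtime reports no CNI configuration under /etc/kubernetes/cni/net.d/ (on OpenShift, multus typically writes its config there only once its default delegate, here OVN-Kubernetes, is up). A quick stdlib Go equivalent of that check; the directory comes from the log, and treating .conf/.conflist/.json as config files is an assumption about naming conventions:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/kubernetes/cni/net.d" // path reported by kubelet above
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("cannot read CNI conf dir:", err)
		return
	}
	found := 0
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // assumed CNI config extensions
			fmt.Println("CNI config:", e.Name())
			found++
		}
	}
	if found == 0 {
		// Matches the condition kubelet keeps logging above.
		fmt.Println("no CNI configuration files found; network plugin not ready")
	}
}
```
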
pod="openshift-multus/network-metrics-daemon-c6ntg" podUID="c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61" Oct 13 17:25:28 crc kubenswrapper[4720]: I1013 17:25:28.221256 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:28 crc kubenswrapper[4720]: I1013 17:25:28.221288 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:28 crc kubenswrapper[4720]: I1013 17:25:28.221298 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:28 crc kubenswrapper[4720]: I1013 17:25:28.221315 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:28 crc kubenswrapper[4720]: I1013 17:25:28.221325 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:28Z","lastTransitionTime":"2025-10-13T17:25:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:28 crc kubenswrapper[4720]: I1013 17:25:28.323660 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:28 crc kubenswrapper[4720]: I1013 17:25:28.323715 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:28 crc kubenswrapper[4720]: I1013 17:25:28.323732 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:28 crc kubenswrapper[4720]: I1013 17:25:28.323757 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:28 crc kubenswrapper[4720]: I1013 17:25:28.323773 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:28Z","lastTransitionTime":"2025-10-13T17:25:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:28 crc kubenswrapper[4720]: I1013 17:25:28.426470 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:28 crc kubenswrapper[4720]: I1013 17:25:28.426513 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:28 crc kubenswrapper[4720]: I1013 17:25:28.426525 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:28 crc kubenswrapper[4720]: I1013 17:25:28.426542 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:28 crc kubenswrapper[4720]: I1013 17:25:28.426552 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:28Z","lastTransitionTime":"2025-10-13T17:25:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:28 crc kubenswrapper[4720]: I1013 17:25:28.528538 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:28 crc kubenswrapper[4720]: I1013 17:25:28.528583 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:28 crc kubenswrapper[4720]: I1013 17:25:28.528599 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:28 crc kubenswrapper[4720]: I1013 17:25:28.528622 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:28 crc kubenswrapper[4720]: I1013 17:25:28.528637 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:28Z","lastTransitionTime":"2025-10-13T17:25:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
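
Note the cadence of the repeated bursts: the setters.go:603 entries land at 17:25:27.811239, 17:25:27.913205, 17:25:28.016036, 17:25:28.118314 and so on, roughly every 100 ms, as the kubelet keeps re-running its status setters while the updates bounce off the webhook. Diffing klog timestamps directly is a quick way to surface such cadences; a sketch (klog headers carry only month and day plus wall time, so the parsed year is meaningless and only the deltas matter):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// klog headers use month+day then wall time: "I1013 17:25:27.811239".
	const layout = "0102 15:04:05.000000"
	stamps := []string{ // copied from the setters.go:603 entries in this log
		"1013 17:25:27.811239",
		"1013 17:25:27.913205",
		"1013 17:25:28.016036",
		"1013 17:25:28.118314",
	}
	var prev time.Time
	for i, s := range stamps {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		if i > 0 {
			fmt.Printf("%s (+%v)\n", s, t.Sub(prev))
		}
		prev = t
	}
}
```
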
Has your network provider started?"} Oct 13 17:25:28 crc kubenswrapper[4720]: I1013 17:25:28.598429 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lxmjt_7b45ec2d-5bea-4007-a49f-224a866f93eb/kube-multus/0.log" Oct 13 17:25:28 crc kubenswrapper[4720]: I1013 17:25:28.598513 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lxmjt" event={"ID":"7b45ec2d-5bea-4007-a49f-224a866f93eb","Type":"ContainerStarted","Data":"835c6ec8dc7ae3785a23ae45e5c9dc4b3bcc24428ca4c3865a6dd790d5956e74"} Oct 13 17:25:28 crc kubenswrapper[4720]: I1013 17:25:28.611454 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lxmjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b45ec2d-5bea-4007-a49f-224a866f93eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://835c6ec8dc7ae3785a23ae45e5c9dc4b3bcc24428ca4c3865a6dd790d5956e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c00868f87f1a3020a1db7e50377bbe90203d6e93799c8ec9f3100dd52a6c0cf6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-13T17:25:27Z\\\",\\\"message\\\":\\\"2025-10-13T17:24:42+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d76d90a0-c554-4f54-9b32-3c7f604179f6\\\\n2025-10-13T17:24:42+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d76d90a0-c554-4f54-9b32-3c7f604179f6 to /host/opt/cni/bin/\\\\n2025-10-13T17:24:42Z [verbose] multus-daemon started\\\\n2025-10-13T17:24:42Z [verbose] Readiness Indicator file check\\\\n2025-10-13T17:25:27Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xsrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lxmjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:28Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:28 crc kubenswrapper[4720]: I1013 17:25:28.628066 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8064812e-b6aa-4f56-81c9-16154c00abad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c28fde07efa17d20ddecd1c8a0da09a6fa09e98a90b14718565c00aef9c0d25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bbd41d9b0518c8b060b1d01c335b6c9923e7196624fbd49326133faaa2e36ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd8a4dfb390a1643edb4eb4bd01970eaffd8dab0f748effac9e7c691ef4a3ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://013dfade47eccae4339fca0d379d914f63502613f87f1bc621113f7008634475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69d511a66a84fb0fa90f426bd4609c2b3ad915571802c6c024e0370d6e1683a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1aefa46ee989416f0e4fd05f6f12031da637a354fadd7f6caef75ddc2c74ddf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9825f520cb9cff0f39359c2b7f6d8ef1e3163ec1c94f80c037ea4afb4300c5d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9825f520cb9cff0f39359c2b7f6d8ef1e3163ec1c94f80c037ea4afb4300c5d3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-13T17:25:13Z\\\",\\\"message\\\":\\\"_controller.go:776] Recording success event on pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI1013 17:25:13.141326 6449 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 2.698682ms)\\\\nI1013 17:25:13.141338 6449 ovnkube_controller.go:900] Cache entry expected pod with UID \\\\\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\\\\\" but failed to find it\\\\nI1013 17:25:13.141346 6449 ovnkube_controller.go:804] Add Logical Switch Port event expected pod with UID \\\\\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\\\\\" in cache\\\\nI1013 17:25:13.141945 6449 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1013 17:25:13.142064 6449 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1013 17:25:13.142100 6449 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1013 17:25:13.142115 6449 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1013 17:25:13.142133 6449 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1013 17:25:13.142159 6449 factory.go:656] Stopping watch factory\\\\nI1013 17:25:13.142173 6449 ovnkube.go:599] Stopped ovnkube\\\\nI1013 17:25:13.142239 6449 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1013 17:25:13.142339 6449 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T17:25:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pn6lz_openshift-ovn-kubernetes(8064812e-b6aa-4f56-81c9-16154c00abad)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34b1fffbebfbd3599bed22fe3fe80129f5b704560c54d09e9ab5e9ea8bba4513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:28Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:28 crc kubenswrapper[4720]: I1013 17:25:28.631293 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:28 crc kubenswrapper[4720]: I1013 17:25:28.631352 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:28 crc kubenswrapper[4720]: I1013 17:25:28.631375 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:28 crc kubenswrapper[4720]: I1013 17:25:28.631402 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:28 crc kubenswrapper[4720]: I1013 17:25:28.631423 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:28Z","lastTransitionTime":"2025-10-13T17:25:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:28 crc kubenswrapper[4720]: I1013 17:25:28.637518 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bvtrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b35c333b-0d4e-4d9a-a8fe-a1c6f5fbbf75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03411ae3786161605ed0cb08a21661c7c7a4cc93f91320586547b8ffab3e90ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g97k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bvtrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:28Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:28 crc kubenswrapper[4720]: I1013 17:25:28.648977 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea323a1a-c607-40cf-b61c-b7ff03399c45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88b8a46e2e2c2342bd22a4a9b42cadd0188ec83f23815773f6ffb46be79fdf83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7d875b0d7eb9f13bc697597a33f7196598ae679980b13842e92acf20477526f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e18d0e9df43d366a6e7088cc3d5bdc794882828f60f008eaf29e788e0141ac2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef2ba92cea782dd4fd31a511e1086777b58c00340014dff74ff6615b07bd10c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7977e20d8558810fb9f8f827830d0378de7d15c71340d396a95b506902c0962\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-13T17:24:34Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1013 17:24:28.736714 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1013 17:24:28.738546 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445610647/tls.crt::/tmp/serving-cert-445610647/tls.key\\\\\\\"\\\\nI1013 17:24:34.860768 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1013 17:24:34.864611 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1013 17:24:34.864632 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1013 17:24:34.864654 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1013 17:24:34.864660 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1013 17:24:34.871230 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1013 17:24:34.871248 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1013 17:24:34.871273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 17:24:34.871290 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 17:24:34.871305 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1013 17:24:34.871314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1013 17:24:34.871324 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1013 17:24:34.871335 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1013 17:24:34.872239 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd536f783e3f8f974f25772994c3d00a01c2ba66997a6e06236f1b83b890f8f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00ad2086cbe25df22e52552acfb410e252eb17a610267b47eb7cfc5dced1c9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00ad2086cbe25df22e52552acfb410e252eb17a610267b47eb7cfc5dced1c9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:28Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:28 crc kubenswrapper[4720]: I1013 17:25:28.659065 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd8dd666-aa42-4598-bb52-c0cd9345384d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e96fe80c0a88f19c6c5705c7fd945c3282a104b7767db5c217b6364c045d649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f9d9053346b85f12f6cf781f1699dbf8aea670b7e1cc5f4fdc1ffac2c969712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d586c8b48f5d2ca87e3a758dd265611f11678f6fa8dd1118a859923c331c4f67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19c59ec68cb6306af37b90cd4583f5a1a266e93f77566d756223eecf7cdadb76\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19c59ec68cb6306af37b90cd4583f5a1a266e93f77566d756223eecf7cdadb76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:28Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:28 crc kubenswrapper[4720]: I1013 17:25:28.670224 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:28Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:28 crc kubenswrapper[4720]: I1013 17:25:28.680941 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://705766b44d481300d0fd8cc654aa2dc9046733e5a71ad3a7ab789f85e737dcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a69b8825692cbbb7cce0a52b1ccb7b137ce6c9f8a8d788fff95bb7228cb40e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:28Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:28 crc kubenswrapper[4720]: I1013 17:25:28.692804 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f85e4aab30e7f49ef05e48cc22fc22e0291e32604a8f74ae1cc2ea57b5889e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:28Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:28 crc kubenswrapper[4720]: I1013 17:25:28.705160 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f856022da70859465022db433585bd8cad12f8cd2aeae7943491d3f7e5049256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:28Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:28 crc kubenswrapper[4720]: I1013 17:25:28.717555 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pmjlm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f85220d6-f940-4538-806a-2b26bacd7b09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6315723f6cd375a392c8c427a737021b7e01415c902514b09ee43407749caa34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7795z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:39Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pmjlm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:28Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:28 crc kubenswrapper[4720]: I1013 17:25:28.728808 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7baf94d8-250c-4568-ab1c-a1429b8caf70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435ba16189d6d19892aaab158512c1a84a72103779e3cc4b2c634de344583c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e566f3b66d404dd8da93bba1745b9e67cd63532330591f5b735f6826c07eae2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ec64662f56c2609ebac1383a78840ba4e0d9fb1fe10e666169285d3852e6b40\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://32d8f079c2674e0c6bea1837755db23bc3205da6a01ac1896d6293226d9c6fb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:28Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:28 crc kubenswrapper[4720]: I1013 17:25:28.733421 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:28 crc kubenswrapper[4720]: I1013 17:25:28.733485 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:28 crc kubenswrapper[4720]: I1013 17:25:28.733510 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:28 crc kubenswrapper[4720]: I1013 17:25:28.733539 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:28 crc kubenswrapper[4720]: I1013 17:25:28.733561 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:28Z","lastTransitionTime":"2025-10-13T17:25:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:28 crc kubenswrapper[4720]: I1013 17:25:28.744106 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5rgfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d7408e5-b529-4396-92b3-2fed275c3250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://439dea1ed9724a215fa1aa1f4a4bd84a4bb998f2f86814f5f4eb46989242f11c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c62b3a707852ca08b9d89356832535447d4f46ec09ab4a94cb00a86b1c1c5b03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c62b3a707852ca08b9d89356832535447d4f46ec09ab4a94cb00a86b1c1c5b03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb704e899e1b5a0193271209940163a17a1dcb93a2fc5d447beb8a081799897b\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb704e899e1b5a0193271209940163a17a1dcb93a2fc5d447beb8a081799897b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f42ff97f0a5fb40b52d5445a00eb8f70f8028610b76baf33117b425b821b5ec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f42ff97f0a5fb40b52d5445a00eb8f70f8028610b76baf33117b425b821b5ec2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://471bb77042e72bcac4bb28dd26f6151a7154bea0b3ec92272e405e9b22bc0170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://471bb77042e72bcac4bb28dd26f6151a7154bea0b3ec92272e405e9b22bc0170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20cb8981843cc39e8722f560f7a7e75483c93a30edffde9835533f8dc5b50587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20cb8981843cc39e8722f560f7a7e75483c93a30edffde9835533f8dc5b50587\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79851ed8c84027d1bb9eccb69bed004ae1cdbe934245c66325eafe42b12ea808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79851ed8c84027d1bb9eccb69bed004ae1cdbe934245c66325eafe42b12ea808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5rgfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:28Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:28 crc kubenswrapper[4720]: I1013 17:25:28.754747 4720 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-c6ntg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxbvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxbvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c6ntg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:28Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:28 crc kubenswrapper[4720]: I1013 17:25:28.769269 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce442c80-fcde-4b79-b6f9-f8f25771dfd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10e7a946185d6502e37a1ab60ad1eca36cce991f230188d22b2d6e2ea554f920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4cjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://864e330267c8cec8487e1dcf4a50bbb2ba37d81d80361f0873e6bf69de1ed721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4cjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-htwnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:28Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:28 crc kubenswrapper[4720]: I1013 17:25:28.782698 4720 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00fc1050-d713-484e-899e-9bd4e5d7b250\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://591ebf9a13f38e2f458aa34584be8fabc9115335a04af437a2291e7980a903ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18ddea8ad4714addb2f8431d0802d32a36f3d823bb123d4546bc6de1a3a017a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18ddea8ad4714addb2f8431d0802d32a36f3d823bb123d4546bc6de1a3a017a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:28Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:28 crc kubenswrapper[4720]: I1013 17:25:28.799962 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:28Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:28 crc kubenswrapper[4720]: I1013 17:25:28.815399 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:28Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:28 crc kubenswrapper[4720]: I1013 17:25:28.827172 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dnjrc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e6efca3-dcc8-488e-8fb1-e14ee0396158\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0f539bcc67139c5b3d51ab01a3114afc20fc44cf5e286dce2b4cad0ea0629c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g676g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c792a043326901c5cbcae8c96b168aee71b56f6e91d0edfc7f58abb5da9c9c28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g676g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dnjrc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:28Z is after 2025-08-24T17:21:41Z" Oct 13 
17:25:28 crc kubenswrapper[4720]: I1013 17:25:28.836420 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:28 crc kubenswrapper[4720]: I1013 17:25:28.836533 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:28 crc kubenswrapper[4720]: I1013 17:25:28.836552 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:28 crc kubenswrapper[4720]: I1013 17:25:28.836577 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:28 crc kubenswrapper[4720]: I1013 17:25:28.836593 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:28Z","lastTransitionTime":"2025-10-13T17:25:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:28 crc kubenswrapper[4720]: I1013 17:25:28.850655 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a1f1696-ed7f-4482-a300-ca25b68e09e3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2507a30017fbd9947ca456b84812f0b26baabf572768870f5a34b3c7699443cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc60672e282de4d3d6a2d8fd64306ec0fc0d032ade655f88afb0c0b9e0a8e589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9009
2272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9cf903094dae4e559bcef3e269240670b5d4693344d600779c394ce1e039b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06eea6556daedf913d87b8b0527b337070528b07cc026de878b4df332598a2d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b452d113d48729b6d4ab59f80138b15e7cb16cc16eef4ec9916901a067986d07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94a4c50dd473ea159477a3032070031c70d5a9217c054f826e3c0af6ae4df563\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94a4c50dd473ea159477a3032070031c70d5a9217c054f826e3c0af6ae4df563\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41221277cc046beb4578e4f6a0c7030df64342da1be667a920080a429836c657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41221277cc046beb4578e4f6a0c7030df64342da1be667a920080a429836c657\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d443225a5db6a200176bd16ba94dd92c1e81eef85d61cff623cfe7959077ef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d443225a5db6a200176bd16ba94dd92c1e81eef85d61cff623cfe7959077ef6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:28Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:28 crc kubenswrapper[4720]: I1013 17:25:28.939384 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:28 crc kubenswrapper[4720]: I1013 17:25:28.939461 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:28 crc kubenswrapper[4720]: I1013 17:25:28.939485 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:28 crc kubenswrapper[4720]: I1013 17:25:28.939515 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Oct 13 17:25:28 crc kubenswrapper[4720]: I1013 17:25:28.939538 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:28Z","lastTransitionTime":"2025-10-13T17:25:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:29 crc kubenswrapper[4720]: I1013 17:25:29.042562 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:29 crc kubenswrapper[4720]: I1013 17:25:29.042605 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:29 crc kubenswrapper[4720]: I1013 17:25:29.042613 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:29 crc kubenswrapper[4720]: I1013 17:25:29.042631 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:29 crc kubenswrapper[4720]: I1013 17:25:29.042641 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:29Z","lastTransitionTime":"2025-10-13T17:25:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:29 crc kubenswrapper[4720]: I1013 17:25:29.144581 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:29 crc kubenswrapper[4720]: I1013 17:25:29.144626 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:29 crc kubenswrapper[4720]: I1013 17:25:29.144637 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:29 crc kubenswrapper[4720]: I1013 17:25:29.144651 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:29 crc kubenswrapper[4720]: I1013 17:25:29.144661 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:29Z","lastTransitionTime":"2025-10-13T17:25:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:29 crc kubenswrapper[4720]: I1013 17:25:29.167335 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 17:25:29 crc kubenswrapper[4720]: I1013 17:25:29.167380 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 17:25:29 crc kubenswrapper[4720]: I1013 17:25:29.167413 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 17:25:29 crc kubenswrapper[4720]: E1013 17:25:29.167564 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 13 17:25:29 crc kubenswrapper[4720]: E1013 17:25:29.167649 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 13 17:25:29 crc kubenswrapper[4720]: E1013 17:25:29.167697 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 13 17:25:29 crc kubenswrapper[4720]: I1013 17:25:29.246881 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:29 crc kubenswrapper[4720]: I1013 17:25:29.246938 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:29 crc kubenswrapper[4720]: I1013 17:25:29.246955 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:29 crc kubenswrapper[4720]: I1013 17:25:29.246977 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:29 crc kubenswrapper[4720]: I1013 17:25:29.246993 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:29Z","lastTransitionTime":"2025-10-13T17:25:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:29 crc kubenswrapper[4720]: I1013 17:25:29.349402 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:29 crc kubenswrapper[4720]: I1013 17:25:29.349458 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:29 crc kubenswrapper[4720]: I1013 17:25:29.349477 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:29 crc kubenswrapper[4720]: I1013 17:25:29.349498 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:29 crc kubenswrapper[4720]: I1013 17:25:29.349515 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:29Z","lastTransitionTime":"2025-10-13T17:25:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:29 crc kubenswrapper[4720]: I1013 17:25:29.451830 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:29 crc kubenswrapper[4720]: I1013 17:25:29.451928 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:29 crc kubenswrapper[4720]: I1013 17:25:29.451949 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:29 crc kubenswrapper[4720]: I1013 17:25:29.451972 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:29 crc kubenswrapper[4720]: I1013 17:25:29.451989 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:29Z","lastTransitionTime":"2025-10-13T17:25:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:29 crc kubenswrapper[4720]: I1013 17:25:29.554579 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:29 crc kubenswrapper[4720]: I1013 17:25:29.554622 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:29 crc kubenswrapper[4720]: I1013 17:25:29.554631 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:29 crc kubenswrapper[4720]: I1013 17:25:29.554644 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:29 crc kubenswrapper[4720]: I1013 17:25:29.554655 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:29Z","lastTransitionTime":"2025-10-13T17:25:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:29 crc kubenswrapper[4720]: I1013 17:25:29.660419 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:29 crc kubenswrapper[4720]: I1013 17:25:29.660480 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:29 crc kubenswrapper[4720]: I1013 17:25:29.660489 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:29 crc kubenswrapper[4720]: I1013 17:25:29.660504 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:29 crc kubenswrapper[4720]: I1013 17:25:29.660515 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:29Z","lastTransitionTime":"2025-10-13T17:25:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:29 crc kubenswrapper[4720]: I1013 17:25:29.763000 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:29 crc kubenswrapper[4720]: I1013 17:25:29.763069 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:29 crc kubenswrapper[4720]: I1013 17:25:29.763089 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:29 crc kubenswrapper[4720]: I1013 17:25:29.763112 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:29 crc kubenswrapper[4720]: I1013 17:25:29.763130 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:29Z","lastTransitionTime":"2025-10-13T17:25:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:29 crc kubenswrapper[4720]: I1013 17:25:29.865418 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:29 crc kubenswrapper[4720]: I1013 17:25:29.865472 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:29 crc kubenswrapper[4720]: I1013 17:25:29.865488 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:29 crc kubenswrapper[4720]: I1013 17:25:29.865512 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:29 crc kubenswrapper[4720]: I1013 17:25:29.865529 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:29Z","lastTransitionTime":"2025-10-13T17:25:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:29 crc kubenswrapper[4720]: I1013 17:25:29.968235 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:29 crc kubenswrapper[4720]: I1013 17:25:29.968277 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:29 crc kubenswrapper[4720]: I1013 17:25:29.968289 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:29 crc kubenswrapper[4720]: I1013 17:25:29.968306 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:29 crc kubenswrapper[4720]: I1013 17:25:29.968316 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:29Z","lastTransitionTime":"2025-10-13T17:25:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:30 crc kubenswrapper[4720]: I1013 17:25:30.070769 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:30 crc kubenswrapper[4720]: I1013 17:25:30.070996 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:30 crc kubenswrapper[4720]: I1013 17:25:30.071105 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:30 crc kubenswrapper[4720]: I1013 17:25:30.071232 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:30 crc kubenswrapper[4720]: I1013 17:25:30.071312 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:30Z","lastTransitionTime":"2025-10-13T17:25:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:30 crc kubenswrapper[4720]: I1013 17:25:30.167993 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c6ntg" Oct 13 17:25:30 crc kubenswrapper[4720]: E1013 17:25:30.168128 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-c6ntg" podUID="c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61" Oct 13 17:25:30 crc kubenswrapper[4720]: I1013 17:25:30.173667 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:30 crc kubenswrapper[4720]: I1013 17:25:30.173797 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:30 crc kubenswrapper[4720]: I1013 17:25:30.173868 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:30 crc kubenswrapper[4720]: I1013 17:25:30.173936 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:30 crc kubenswrapper[4720]: I1013 17:25:30.173999 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:30Z","lastTransitionTime":"2025-10-13T17:25:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:30 crc kubenswrapper[4720]: I1013 17:25:30.275854 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:30 crc kubenswrapper[4720]: I1013 17:25:30.275910 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:30 crc kubenswrapper[4720]: I1013 17:25:30.275927 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:30 crc kubenswrapper[4720]: I1013 17:25:30.275952 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:30 crc kubenswrapper[4720]: I1013 17:25:30.275967 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:30Z","lastTransitionTime":"2025-10-13T17:25:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:30 crc kubenswrapper[4720]: I1013 17:25:30.378780 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:30 crc kubenswrapper[4720]: I1013 17:25:30.378877 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:30 crc kubenswrapper[4720]: I1013 17:25:30.378898 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:30 crc kubenswrapper[4720]: I1013 17:25:30.378960 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:30 crc kubenswrapper[4720]: I1013 17:25:30.378982 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:30Z","lastTransitionTime":"2025-10-13T17:25:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:30 crc kubenswrapper[4720]: I1013 17:25:30.481257 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:30 crc kubenswrapper[4720]: I1013 17:25:30.481300 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:30 crc kubenswrapper[4720]: I1013 17:25:30.481316 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:30 crc kubenswrapper[4720]: I1013 17:25:30.481339 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:30 crc kubenswrapper[4720]: I1013 17:25:30.481356 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:30Z","lastTransitionTime":"2025-10-13T17:25:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:30 crc kubenswrapper[4720]: I1013 17:25:30.583488 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:30 crc kubenswrapper[4720]: I1013 17:25:30.583550 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:30 crc kubenswrapper[4720]: I1013 17:25:30.583562 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:30 crc kubenswrapper[4720]: I1013 17:25:30.583580 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:30 crc kubenswrapper[4720]: I1013 17:25:30.583592 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:30Z","lastTransitionTime":"2025-10-13T17:25:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:30 crc kubenswrapper[4720]: I1013 17:25:30.685094 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:30 crc kubenswrapper[4720]: I1013 17:25:30.685459 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:30 crc kubenswrapper[4720]: I1013 17:25:30.685550 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:30 crc kubenswrapper[4720]: I1013 17:25:30.685635 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:30 crc kubenswrapper[4720]: I1013 17:25:30.685719 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:30Z","lastTransitionTime":"2025-10-13T17:25:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:30 crc kubenswrapper[4720]: I1013 17:25:30.788888 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:30 crc kubenswrapper[4720]: I1013 17:25:30.788967 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:30 crc kubenswrapper[4720]: I1013 17:25:30.788988 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:30 crc kubenswrapper[4720]: I1013 17:25:30.789013 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:30 crc kubenswrapper[4720]: I1013 17:25:30.789032 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:30Z","lastTransitionTime":"2025-10-13T17:25:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:30 crc kubenswrapper[4720]: I1013 17:25:30.891164 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:30 crc kubenswrapper[4720]: I1013 17:25:30.891270 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:30 crc kubenswrapper[4720]: I1013 17:25:30.891288 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:30 crc kubenswrapper[4720]: I1013 17:25:30.891313 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:30 crc kubenswrapper[4720]: I1013 17:25:30.891330 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:30Z","lastTransitionTime":"2025-10-13T17:25:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:30 crc kubenswrapper[4720]: I1013 17:25:30.994444 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:30 crc kubenswrapper[4720]: I1013 17:25:30.994504 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:30 crc kubenswrapper[4720]: I1013 17:25:30.994523 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:30 crc kubenswrapper[4720]: I1013 17:25:30.994546 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:30 crc kubenswrapper[4720]: I1013 17:25:30.994563 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:30Z","lastTransitionTime":"2025-10-13T17:25:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:31 crc kubenswrapper[4720]: I1013 17:25:31.097278 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:31 crc kubenswrapper[4720]: I1013 17:25:31.097463 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:31 crc kubenswrapper[4720]: I1013 17:25:31.097551 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:31 crc kubenswrapper[4720]: I1013 17:25:31.097630 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:31 crc kubenswrapper[4720]: I1013 17:25:31.097696 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:31Z","lastTransitionTime":"2025-10-13T17:25:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:31 crc kubenswrapper[4720]: I1013 17:25:31.167809 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 17:25:31 crc kubenswrapper[4720]: E1013 17:25:31.168066 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 13 17:25:31 crc kubenswrapper[4720]: I1013 17:25:31.167956 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 17:25:31 crc kubenswrapper[4720]: I1013 17:25:31.167817 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 17:25:31 crc kubenswrapper[4720]: E1013 17:25:31.168506 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 13 17:25:31 crc kubenswrapper[4720]: E1013 17:25:31.168336 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 13 17:25:31 crc kubenswrapper[4720]: I1013 17:25:31.199730 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:31 crc kubenswrapper[4720]: I1013 17:25:31.199871 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:31 crc kubenswrapper[4720]: I1013 17:25:31.199944 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:31 crc kubenswrapper[4720]: I1013 17:25:31.200015 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:31 crc kubenswrapper[4720]: I1013 17:25:31.200094 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:31Z","lastTransitionTime":"2025-10-13T17:25:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:31 crc kubenswrapper[4720]: I1013 17:25:31.302943 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:31 crc kubenswrapper[4720]: I1013 17:25:31.302995 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:31 crc kubenswrapper[4720]: I1013 17:25:31.303011 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:31 crc kubenswrapper[4720]: I1013 17:25:31.303034 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:31 crc kubenswrapper[4720]: I1013 17:25:31.303051 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:31Z","lastTransitionTime":"2025-10-13T17:25:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:31 crc kubenswrapper[4720]: I1013 17:25:31.405428 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:31 crc kubenswrapper[4720]: I1013 17:25:31.405495 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:31 crc kubenswrapper[4720]: I1013 17:25:31.405512 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:31 crc kubenswrapper[4720]: I1013 17:25:31.405536 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:31 crc kubenswrapper[4720]: I1013 17:25:31.405553 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:31Z","lastTransitionTime":"2025-10-13T17:25:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:31 crc kubenswrapper[4720]: I1013 17:25:31.508535 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:31 crc kubenswrapper[4720]: I1013 17:25:31.508592 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:31 crc kubenswrapper[4720]: I1013 17:25:31.508607 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:31 crc kubenswrapper[4720]: I1013 17:25:31.508633 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:31 crc kubenswrapper[4720]: I1013 17:25:31.508654 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:31Z","lastTransitionTime":"2025-10-13T17:25:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:31 crc kubenswrapper[4720]: I1013 17:25:31.610811 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:31 crc kubenswrapper[4720]: I1013 17:25:31.610850 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:31 crc kubenswrapper[4720]: I1013 17:25:31.610860 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:31 crc kubenswrapper[4720]: I1013 17:25:31.610876 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:31 crc kubenswrapper[4720]: I1013 17:25:31.610905 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:31Z","lastTransitionTime":"2025-10-13T17:25:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:31 crc kubenswrapper[4720]: I1013 17:25:31.713914 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:31 crc kubenswrapper[4720]: I1013 17:25:31.713980 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:31 crc kubenswrapper[4720]: I1013 17:25:31.713997 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:31 crc kubenswrapper[4720]: I1013 17:25:31.714020 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:31 crc kubenswrapper[4720]: I1013 17:25:31.714037 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:31Z","lastTransitionTime":"2025-10-13T17:25:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:31 crc kubenswrapper[4720]: I1013 17:25:31.817006 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:31 crc kubenswrapper[4720]: I1013 17:25:31.817082 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:31 crc kubenswrapper[4720]: I1013 17:25:31.817106 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:31 crc kubenswrapper[4720]: I1013 17:25:31.817215 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:31 crc kubenswrapper[4720]: I1013 17:25:31.817256 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:31Z","lastTransitionTime":"2025-10-13T17:25:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:31 crc kubenswrapper[4720]: I1013 17:25:31.921567 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:31 crc kubenswrapper[4720]: I1013 17:25:31.921626 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:31 crc kubenswrapper[4720]: I1013 17:25:31.921645 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:31 crc kubenswrapper[4720]: I1013 17:25:31.921667 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:31 crc kubenswrapper[4720]: I1013 17:25:31.921691 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:31Z","lastTransitionTime":"2025-10-13T17:25:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:32 crc kubenswrapper[4720]: I1013 17:25:32.024340 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:32 crc kubenswrapper[4720]: I1013 17:25:32.024405 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:32 crc kubenswrapper[4720]: I1013 17:25:32.024422 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:32 crc kubenswrapper[4720]: I1013 17:25:32.024448 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:32 crc kubenswrapper[4720]: I1013 17:25:32.024466 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:32Z","lastTransitionTime":"2025-10-13T17:25:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:32 crc kubenswrapper[4720]: I1013 17:25:32.129478 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:32 crc kubenswrapper[4720]: I1013 17:25:32.129559 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:32 crc kubenswrapper[4720]: I1013 17:25:32.129581 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:32 crc kubenswrapper[4720]: I1013 17:25:32.129624 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:32 crc kubenswrapper[4720]: I1013 17:25:32.129645 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:32Z","lastTransitionTime":"2025-10-13T17:25:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:32 crc kubenswrapper[4720]: I1013 17:25:32.167591 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c6ntg" Oct 13 17:25:32 crc kubenswrapper[4720]: E1013 17:25:32.167784 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-c6ntg" podUID="c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61" Oct 13 17:25:32 crc kubenswrapper[4720]: I1013 17:25:32.236071 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:32 crc kubenswrapper[4720]: I1013 17:25:32.236165 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:32 crc kubenswrapper[4720]: I1013 17:25:32.236183 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:32 crc kubenswrapper[4720]: I1013 17:25:32.236242 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:32 crc kubenswrapper[4720]: I1013 17:25:32.236260 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:32Z","lastTransitionTime":"2025-10-13T17:25:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:32 crc kubenswrapper[4720]: I1013 17:25:32.338947 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:32 crc kubenswrapper[4720]: I1013 17:25:32.339233 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:32 crc kubenswrapper[4720]: I1013 17:25:32.339367 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:32 crc kubenswrapper[4720]: I1013 17:25:32.339472 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:32 crc kubenswrapper[4720]: I1013 17:25:32.339598 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:32Z","lastTransitionTime":"2025-10-13T17:25:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:32 crc kubenswrapper[4720]: I1013 17:25:32.442432 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:32 crc kubenswrapper[4720]: I1013 17:25:32.442790 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:32 crc kubenswrapper[4720]: I1013 17:25:32.442926 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:32 crc kubenswrapper[4720]: I1013 17:25:32.443060 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:32 crc kubenswrapper[4720]: I1013 17:25:32.443173 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:32Z","lastTransitionTime":"2025-10-13T17:25:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:32 crc kubenswrapper[4720]: I1013 17:25:32.511069 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:32 crc kubenswrapper[4720]: I1013 17:25:32.511426 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:32 crc kubenswrapper[4720]: I1013 17:25:32.511577 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:32 crc kubenswrapper[4720]: I1013 17:25:32.511718 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:32 crc kubenswrapper[4720]: I1013 17:25:32.511834 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:32Z","lastTransitionTime":"2025-10-13T17:25:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:32 crc kubenswrapper[4720]: E1013 17:25:32.529577 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a530de0f-daad-4050-9522-69c64a451e75\\\",\\\"systemUUID\\\":\\\"00bb7c43-79d6-45b5-bd02-4b71a0ba6837\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:32Z is after 
2025-08-24T17:21:41Z" Oct 13 17:25:32 crc kubenswrapper[4720]: I1013 17:25:32.534435 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:32 crc kubenswrapper[4720]: I1013 17:25:32.534595 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:32 crc kubenswrapper[4720]: I1013 17:25:32.534694 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:32 crc kubenswrapper[4720]: I1013 17:25:32.534809 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:32 crc kubenswrapper[4720]: I1013 17:25:32.534928 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:32Z","lastTransitionTime":"2025-10-13T17:25:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:32 crc kubenswrapper[4720]: E1013 17:25:32.552750 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a530de0f-daad-4050-9522-69c64a451e75\\\",\\\"systemUUID\\\":\\\"00bb7c43-79d6-45b5-bd02-4b71a0ba6837\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:32Z is after 
2025-08-24T17:21:41Z" Oct 13 17:25:32 crc kubenswrapper[4720]: I1013 17:25:32.559590 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:32 crc kubenswrapper[4720]: I1013 17:25:32.559637 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:32 crc kubenswrapper[4720]: I1013 17:25:32.559654 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:32 crc kubenswrapper[4720]: I1013 17:25:32.559681 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:32 crc kubenswrapper[4720]: I1013 17:25:32.559698 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:32Z","lastTransitionTime":"2025-10-13T17:25:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:32 crc kubenswrapper[4720]: E1013 17:25:32.573767 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a530de0f-daad-4050-9522-69c64a451e75\\\",\\\"systemUUID\\\":\\\"00bb7c43-79d6-45b5-bd02-4b71a0ba6837\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:32Z is after 
2025-08-24T17:21:41Z" Oct 13 17:25:32 crc kubenswrapper[4720]: I1013 17:25:32.578104 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:32 crc kubenswrapper[4720]: I1013 17:25:32.578163 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:32 crc kubenswrapper[4720]: I1013 17:25:32.578181 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:32 crc kubenswrapper[4720]: I1013 17:25:32.578238 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:32 crc kubenswrapper[4720]: I1013 17:25:32.578257 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:32Z","lastTransitionTime":"2025-10-13T17:25:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:32 crc kubenswrapper[4720]: E1013 17:25:32.593757 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a530de0f-daad-4050-9522-69c64a451e75\\\",\\\"systemUUID\\\":\\\"00bb7c43-79d6-45b5-bd02-4b71a0ba6837\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:32Z is after 
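The condition object the kubelet logs in each "Node became not ready" entry is plain JSON. A minimal sketch (Go, stdlib only) that decodes that condition exactly as it appears above and prints the reason and message; the struct fields mirror only what is visible in the log:

package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// NodeCondition mirrors the fields visible in the kubelet's
// "Node became not ready" log entries above.
type NodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	// Condition copied verbatim from the log entry at 17:25:32.578257.
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:32Z","lastTransitionTime":"2025-10-13T17:25:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`

	var c NodeCondition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s=%s reason=%s\nmessage: %s\n", c.Type, c.Status, c.Reason, c.Message)
}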
2025-08-24T17:21:41Z" Oct 13 17:25:32 crc kubenswrapper[4720]: I1013 17:25:32.597404 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:32 crc kubenswrapper[4720]: I1013 17:25:32.597463 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:32 crc kubenswrapper[4720]: I1013 17:25:32.597479 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:32 crc kubenswrapper[4720]: I1013 17:25:32.597503 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:32 crc kubenswrapper[4720]: I1013 17:25:32.597522 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:32Z","lastTransitionTime":"2025-10-13T17:25:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:32 crc kubenswrapper[4720]: E1013 17:25:32.614670 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a530de0f-daad-4050-9522-69c64a451e75\\\",\\\"systemUUID\\\":\\\"00bb7c43-79d6-45b5-bd02-4b71a0ba6837\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:32Z is after 
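Every status-patch attempt above dies in the TLS handshake with the network-node-identity webhook: the serving certificate's notAfter (2025-08-24T17:21:41Z) is in the past. A minimal sketch (Go, stdlib only) of inspecting that certificate's validity window on the endpoint named in the error; the address comes from the log, and InsecureSkipVerify is set deliberately so the expired certificate can still be read:

package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	// Endpoint taken from the webhook error in the log.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
		// Skip verification on purpose: verification is exactly what
		// fails in the log, and we want to inspect the certificate anyway.
		InsecureSkipVerify: true,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Printf("subject:   %s\n", cert.Subject)
	fmt.Printf("notBefore: %s\nnotAfter:  %s\n", cert.NotBefore, cert.NotAfter)
	if time.Now().UTC().After(cert.NotAfter) {
		// Matches the x509 error in the log: current time is after notAfter.
		fmt.Println("certificate is EXPIRED")
	}
}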
2025-08-24T17:21:41Z" Oct 13 17:25:32 crc kubenswrapper[4720]: E1013 17:25:32.614885 4720 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 13 17:25:32 crc kubenswrapper[4720]: I1013 17:25:32.616361 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:32 crc kubenswrapper[4720]: I1013 17:25:32.616408 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:32 crc kubenswrapper[4720]: I1013 17:25:32.616424 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:32 crc kubenswrapper[4720]: I1013 17:25:32.616446 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:32 crc kubenswrapper[4720]: I1013 17:25:32.616463 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:32Z","lastTransitionTime":"2025-10-13T17:25:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:32 crc kubenswrapper[4720]: I1013 17:25:32.719096 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:32 crc kubenswrapper[4720]: I1013 17:25:32.719148 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:32 crc kubenswrapper[4720]: I1013 17:25:32.719170 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:32 crc kubenswrapper[4720]: I1013 17:25:32.719270 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:32 crc kubenswrapper[4720]: I1013 17:25:32.719297 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:32Z","lastTransitionTime":"2025-10-13T17:25:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:32 crc kubenswrapper[4720]: I1013 17:25:32.822096 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:32 crc kubenswrapper[4720]: I1013 17:25:32.822147 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:32 crc kubenswrapper[4720]: I1013 17:25:32.822164 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:32 crc kubenswrapper[4720]: I1013 17:25:32.822184 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:32 crc kubenswrapper[4720]: I1013 17:25:32.822225 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:32Z","lastTransitionTime":"2025-10-13T17:25:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:32 crc kubenswrapper[4720]: I1013 17:25:32.924697 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:32 crc kubenswrapper[4720]: I1013 17:25:32.925043 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:32 crc kubenswrapper[4720]: I1013 17:25:32.925156 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:32 crc kubenswrapper[4720]: I1013 17:25:32.925320 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:32 crc kubenswrapper[4720]: I1013 17:25:32.925571 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:32Z","lastTransitionTime":"2025-10-13T17:25:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:33 crc kubenswrapper[4720]: I1013 17:25:33.028459 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:33 crc kubenswrapper[4720]: I1013 17:25:33.028499 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:33 crc kubenswrapper[4720]: I1013 17:25:33.028508 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:33 crc kubenswrapper[4720]: I1013 17:25:33.028524 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:33 crc kubenswrapper[4720]: I1013 17:25:33.028533 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:33Z","lastTransitionTime":"2025-10-13T17:25:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:33 crc kubenswrapper[4720]: I1013 17:25:33.131879 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:33 crc kubenswrapper[4720]: I1013 17:25:33.131946 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:33 crc kubenswrapper[4720]: I1013 17:25:33.131969 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:33 crc kubenswrapper[4720]: I1013 17:25:33.131998 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:33 crc kubenswrapper[4720]: I1013 17:25:33.132022 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:33Z","lastTransitionTime":"2025-10-13T17:25:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:33 crc kubenswrapper[4720]: I1013 17:25:33.168102 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 17:25:33 crc kubenswrapper[4720]: E1013 17:25:33.168285 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 13 17:25:33 crc kubenswrapper[4720]: I1013 17:25:33.168451 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 17:25:33 crc kubenswrapper[4720]: E1013 17:25:33.168618 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 13 17:25:33 crc kubenswrapper[4720]: I1013 17:25:33.168692 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 17:25:33 crc kubenswrapper[4720]: E1013 17:25:33.168800 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 13 17:25:33 crc kubenswrapper[4720]: I1013 17:25:33.234381 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:33 crc kubenswrapper[4720]: I1013 17:25:33.234747 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:33 crc kubenswrapper[4720]: I1013 17:25:33.234893 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:33 crc kubenswrapper[4720]: I1013 17:25:33.235041 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:33 crc kubenswrapper[4720]: I1013 17:25:33.235167 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:33Z","lastTransitionTime":"2025-10-13T17:25:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:33 crc kubenswrapper[4720]: I1013 17:25:33.337552 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:33 crc kubenswrapper[4720]: I1013 17:25:33.337797 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:33 crc kubenswrapper[4720]: I1013 17:25:33.337870 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:33 crc kubenswrapper[4720]: I1013 17:25:33.337945 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:33 crc kubenswrapper[4720]: I1013 17:25:33.338020 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:33Z","lastTransitionTime":"2025-10-13T17:25:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:33 crc kubenswrapper[4720]: I1013 17:25:33.440500 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:33 crc kubenswrapper[4720]: I1013 17:25:33.440535 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:33 crc kubenswrapper[4720]: I1013 17:25:33.440546 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:33 crc kubenswrapper[4720]: I1013 17:25:33.440561 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:33 crc kubenswrapper[4720]: I1013 17:25:33.440573 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:33Z","lastTransitionTime":"2025-10-13T17:25:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:33 crc kubenswrapper[4720]: I1013 17:25:33.543478 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:33 crc kubenswrapper[4720]: I1013 17:25:33.543541 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:33 crc kubenswrapper[4720]: I1013 17:25:33.543558 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:33 crc kubenswrapper[4720]: I1013 17:25:33.543581 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:33 crc kubenswrapper[4720]: I1013 17:25:33.543599 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:33Z","lastTransitionTime":"2025-10-13T17:25:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:33 crc kubenswrapper[4720]: I1013 17:25:33.646464 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:33 crc kubenswrapper[4720]: I1013 17:25:33.646502 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:33 crc kubenswrapper[4720]: I1013 17:25:33.646511 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:33 crc kubenswrapper[4720]: I1013 17:25:33.646527 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:33 crc kubenswrapper[4720]: I1013 17:25:33.646536 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:33Z","lastTransitionTime":"2025-10-13T17:25:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:33 crc kubenswrapper[4720]: I1013 17:25:33.749185 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:33 crc kubenswrapper[4720]: I1013 17:25:33.749281 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:33 crc kubenswrapper[4720]: I1013 17:25:33.749306 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:33 crc kubenswrapper[4720]: I1013 17:25:33.749334 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:33 crc kubenswrapper[4720]: I1013 17:25:33.749354 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:33Z","lastTransitionTime":"2025-10-13T17:25:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:33 crc kubenswrapper[4720]: I1013 17:25:33.852063 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:33 crc kubenswrapper[4720]: I1013 17:25:33.852118 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:33 crc kubenswrapper[4720]: I1013 17:25:33.852140 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:33 crc kubenswrapper[4720]: I1013 17:25:33.852167 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:33 crc kubenswrapper[4720]: I1013 17:25:33.852218 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:33Z","lastTransitionTime":"2025-10-13T17:25:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:33 crc kubenswrapper[4720]: I1013 17:25:33.955513 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:33 crc kubenswrapper[4720]: I1013 17:25:33.955575 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:33 crc kubenswrapper[4720]: I1013 17:25:33.955592 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:33 crc kubenswrapper[4720]: I1013 17:25:33.955618 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:33 crc kubenswrapper[4720]: I1013 17:25:33.955635 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:33Z","lastTransitionTime":"2025-10-13T17:25:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:34 crc kubenswrapper[4720]: I1013 17:25:34.058136 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:34 crc kubenswrapper[4720]: I1013 17:25:34.058256 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:34 crc kubenswrapper[4720]: I1013 17:25:34.058282 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:34 crc kubenswrapper[4720]: I1013 17:25:34.058310 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:34 crc kubenswrapper[4720]: I1013 17:25:34.058332 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:34Z","lastTransitionTime":"2025-10-13T17:25:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:34 crc kubenswrapper[4720]: I1013 17:25:34.161053 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:34 crc kubenswrapper[4720]: I1013 17:25:34.161137 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:34 crc kubenswrapper[4720]: I1013 17:25:34.161160 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:34 crc kubenswrapper[4720]: I1013 17:25:34.161220 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:34 crc kubenswrapper[4720]: I1013 17:25:34.161243 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:34Z","lastTransitionTime":"2025-10-13T17:25:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:34 crc kubenswrapper[4720]: I1013 17:25:34.167301 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c6ntg" Oct 13 17:25:34 crc kubenswrapper[4720]: E1013 17:25:34.167421 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-c6ntg" podUID="c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61" Oct 13 17:25:34 crc kubenswrapper[4720]: I1013 17:25:34.263251 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:34 crc kubenswrapper[4720]: I1013 17:25:34.263292 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:34 crc kubenswrapper[4720]: I1013 17:25:34.263300 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:34 crc kubenswrapper[4720]: I1013 17:25:34.263317 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:34 crc kubenswrapper[4720]: I1013 17:25:34.263326 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:34Z","lastTransitionTime":"2025-10-13T17:25:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:34 crc kubenswrapper[4720]: I1013 17:25:34.366217 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:34 crc kubenswrapper[4720]: I1013 17:25:34.366794 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:34 crc kubenswrapper[4720]: I1013 17:25:34.366819 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:34 crc kubenswrapper[4720]: I1013 17:25:34.366841 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:34 crc kubenswrapper[4720]: I1013 17:25:34.366858 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:34Z","lastTransitionTime":"2025-10-13T17:25:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:34 crc kubenswrapper[4720]: I1013 17:25:34.469472 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:34 crc kubenswrapper[4720]: I1013 17:25:34.469544 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:34 crc kubenswrapper[4720]: I1013 17:25:34.469561 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:34 crc kubenswrapper[4720]: I1013 17:25:34.469585 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:34 crc kubenswrapper[4720]: I1013 17:25:34.469602 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:34Z","lastTransitionTime":"2025-10-13T17:25:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:34 crc kubenswrapper[4720]: I1013 17:25:34.572335 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:34 crc kubenswrapper[4720]: I1013 17:25:34.572397 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:34 crc kubenswrapper[4720]: I1013 17:25:34.572415 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:34 crc kubenswrapper[4720]: I1013 17:25:34.572438 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:34 crc kubenswrapper[4720]: I1013 17:25:34.572454 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:34Z","lastTransitionTime":"2025-10-13T17:25:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:34 crc kubenswrapper[4720]: I1013 17:25:34.674808 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:34 crc kubenswrapper[4720]: I1013 17:25:34.674850 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:34 crc kubenswrapper[4720]: I1013 17:25:34.674859 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:34 crc kubenswrapper[4720]: I1013 17:25:34.674873 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:34 crc kubenswrapper[4720]: I1013 17:25:34.674882 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:34Z","lastTransitionTime":"2025-10-13T17:25:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:34 crc kubenswrapper[4720]: I1013 17:25:34.777580 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:34 crc kubenswrapper[4720]: I1013 17:25:34.777657 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:34 crc kubenswrapper[4720]: I1013 17:25:34.777682 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:34 crc kubenswrapper[4720]: I1013 17:25:34.777769 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:34 crc kubenswrapper[4720]: I1013 17:25:34.777788 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:34Z","lastTransitionTime":"2025-10-13T17:25:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:34 crc kubenswrapper[4720]: I1013 17:25:34.880114 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:34 crc kubenswrapper[4720]: I1013 17:25:34.880172 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:34 crc kubenswrapper[4720]: I1013 17:25:34.880224 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:34 crc kubenswrapper[4720]: I1013 17:25:34.880253 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:34 crc kubenswrapper[4720]: I1013 17:25:34.880270 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:34Z","lastTransitionTime":"2025-10-13T17:25:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:34 crc kubenswrapper[4720]: I1013 17:25:34.982826 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:34 crc kubenswrapper[4720]: I1013 17:25:34.982877 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:34 crc kubenswrapper[4720]: I1013 17:25:34.982893 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:34 crc kubenswrapper[4720]: I1013 17:25:34.982916 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:34 crc kubenswrapper[4720]: I1013 17:25:34.982933 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:34Z","lastTransitionTime":"2025-10-13T17:25:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:35 crc kubenswrapper[4720]: I1013 17:25:35.085473 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:35 crc kubenswrapper[4720]: I1013 17:25:35.085516 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:35 crc kubenswrapper[4720]: I1013 17:25:35.085526 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:35 crc kubenswrapper[4720]: I1013 17:25:35.085541 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:35 crc kubenswrapper[4720]: I1013 17:25:35.085553 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:35Z","lastTransitionTime":"2025-10-13T17:25:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:35 crc kubenswrapper[4720]: I1013 17:25:35.167589 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 17:25:35 crc kubenswrapper[4720]: I1013 17:25:35.167598 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 17:25:35 crc kubenswrapper[4720]: I1013 17:25:35.167619 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 17:25:35 crc kubenswrapper[4720]: E1013 17:25:35.167944 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 13 17:25:35 crc kubenswrapper[4720]: E1013 17:25:35.167784 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 13 17:25:35 crc kubenswrapper[4720]: E1013 17:25:35.168089 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 13 17:25:35 crc kubenswrapper[4720]: I1013 17:25:35.181990 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dnjrc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e6efca3-dcc8-488e-8fb1-e14ee0396158\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0f539bcc67139c5b3d51ab01a3114afc20fc44cf5e286dce2b4cad0ea0629c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g676g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c792a043326901c5cbcae8c96b168aee71b56f6e91d0edfc7f58abb5da9c9c28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g676g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:52Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dnjrc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:35Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:35 crc kubenswrapper[4720]: I1013 17:25:35.187755 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:35 crc kubenswrapper[4720]: I1013 17:25:35.187817 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:35 crc kubenswrapper[4720]: I1013 17:25:35.187840 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:35 crc kubenswrapper[4720]: I1013 17:25:35.187874 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:35 crc kubenswrapper[4720]: I1013 17:25:35.187896 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:35Z","lastTransitionTime":"2025-10-13T17:25:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:35 crc kubenswrapper[4720]: I1013 17:25:35.201634 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a1f1696-ed7f-4482-a300-ca25b68e09e3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2507a30017fbd9947ca456b84812f0b26baabf572768870f5a34b3c7699443cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/et
cd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc60672e282de4d3d6a2d8fd64306ec0fc0d032ade655f88afb0c0b9e0a8e589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9cf903094dae4e559bcef3e269240670b5d4693344d600779c394ce1e039b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06eea6556daedf913d87b8b0527b337070528b07cc026de878b4df332598a2d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b452d113d48729b6d4ab59f80138b15e7cb16cc16eef4ec9916901a067986d07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\
\\"cri-o://94a4c50dd473ea159477a3032070031c70d5a9217c054f826e3c0af6ae4df563\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94a4c50dd473ea159477a3032070031c70d5a9217c054f826e3c0af6ae4df563\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41221277cc046beb4578e4f6a0c7030df64342da1be667a920080a429836c657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41221277cc046beb4578e4f6a0c7030df64342da1be667a920080a429836c657\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d443225a5db6a200176bd16ba94dd92c1e81eef85d61cff623cfe7959077ef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d443225a5db6a200176bd16ba94dd92c1e81eef85d61cff623cfe7959077ef6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:35Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:35 crc kubenswrapper[4720]: I1013 17:25:35.215115 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00fc1050-d713-484e-899e-9bd4e5d7b250\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://591ebf9a13f38e2f458aa34584be8fabc9115335a04af437a2291e7980a903ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18ddea8ad4714addb2f8431d0802d32a36f3d823bb123d4546bc6de1a3a017a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18ddea8ad4714addb2f8431d0802d32a36f3d823bb123d4546bc6de1a3a017a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:35Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:35 crc kubenswrapper[4720]: I1013 17:25:35.233954 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:35Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:35 crc kubenswrapper[4720]: I1013 17:25:35.250535 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:35Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:35 crc kubenswrapper[4720]: I1013 17:25:35.264747 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea323a1a-c607-40cf-b61c-b7ff03399c45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88b8a46e2e2c2342bd22a4a9b42cadd0188ec83f23815773f6ffb46be79fdf83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"container
ID\\\":\\\"cri-o://b7d875b0d7eb9f13bc697597a33f7196598ae679980b13842e92acf20477526f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e18d0e9df43d366a6e7088cc3d5bdc794882828f60f008eaf29e788e0141ac2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef2ba92cea782dd4fd31a511e1086777b58c00340014dff74ff6615b07bd10c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7977e20d8558810fb9f8f827830d0378de7d15c71340d396a95b506902c0962\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-13T17:24:34Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1013 17:24:28.736714 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1013 17:24:28.738546 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445610647/tls.crt::/tmp/serving-cert-445610647/tls.key\\\\\\\"\\\\nI1013 17:24:34.860768 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1013 17:24:34.864611 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1013 17:24:34.864632 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1013 17:24:34.864654 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1013 17:24:34.864660 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1013 17:24:34.871230 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1013 17:24:34.871248 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information 
is complete\\\\nW1013 17:24:34.871273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 17:24:34.871290 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 17:24:34.871305 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1013 17:24:34.871314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1013 17:24:34.871324 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1013 17:24:34.871335 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1013 17:24:34.872239 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd536f783e3f8f974f25772994c3d00a01c2ba66997a6e06236f1b83b890f8f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00ad2086cbe25df22e52552acfb410e252eb17a610267b47eb7cfc5dced1c9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00ad2086cbe25df22e52552acfb410e252eb17a610267b47eb7cfc5dced1c9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:35Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:35 crc kubenswrapper[4720]: I1013 
17:25:35.282951 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lxmjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b45ec2d-5bea-4007-a49f-224a866f93eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://835c6ec8dc7ae3785a23ae45e5c9dc4b3bcc24428ca4c3865a6dd790d5956e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c00868f87f1a3020a1db7e50377bbe90203d6e93799c8ec9f3100dd52a6c0cf6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-13T17:25:27Z\\\",\\\"message\\\":\\\"2025-10-13T17:24:42+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d76d90a0-c554-4f54-9b32-3c7f604179f6\\\\n2025-10-13T17:24:42+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d76d90a0-c554-4f54-9b32-3c7f604179f6 to /host/opt/cni/bin/\\\\n2025-10-13T17:24:42Z [verbose] multus-daemon started\\\\n2025-10-13T17:24:42Z [verbose] Readiness Indicator file check\\\\n2025-10-13T17:25:27Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xsrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lxmjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:35Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:35 crc kubenswrapper[4720]: I1013 17:25:35.290278 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:35 crc kubenswrapper[4720]: I1013 17:25:35.290310 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:35 crc kubenswrapper[4720]: I1013 17:25:35.290320 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:35 crc kubenswrapper[4720]: I1013 17:25:35.290336 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:35 crc kubenswrapper[4720]: I1013 17:25:35.290346 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:35Z","lastTransitionTime":"2025-10-13T17:25:35Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:35 crc kubenswrapper[4720]: I1013 17:25:35.309307 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8064812e-b6aa-4f56-81c9-16154c00abad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c28fde07efa17d20ddecd1c8a0da09a6fa09e98a90b14718565c00aef9c0d25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bbd41d9b0518c8b060b1d01c335b6c9923e7196624fbd49326133faaa2e36ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/s
ecrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd8a4dfb390a1643edb4eb4bd01970eaffd8dab0f748effac9e7c691ef4a3ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://013dfade47eccae4339fca0d379d914f63502613f87f1bc621113f7008634475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69d511a66a84fb0fa90f426bd4609c2b3ad915571802c6c024e0370d6e1683a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1aefa46ee989416f0e4fd05f6f12031da637a354fadd7f6caef75ddc2c74ddf3\\
\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9825f520cb9cff0f39359c2b7f6d8ef1e3163ec1c94f80c037ea4afb4300c5d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9825f520cb9cff0f39359c2b7f6d8ef1e3163ec1c94f80c037ea4afb4300c5d3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-13T17:25:13Z\\\",\\\"message\\\":\\\"_controller.go:776] Recording success event on pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI1013 17:25:13.141326 6449 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 2.698682ms)\\\\nI1013 17:25:13.141338 6449 ovnkube_controller.go:900] Cache entry expected pod with UID \\\\\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\\\\\" but failed to find it\\\\nI1013 17:25:13.141346 6449 ovnkube_controller.go:804] Add Logical Switch Port event expected pod with UID \\\\\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\\\\\" in cache\\\\nI1013 17:25:13.141945 6449 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1013 17:25:13.142064 6449 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1013 17:25:13.142100 6449 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1013 17:25:13.142115 6449 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1013 17:25:13.142133 6449 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1013 17:25:13.142159 6449 factory.go:656] Stopping watch factory\\\\nI1013 17:25:13.142173 6449 ovnkube.go:599] Stopped ovnkube\\\\nI1013 17:25:13.142239 6449 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1013 17:25:13.142339 6449 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T17:25:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-pn6lz_openshift-ovn-kubernetes(8064812e-b6aa-4f56-81c9-16154c00abad)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34b1fffbebfbd3599bed22fe3fe80129f5b704560c54d09e9ab5e9ea8bba4513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:35Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:35 crc kubenswrapper[4720]: I1013 17:25:35.324520 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bvtrx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b35c333b-0d4e-4d9a-a8fe-a1c6f5fbbf75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03411ae3786161605ed0cb08a21661c7c7a4cc93f91320586547b8ffab3e90ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g97k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bvtrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:35Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:35 crc kubenswrapper[4720]: I1013 17:25:35.339622 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f85e4aab30e7f49ef05e48cc22fc22e0291e32604a8f74ae1cc2ea57b5889e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:35Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:35 crc kubenswrapper[4720]: I1013 17:25:35.358075 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f856022da70859465022db433585bd8cad12f8cd2aeae7943491d3f7e5049256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:35Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:35 crc kubenswrapper[4720]: I1013 17:25:35.371002 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pmjlm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f85220d6-f940-4538-806a-2b26bacd7b09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6315723f6cd375a392c8c427a737021b7e01415c902514b09ee43407749caa34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7795z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:39Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pmjlm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:35Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:35 crc kubenswrapper[4720]: I1013 17:25:35.388223 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7baf94d8-250c-4568-ab1c-a1429b8caf70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435ba16189d6d19892aaab158512c1a84a72103779e3cc4b2c634de344583c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e566f3b66d404dd8da93bba1745b9e67cd63532330591f5b735f6826c07eae2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ec64662f56c2609ebac1383a78840ba4e0d9fb1fe10e666169285d3852e6b40\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://32d8f079c2674e0c6bea1837755db23bc3205da6a01ac1896d6293226d9c6fb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:35Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:35 crc kubenswrapper[4720]: I1013 17:25:35.394675 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:35 crc kubenswrapper[4720]: I1013 17:25:35.394714 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:35 crc kubenswrapper[4720]: I1013 17:25:35.394725 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:35 crc kubenswrapper[4720]: I1013 17:25:35.394743 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:35 crc kubenswrapper[4720]: I1013 17:25:35.394760 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:35Z","lastTransitionTime":"2025-10-13T17:25:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:35 crc kubenswrapper[4720]: I1013 17:25:35.408549 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd8dd666-aa42-4598-bb52-c0cd9345384d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e96fe80c0a88f19c6c5705c7fd945c3282a104b7767db5c217b6364c045d649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f9d9053346b85f12f6cf781f1699dbf8aea670b7e1cc5f4fdc1ffac2c969712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d586c8b48f5d2ca87e3a758dd265611f11678f6fa8dd1118a859923c331c4f67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19c59ec68cb6306af37b90cd4583f5a1a266e93f77566d756223eecf7cdadb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19c59ec68cb6306af37b90cd4583f5a1a266e93f77566d756223eecf7cdadb76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:35Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:35 crc kubenswrapper[4720]: I1013 17:25:35.421120 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:35Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:35 crc kubenswrapper[4720]: I1013 17:25:35.432362 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://705766b44d481300d0fd8cc654aa2dc9046733e5a71ad3a7ab789f85e737dcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a69b8825692cbbb7cce0a52b1ccb7b137ce6c9f8a8d788fff95bb7228cb40e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:35Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:35 crc kubenswrapper[4720]: I1013 17:25:35.446012 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce442c80-fcde-4b79-b6f9-f8f25771dfd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10e7a946185d6502e37a1ab60ad1eca36cce991f230188d22b2d6e2ea554f920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4cjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://864e330267c8cec8487e1dcf4a50bbb2ba37d81d80361f0873e6bf69de1ed721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4cjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-htwnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:35Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:35 crc kubenswrapper[4720]: I1013 17:25:35.466811 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5rgfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d7408e5-b529-4396-92b3-2fed275c3250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://439dea1ed9724a215fa1aa1f4a4bd84a4bb998f2f86814f5f4eb46989242f11c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c62b3a707852ca08b9d89356832535447d4f46ec09ab4a94cb00a86b1c1c5b03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c62b3a707852ca08b9d89356832535447d4f46ec09ab4a94cb00a86b1c1c5b03\\
\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb704e899e1b5a0193271209940163a17a1dcb93a2fc5d447beb8a081799897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb704e899e1b5a0193271209940163a17a1dcb93a2fc5d447beb8a081799897b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f42ff97f0a5fb40b52d5445a00eb8f70f8028610b76baf33117b425b821b5ec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f42ff97f0a5fb40b52d5445a00eb8f70f8028610b76baf33117b425b821b5ec2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://471bb77042e72bcac4bb28dd26f6151a7154bea0b3ec92272e405e9b22bc0170\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://471bb77042e72bcac4bb28dd26f6151a7154bea0b3ec92272e405e9b22bc0170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20cb8981843cc39e8722f560f7a7e75483c93a30edffde9835533f8dc5b50587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20cb8981843cc39e8722f560f7a7e75483c93a30edffde9835533f8dc5b50587\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79851ed8c84027d1bb9eccb69bed004ae1cdbe934245c66325eafe42b12ea808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79851ed8c84027d1bb9eccb69bed004ae1cdbe934245c66325eafe42b12ea808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5rgfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:35Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:35 crc kubenswrapper[4720]: I1013 17:25:35.479633 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c6ntg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxbvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxbvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:54Z\\\"}}\" for pod 
\"openshift-multus\"/\"network-metrics-daemon-c6ntg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:35Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:35 crc kubenswrapper[4720]: I1013 17:25:35.497773 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:35 crc kubenswrapper[4720]: I1013 17:25:35.497940 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:35 crc kubenswrapper[4720]: I1013 17:25:35.498078 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:35 crc kubenswrapper[4720]: I1013 17:25:35.498244 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:35 crc kubenswrapper[4720]: I1013 17:25:35.498378 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:35Z","lastTransitionTime":"2025-10-13T17:25:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:35 crc kubenswrapper[4720]: I1013 17:25:35.600097 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:35 crc kubenswrapper[4720]: I1013 17:25:35.600390 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:35 crc kubenswrapper[4720]: I1013 17:25:35.600533 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:35 crc kubenswrapper[4720]: I1013 17:25:35.600680 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:35 crc kubenswrapper[4720]: I1013 17:25:35.600937 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:35Z","lastTransitionTime":"2025-10-13T17:25:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Oct 13 17:25:35 crc kubenswrapper[4720]: I1013 17:25:35.704142 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 17:25:35 crc kubenswrapper[4720]: I1013 17:25:35.705837 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 17:25:35 crc kubenswrapper[4720]: I1013 17:25:35.706025 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 17:25:35 crc kubenswrapper[4720]: I1013 17:25:35.706161 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 17:25:35 crc kubenswrapper[4720]: I1013 17:25:35.706336 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:35Z","lastTransitionTime":"2025-10-13T17:25:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 17:25:35 crc kubenswrapper[4720]: I1013 17:25:35.809288 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 17:25:35 crc kubenswrapper[4720]: I1013 17:25:35.809576 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 17:25:35 crc kubenswrapper[4720]: I1013 17:25:35.809720 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 17:25:35 crc kubenswrapper[4720]: I1013 17:25:35.809862 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 17:25:35 crc kubenswrapper[4720]: I1013 17:25:35.809978 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:35Z","lastTransitionTime":"2025-10-13T17:25:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 17:25:35 crc kubenswrapper[4720]: I1013 17:25:35.912649 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 17:25:35 crc kubenswrapper[4720]: I1013 17:25:35.912708 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 17:25:35 crc kubenswrapper[4720]: I1013 17:25:35.912724 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 17:25:35 crc kubenswrapper[4720]: I1013 17:25:35.912749 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 17:25:35 crc kubenswrapper[4720]: I1013 17:25:35.912765 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:35Z","lastTransitionTime":"2025-10-13T17:25:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 17:25:36 crc kubenswrapper[4720]: I1013 17:25:36.015994 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 17:25:36 crc kubenswrapper[4720]: I1013 17:25:36.016046 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 17:25:36 crc kubenswrapper[4720]: I1013 17:25:36.016063 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 17:25:36 crc kubenswrapper[4720]: I1013 17:25:36.016085 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 17:25:36 crc kubenswrapper[4720]: I1013 17:25:36.016102 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:36Z","lastTransitionTime":"2025-10-13T17:25:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 17:25:36 crc kubenswrapper[4720]: I1013 17:25:36.119042 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 17:25:36 crc kubenswrapper[4720]: I1013 17:25:36.119099 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 17:25:36 crc kubenswrapper[4720]: I1013 17:25:36.119116 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 17:25:36 crc kubenswrapper[4720]: I1013 17:25:36.119139 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 17:25:36 crc kubenswrapper[4720]: I1013 17:25:36.119159 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:36Z","lastTransitionTime":"2025-10-13T17:25:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 17:25:36 crc kubenswrapper[4720]: I1013 17:25:36.168124 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c6ntg"
Oct 13 17:25:36 crc kubenswrapper[4720]: E1013 17:25:36.168293 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c6ntg" podUID="c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61"
Oct 13 17:25:36 crc kubenswrapper[4720]: I1013 17:25:36.220837 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 17:25:36 crc kubenswrapper[4720]: I1013 17:25:36.220864 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 17:25:36 crc kubenswrapper[4720]: I1013 17:25:36.220873 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 17:25:36 crc kubenswrapper[4720]: I1013 17:25:36.220885 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 17:25:36 crc kubenswrapper[4720]: I1013 17:25:36.220893 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:36Z","lastTransitionTime":"2025-10-13T17:25:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 17:25:36 crc kubenswrapper[4720]: I1013 17:25:36.323035 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 17:25:36 crc kubenswrapper[4720]: I1013 17:25:36.323066 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 17:25:36 crc kubenswrapper[4720]: I1013 17:25:36.323077 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 17:25:36 crc kubenswrapper[4720]: I1013 17:25:36.323091 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 17:25:36 crc kubenswrapper[4720]: I1013 17:25:36.323100 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:36Z","lastTransitionTime":"2025-10-13T17:25:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 17:25:36 crc kubenswrapper[4720]: I1013 17:25:36.425407 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 17:25:36 crc kubenswrapper[4720]: I1013 17:25:36.425450 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 17:25:36 crc kubenswrapper[4720]: I1013 17:25:36.425466 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 17:25:36 crc kubenswrapper[4720]: I1013 17:25:36.425488 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 17:25:36 crc kubenswrapper[4720]: I1013 17:25:36.425506 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:36Z","lastTransitionTime":"2025-10-13T17:25:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 17:25:36 crc kubenswrapper[4720]: I1013 17:25:36.528430 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 17:25:36 crc kubenswrapper[4720]: I1013 17:25:36.528466 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 17:25:36 crc kubenswrapper[4720]: I1013 17:25:36.528474 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 17:25:36 crc kubenswrapper[4720]: I1013 17:25:36.528488 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 17:25:36 crc kubenswrapper[4720]: I1013 17:25:36.528498 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:36Z","lastTransitionTime":"2025-10-13T17:25:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 17:25:36 crc kubenswrapper[4720]: I1013 17:25:36.630950 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 17:25:36 crc kubenswrapper[4720]: I1013 17:25:36.630992 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 17:25:36 crc kubenswrapper[4720]: I1013 17:25:36.631002 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 17:25:36 crc kubenswrapper[4720]: I1013 17:25:36.631016 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 17:25:36 crc kubenswrapper[4720]: I1013 17:25:36.631025 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:36Z","lastTransitionTime":"2025-10-13T17:25:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 17:25:36 crc kubenswrapper[4720]: I1013 17:25:36.733322 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 17:25:36 crc kubenswrapper[4720]: I1013 17:25:36.733382 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 17:25:36 crc kubenswrapper[4720]: I1013 17:25:36.733398 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 17:25:36 crc kubenswrapper[4720]: I1013 17:25:36.733422 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 17:25:36 crc kubenswrapper[4720]: I1013 17:25:36.733438 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:36Z","lastTransitionTime":"2025-10-13T17:25:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 17:25:36 crc kubenswrapper[4720]: I1013 17:25:36.836507 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 17:25:36 crc kubenswrapper[4720]: I1013 17:25:36.836562 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 17:25:36 crc kubenswrapper[4720]: I1013 17:25:36.836578 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 17:25:36 crc kubenswrapper[4720]: I1013 17:25:36.836606 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 17:25:36 crc kubenswrapper[4720]: I1013 17:25:36.836622 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:36Z","lastTransitionTime":"2025-10-13T17:25:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 17:25:36 crc kubenswrapper[4720]: I1013 17:25:36.939273 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 17:25:36 crc kubenswrapper[4720]: I1013 17:25:36.939376 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 17:25:36 crc kubenswrapper[4720]: I1013 17:25:36.939387 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 17:25:36 crc kubenswrapper[4720]: I1013 17:25:36.939402 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 17:25:36 crc kubenswrapper[4720]: I1013 17:25:36.939436 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:36Z","lastTransitionTime":"2025-10-13T17:25:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 17:25:37 crc kubenswrapper[4720]: I1013 17:25:37.041662 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 17:25:37 crc kubenswrapper[4720]: I1013 17:25:37.042067 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 17:25:37 crc kubenswrapper[4720]: I1013 17:25:37.042239 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 17:25:37 crc kubenswrapper[4720]: I1013 17:25:37.042404 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 17:25:37 crc kubenswrapper[4720]: I1013 17:25:37.042561 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:37Z","lastTransitionTime":"2025-10-13T17:25:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 17:25:37 crc kubenswrapper[4720]: I1013 17:25:37.145307 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 17:25:37 crc kubenswrapper[4720]: I1013 17:25:37.145364 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 17:25:37 crc kubenswrapper[4720]: I1013 17:25:37.145381 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 17:25:37 crc kubenswrapper[4720]: I1013 17:25:37.145404 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 17:25:37 crc kubenswrapper[4720]: I1013 17:25:37.145420 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:37Z","lastTransitionTime":"2025-10-13T17:25:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 17:25:37 crc kubenswrapper[4720]: I1013 17:25:37.167998 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 13 17:25:37 crc kubenswrapper[4720]: I1013 17:25:37.168054 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 13 17:25:37 crc kubenswrapper[4720]: I1013 17:25:37.168066 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 13 17:25:37 crc kubenswrapper[4720]: E1013 17:25:37.168135 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 13 17:25:37 crc kubenswrapper[4720]: E1013 17:25:37.168384 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 13 17:25:37 crc kubenswrapper[4720]: E1013 17:25:37.168517 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 13 17:25:37 crc kubenswrapper[4720]: I1013 17:25:37.248721 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 17:25:37 crc kubenswrapper[4720]: I1013 17:25:37.248782 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 17:25:37 crc kubenswrapper[4720]: I1013 17:25:37.248793 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 17:25:37 crc kubenswrapper[4720]: I1013 17:25:37.248808 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 17:25:37 crc kubenswrapper[4720]: I1013 17:25:37.248818 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:37Z","lastTransitionTime":"2025-10-13T17:25:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 17:25:37 crc kubenswrapper[4720]: I1013 17:25:37.350715 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 17:25:37 crc kubenswrapper[4720]: I1013 17:25:37.350787 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 17:25:37 crc kubenswrapper[4720]: I1013 17:25:37.350810 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 17:25:37 crc kubenswrapper[4720]: I1013 17:25:37.350838 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 17:25:37 crc kubenswrapper[4720]: I1013 17:25:37.350860 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:37Z","lastTransitionTime":"2025-10-13T17:25:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 17:25:37 crc kubenswrapper[4720]: I1013 17:25:37.453807 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 17:25:37 crc kubenswrapper[4720]: I1013 17:25:37.453853 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 17:25:37 crc kubenswrapper[4720]: I1013 17:25:37.453864 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 17:25:37 crc kubenswrapper[4720]: I1013 17:25:37.453879 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 17:25:37 crc kubenswrapper[4720]: I1013 17:25:37.453899 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:37Z","lastTransitionTime":"2025-10-13T17:25:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 17:25:37 crc kubenswrapper[4720]: I1013 17:25:37.557699 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 17:25:37 crc kubenswrapper[4720]: I1013 17:25:37.557749 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 17:25:37 crc kubenswrapper[4720]: I1013 17:25:37.557762 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 17:25:37 crc kubenswrapper[4720]: I1013 17:25:37.557781 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 17:25:37 crc kubenswrapper[4720]: I1013 17:25:37.557794 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:37Z","lastTransitionTime":"2025-10-13T17:25:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 17:25:37 crc kubenswrapper[4720]: I1013 17:25:37.659811 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 17:25:37 crc kubenswrapper[4720]: I1013 17:25:37.659861 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 17:25:37 crc kubenswrapper[4720]: I1013 17:25:37.659880 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 17:25:37 crc kubenswrapper[4720]: I1013 17:25:37.659904 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 17:25:37 crc kubenswrapper[4720]: I1013 17:25:37.659921 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:37Z","lastTransitionTime":"2025-10-13T17:25:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 17:25:37 crc kubenswrapper[4720]: I1013 17:25:37.762575 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 17:25:37 crc kubenswrapper[4720]: I1013 17:25:37.762650 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 17:25:37 crc kubenswrapper[4720]: I1013 17:25:37.762671 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 17:25:37 crc kubenswrapper[4720]: I1013 17:25:37.762699 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 17:25:37 crc kubenswrapper[4720]: I1013 17:25:37.762717 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:37Z","lastTransitionTime":"2025-10-13T17:25:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 17:25:37 crc kubenswrapper[4720]: I1013 17:25:37.866037 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 17:25:37 crc kubenswrapper[4720]: I1013 17:25:37.866076 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 17:25:37 crc kubenswrapper[4720]: I1013 17:25:37.866085 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 17:25:37 crc kubenswrapper[4720]: I1013 17:25:37.866099 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 17:25:37 crc kubenswrapper[4720]: I1013 17:25:37.866108 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:37Z","lastTransitionTime":"2025-10-13T17:25:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 17:25:37 crc kubenswrapper[4720]: I1013 17:25:37.968456 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 17:25:37 crc kubenswrapper[4720]: I1013 17:25:37.968525 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 17:25:37 crc kubenswrapper[4720]: I1013 17:25:37.968549 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 17:25:37 crc kubenswrapper[4720]: I1013 17:25:37.968578 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 17:25:37 crc kubenswrapper[4720]: I1013 17:25:37.968599 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:37Z","lastTransitionTime":"2025-10-13T17:25:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 17:25:38 crc kubenswrapper[4720]: I1013 17:25:38.071548 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 17:25:38 crc kubenswrapper[4720]: I1013 17:25:38.071586 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 17:25:38 crc kubenswrapper[4720]: I1013 17:25:38.071595 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 17:25:38 crc kubenswrapper[4720]: I1013 17:25:38.071610 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 17:25:38 crc kubenswrapper[4720]: I1013 17:25:38.071621 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:38Z","lastTransitionTime":"2025-10-13T17:25:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 17:25:38 crc kubenswrapper[4720]: I1013 17:25:38.167995 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c6ntg"
Oct 13 17:25:38 crc kubenswrapper[4720]: E1013 17:25:38.168255 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c6ntg" podUID="c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61"
Oct 13 17:25:38 crc kubenswrapper[4720]: I1013 17:25:38.173942 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 17:25:38 crc kubenswrapper[4720]: I1013 17:25:38.174003 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 17:25:38 crc kubenswrapper[4720]: I1013 17:25:38.174026 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 17:25:38 crc kubenswrapper[4720]: I1013 17:25:38.174053 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 17:25:38 crc kubenswrapper[4720]: I1013 17:25:38.174075 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:38Z","lastTransitionTime":"2025-10-13T17:25:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 17:25:38 crc kubenswrapper[4720]: I1013 17:25:38.277306 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 17:25:38 crc kubenswrapper[4720]: I1013 17:25:38.277371 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 17:25:38 crc kubenswrapper[4720]: I1013 17:25:38.277390 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 17:25:38 crc kubenswrapper[4720]: I1013 17:25:38.277413 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 17:25:38 crc kubenswrapper[4720]: I1013 17:25:38.277431 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:38Z","lastTransitionTime":"2025-10-13T17:25:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 17:25:38 crc kubenswrapper[4720]: I1013 17:25:38.380575 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 17:25:38 crc kubenswrapper[4720]: I1013 17:25:38.380637 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 17:25:38 crc kubenswrapper[4720]: I1013 17:25:38.380655 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 17:25:38 crc kubenswrapper[4720]: I1013 17:25:38.380679 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 17:25:38 crc kubenswrapper[4720]: I1013 17:25:38.380696 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:38Z","lastTransitionTime":"2025-10-13T17:25:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 17:25:38 crc kubenswrapper[4720]: I1013 17:25:38.483561 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 17:25:38 crc kubenswrapper[4720]: I1013 17:25:38.483611 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 17:25:38 crc kubenswrapper[4720]: I1013 17:25:38.483620 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 17:25:38 crc kubenswrapper[4720]: I1013 17:25:38.483636 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 17:25:38 crc kubenswrapper[4720]: I1013 17:25:38.483647 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:38Z","lastTransitionTime":"2025-10-13T17:25:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 17:25:38 crc kubenswrapper[4720]: I1013 17:25:38.585993 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 17:25:38 crc kubenswrapper[4720]: I1013 17:25:38.586026 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 17:25:38 crc kubenswrapper[4720]: I1013 17:25:38.586036 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 17:25:38 crc kubenswrapper[4720]: I1013 17:25:38.586052 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 17:25:38 crc kubenswrapper[4720]: I1013 17:25:38.586064 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:38Z","lastTransitionTime":"2025-10-13T17:25:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 17:25:38 crc kubenswrapper[4720]: I1013 17:25:38.689131 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 17:25:38 crc kubenswrapper[4720]: I1013 17:25:38.689230 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 17:25:38 crc kubenswrapper[4720]: I1013 17:25:38.689253 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 17:25:38 crc kubenswrapper[4720]: I1013 17:25:38.689280 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 17:25:38 crc kubenswrapper[4720]: I1013 17:25:38.689298 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:38Z","lastTransitionTime":"2025-10-13T17:25:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 17:25:38 crc kubenswrapper[4720]: I1013 17:25:38.791602 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 17:25:38 crc kubenswrapper[4720]: I1013 17:25:38.791639 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 17:25:38 crc kubenswrapper[4720]: I1013 17:25:38.791648 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 17:25:38 crc kubenswrapper[4720]: I1013 17:25:38.791664 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 17:25:38 crc kubenswrapper[4720]: I1013 17:25:38.791674 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:38Z","lastTransitionTime":"2025-10-13T17:25:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 17:25:38 crc kubenswrapper[4720]: I1013 17:25:38.894900 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 17:25:38 crc kubenswrapper[4720]: I1013 17:25:38.894960 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 17:25:38 crc kubenswrapper[4720]: I1013 17:25:38.894976 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 17:25:38 crc kubenswrapper[4720]: I1013 17:25:38.894999 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 17:25:38 crc kubenswrapper[4720]: I1013 17:25:38.895015 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:38Z","lastTransitionTime":"2025-10-13T17:25:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 17:25:38 crc kubenswrapper[4720]: I1013 17:25:38.950251 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 13 17:25:38 crc kubenswrapper[4720]: E1013 17:25:38.950453 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 17:26:42.950417656 +0000 UTC m=+148.407667788 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 13 17:25:38 crc kubenswrapper[4720]: I1013 17:25:38.997667 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 17:25:38 crc kubenswrapper[4720]: I1013 17:25:38.997720 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 17:25:38 crc kubenswrapper[4720]: I1013 17:25:38.997729 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 17:25:38 crc kubenswrapper[4720]: I1013 17:25:38.997745 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 17:25:38 crc kubenswrapper[4720]: I1013 17:25:38.997755 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:38Z","lastTransitionTime":"2025-10-13T17:25:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 17:25:39 crc kubenswrapper[4720]: I1013 17:25:39.052430 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 13 17:25:39 crc kubenswrapper[4720]: I1013 17:25:39.052561 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 13 17:25:39 crc kubenswrapper[4720]: I1013 17:25:39.052613 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 13 17:25:39 crc kubenswrapper[4720]: E1013 17:25:39.052634 4720 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Oct 13 17:25:39 crc kubenswrapper[4720]: I1013 17:25:39.052714 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 13 17:25:39 crc kubenswrapper[4720]: E1013 17:25:39.052741 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-13 17:26:43.052715105 +0000 UTC m=+148.509965277 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Oct 13 17:25:39 crc kubenswrapper[4720]: E1013 17:25:39.052737 4720 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Oct 13 17:25:39 crc kubenswrapper[4720]: E1013 17:25:39.052767 4720 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Oct 13 17:25:39 crc kubenswrapper[4720]: E1013 17:25:39.052796 4720 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Oct 13 17:25:39 crc kubenswrapper[4720]: E1013 17:25:39.052800 4720 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Oct 13 17:25:39 crc kubenswrapper[4720]: E1013 17:25:39.052817 4720 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Oct 13 17:25:39 crc kubenswrapper[4720]: E1013 17:25:39.052821 4720 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Oct 13 17:25:39 crc kubenswrapper[4720]: E1013 17:25:39.052834 4720 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Oct 13 17:25:39 crc kubenswrapper[4720]: E1013 17:25:39.052858 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-13 17:26:43.052846128 +0000 UTC m=+148.510096300 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Oct 13 17:25:39 crc kubenswrapper[4720]: E1013 17:25:39.052922 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-13 17:26:43.052875229 +0000 UTC m=+148.510125391 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Oct 13 17:25:39 crc kubenswrapper[4720]: E1013 17:25:39.052979 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-13 17:26:43.052951041 +0000 UTC m=+148.510201203 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Oct 13 17:25:39 crc kubenswrapper[4720]: I1013 17:25:39.100562 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 17:25:39 crc kubenswrapper[4720]: I1013 17:25:39.100602 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 17:25:39 crc kubenswrapper[4720]: I1013 17:25:39.100612 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 17:25:39 crc kubenswrapper[4720]: I1013 17:25:39.100625 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 17:25:39 crc kubenswrapper[4720]: I1013 17:25:39.100633 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:39Z","lastTransitionTime":"2025-10-13T17:25:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 17:25:39 crc kubenswrapper[4720]: I1013 17:25:39.168050 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 13 17:25:39 crc kubenswrapper[4720]: I1013 17:25:39.168093 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 13 17:25:39 crc kubenswrapper[4720]: I1013 17:25:39.168138 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 13 17:25:39 crc kubenswrapper[4720]: E1013 17:25:39.168243 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 13 17:25:39 crc kubenswrapper[4720]: E1013 17:25:39.168382 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 13 17:25:39 crc kubenswrapper[4720]: E1013 17:25:39.168558 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 13 17:25:39 crc kubenswrapper[4720]: I1013 17:25:39.203222 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 17:25:39 crc kubenswrapper[4720]: I1013 17:25:39.203275 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 17:25:39 crc kubenswrapper[4720]: I1013 17:25:39.203291 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 17:25:39 crc kubenswrapper[4720]: I1013 17:25:39.203313 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 17:25:39 crc kubenswrapper[4720]: I1013 17:25:39.203329 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:39Z","lastTransitionTime":"2025-10-13T17:25:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 17:25:39 crc kubenswrapper[4720]: I1013 17:25:39.306563 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 17:25:39 crc kubenswrapper[4720]: I1013 17:25:39.306609 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 17:25:39 crc kubenswrapper[4720]: I1013 17:25:39.306625 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 17:25:39 crc kubenswrapper[4720]: I1013 17:25:39.306652 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 17:25:39 crc kubenswrapper[4720]: I1013 17:25:39.306674 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:39Z","lastTransitionTime":"2025-10-13T17:25:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 17:25:39 crc kubenswrapper[4720]: I1013 17:25:39.409314 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 17:25:39 crc kubenswrapper[4720]: I1013 17:25:39.409365 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 17:25:39 crc kubenswrapper[4720]: I1013 17:25:39.409432 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 17:25:39 crc kubenswrapper[4720]: I1013 17:25:39.409458 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 17:25:39 crc kubenswrapper[4720]: I1013 17:25:39.409651 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:39Z","lastTransitionTime":"2025-10-13T17:25:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 17:25:39 crc kubenswrapper[4720]: I1013 17:25:39.512798 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 17:25:39 crc kubenswrapper[4720]: I1013 17:25:39.512869 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 17:25:39 crc kubenswrapper[4720]: I1013 17:25:39.512891 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 17:25:39 crc kubenswrapper[4720]: I1013 17:25:39.512921 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 17:25:39 crc kubenswrapper[4720]: I1013 17:25:39.512942 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:39Z","lastTransitionTime":"2025-10-13T17:25:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 17:25:39 crc kubenswrapper[4720]: I1013 17:25:39.615348 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 17:25:39 crc kubenswrapper[4720]: I1013 17:25:39.615392 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 17:25:39 crc kubenswrapper[4720]: I1013 17:25:39.615404 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 17:25:39 crc kubenswrapper[4720]: I1013 17:25:39.615418 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 17:25:39 crc kubenswrapper[4720]: I1013 17:25:39.615431 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:39Z","lastTransitionTime":"2025-10-13T17:25:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 17:25:39 crc kubenswrapper[4720]: I1013 17:25:39.717802 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 17:25:39 crc kubenswrapper[4720]: I1013 17:25:39.717878 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 17:25:39 crc kubenswrapper[4720]: I1013 17:25:39.717900 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 17:25:39 crc kubenswrapper[4720]: I1013 17:25:39.717930 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 17:25:39 crc kubenswrapper[4720]: I1013 17:25:39.717951 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:39Z","lastTransitionTime":"2025-10-13T17:25:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 17:25:39 crc kubenswrapper[4720]: I1013 17:25:39.820835 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 17:25:39 crc kubenswrapper[4720]: I1013 17:25:39.820923 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 17:25:39 crc kubenswrapper[4720]: I1013 17:25:39.820967 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 17:25:39 crc kubenswrapper[4720]: I1013 17:25:39.820994 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 17:25:39 crc kubenswrapper[4720]: I1013 17:25:39.821015 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:39Z","lastTransitionTime":"2025-10-13T17:25:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 17:25:39 crc kubenswrapper[4720]: I1013 17:25:39.923953 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 17:25:39 crc kubenswrapper[4720]: I1013 17:25:39.924125 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 17:25:39 crc kubenswrapper[4720]: I1013 17:25:39.924144 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 17:25:39 crc kubenswrapper[4720]: I1013 17:25:39.924306 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 17:25:39 crc kubenswrapper[4720]: I1013 17:25:39.924335 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:39Z","lastTransitionTime":"2025-10-13T17:25:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 17:25:40 crc kubenswrapper[4720]: I1013 17:25:40.026843 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 17:25:40 crc kubenswrapper[4720]: I1013 17:25:40.026935 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 17:25:40 crc kubenswrapper[4720]: I1013 17:25:40.026952 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 17:25:40 crc kubenswrapper[4720]: I1013 17:25:40.026977 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 17:25:40 crc kubenswrapper[4720]: I1013 17:25:40.026999 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:40Z","lastTransitionTime":"2025-10-13T17:25:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 17:25:40 crc kubenswrapper[4720]: I1013 17:25:40.130074 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 17:25:40 crc kubenswrapper[4720]: I1013 17:25:40.130137 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 17:25:40 crc kubenswrapper[4720]: I1013 17:25:40.130157 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 17:25:40 crc kubenswrapper[4720]: I1013 17:25:40.130181 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 17:25:40 crc kubenswrapper[4720]: I1013 17:25:40.130227 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:40Z","lastTransitionTime":"2025-10-13T17:25:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 17:25:40 crc kubenswrapper[4720]: I1013 17:25:40.167973 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c6ntg"
Oct 13 17:25:40 crc kubenswrapper[4720]: E1013 17:25:40.168385 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c6ntg" podUID="c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61"
Oct 13 17:25:40 crc kubenswrapper[4720]: I1013 17:25:40.237817 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 17:25:40 crc kubenswrapper[4720]: I1013 17:25:40.237880 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 17:25:40 crc kubenswrapper[4720]: I1013 17:25:40.237903 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 17:25:40 crc kubenswrapper[4720]: I1013 17:25:40.237933 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 17:25:40 crc kubenswrapper[4720]: I1013 17:25:40.237954 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:40Z","lastTransitionTime":"2025-10-13T17:25:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 17:25:40 crc kubenswrapper[4720]: I1013 17:25:40.340582 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 17:25:40 crc kubenswrapper[4720]: I1013 17:25:40.340643 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 17:25:40 crc kubenswrapper[4720]: I1013 17:25:40.340666 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 17:25:40 crc kubenswrapper[4720]: I1013 17:25:40.340695 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 17:25:40 crc kubenswrapper[4720]: I1013 17:25:40.340714 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:40Z","lastTransitionTime":"2025-10-13T17:25:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 17:25:40 crc kubenswrapper[4720]: I1013 17:25:40.443077 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 17:25:40 crc kubenswrapper[4720]: I1013 17:25:40.443145 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 17:25:40 crc kubenswrapper[4720]: I1013 17:25:40.443164 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 17:25:40 crc kubenswrapper[4720]: I1013 17:25:40.443216 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 17:25:40 crc kubenswrapper[4720]: I1013 17:25:40.443233 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:40Z","lastTransitionTime":"2025-10-13T17:25:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 17:25:40 crc kubenswrapper[4720]: I1013 17:25:40.546010 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 17:25:40 crc kubenswrapper[4720]: I1013 17:25:40.546055 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 17:25:40 crc kubenswrapper[4720]: I1013 17:25:40.546066 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 17:25:40 crc kubenswrapper[4720]: I1013 17:25:40.546082 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 17:25:40 crc kubenswrapper[4720]: I1013 17:25:40.546094 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:40Z","lastTransitionTime":"2025-10-13T17:25:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 17:25:40 crc kubenswrapper[4720]: I1013 17:25:40.648045 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 17:25:40 crc kubenswrapper[4720]: I1013 17:25:40.648083 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 17:25:40 crc kubenswrapper[4720]: I1013 17:25:40.648095 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 17:25:40 crc kubenswrapper[4720]: I1013 17:25:40.648110 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 17:25:40 crc kubenswrapper[4720]: I1013 17:25:40.648120 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:40Z","lastTransitionTime":"2025-10-13T17:25:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:40 crc kubenswrapper[4720]: I1013 17:25:40.751224 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:40 crc kubenswrapper[4720]: I1013 17:25:40.751256 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:40 crc kubenswrapper[4720]: I1013 17:25:40.751266 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:40 crc kubenswrapper[4720]: I1013 17:25:40.751279 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:40 crc kubenswrapper[4720]: I1013 17:25:40.751288 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:40Z","lastTransitionTime":"2025-10-13T17:25:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:40 crc kubenswrapper[4720]: I1013 17:25:40.853688 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:40 crc kubenswrapper[4720]: I1013 17:25:40.853732 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:40 crc kubenswrapper[4720]: I1013 17:25:40.853742 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:40 crc kubenswrapper[4720]: I1013 17:25:40.853759 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:40 crc kubenswrapper[4720]: I1013 17:25:40.853771 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:40Z","lastTransitionTime":"2025-10-13T17:25:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:40 crc kubenswrapper[4720]: I1013 17:25:40.956338 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:40 crc kubenswrapper[4720]: I1013 17:25:40.956401 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:40 crc kubenswrapper[4720]: I1013 17:25:40.956418 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:40 crc kubenswrapper[4720]: I1013 17:25:40.956440 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:40 crc kubenswrapper[4720]: I1013 17:25:40.956458 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:40Z","lastTransitionTime":"2025-10-13T17:25:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:41 crc kubenswrapper[4720]: I1013 17:25:41.058817 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:41 crc kubenswrapper[4720]: I1013 17:25:41.058899 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:41 crc kubenswrapper[4720]: I1013 17:25:41.058934 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:41 crc kubenswrapper[4720]: I1013 17:25:41.058968 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:41 crc kubenswrapper[4720]: I1013 17:25:41.058988 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:41Z","lastTransitionTime":"2025-10-13T17:25:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:41 crc kubenswrapper[4720]: I1013 17:25:41.162053 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:41 crc kubenswrapper[4720]: I1013 17:25:41.162096 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:41 crc kubenswrapper[4720]: I1013 17:25:41.162107 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:41 crc kubenswrapper[4720]: I1013 17:25:41.162146 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:41 crc kubenswrapper[4720]: I1013 17:25:41.162157 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:41Z","lastTransitionTime":"2025-10-13T17:25:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:41 crc kubenswrapper[4720]: I1013 17:25:41.167565 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 17:25:41 crc kubenswrapper[4720]: I1013 17:25:41.167633 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 17:25:41 crc kubenswrapper[4720]: E1013 17:25:41.167723 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 13 17:25:41 crc kubenswrapper[4720]: I1013 17:25:41.167777 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 17:25:41 crc kubenswrapper[4720]: E1013 17:25:41.168230 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 13 17:25:41 crc kubenswrapper[4720]: E1013 17:25:41.168683 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 13 17:25:41 crc kubenswrapper[4720]: I1013 17:25:41.169124 4720 scope.go:117] "RemoveContainer" containerID="9825f520cb9cff0f39359c2b7f6d8ef1e3163ec1c94f80c037ea4afb4300c5d3" Oct 13 17:25:41 crc kubenswrapper[4720]: I1013 17:25:41.265404 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:41 crc kubenswrapper[4720]: I1013 17:25:41.265458 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:41 crc kubenswrapper[4720]: I1013 17:25:41.265501 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:41 crc kubenswrapper[4720]: I1013 17:25:41.265526 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:41 crc kubenswrapper[4720]: I1013 17:25:41.265542 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:41Z","lastTransitionTime":"2025-10-13T17:25:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:41 crc kubenswrapper[4720]: I1013 17:25:41.368981 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:41 crc kubenswrapper[4720]: I1013 17:25:41.369042 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:41 crc kubenswrapper[4720]: I1013 17:25:41.369059 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:41 crc kubenswrapper[4720]: I1013 17:25:41.369082 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:41 crc kubenswrapper[4720]: I1013 17:25:41.369098 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:41Z","lastTransitionTime":"2025-10-13T17:25:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:41 crc kubenswrapper[4720]: I1013 17:25:41.471733 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:41 crc kubenswrapper[4720]: I1013 17:25:41.471812 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:41 crc kubenswrapper[4720]: I1013 17:25:41.471836 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:41 crc kubenswrapper[4720]: I1013 17:25:41.471915 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:41 crc kubenswrapper[4720]: I1013 17:25:41.471979 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:41Z","lastTransitionTime":"2025-10-13T17:25:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:41 crc kubenswrapper[4720]: I1013 17:25:41.574853 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:41 crc kubenswrapper[4720]: I1013 17:25:41.574917 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:41 crc kubenswrapper[4720]: I1013 17:25:41.574939 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:41 crc kubenswrapper[4720]: I1013 17:25:41.574968 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:41 crc kubenswrapper[4720]: I1013 17:25:41.575021 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:41Z","lastTransitionTime":"2025-10-13T17:25:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:41 crc kubenswrapper[4720]: I1013 17:25:41.642066 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pn6lz_8064812e-b6aa-4f56-81c9-16154c00abad/ovnkube-controller/2.log" Oct 13 17:25:41 crc kubenswrapper[4720]: I1013 17:25:41.645468 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" event={"ID":"8064812e-b6aa-4f56-81c9-16154c00abad","Type":"ContainerStarted","Data":"c62dd5ae9eb692f394556f2d0b70bf14a7c99fe714ad7084a5d08b324a4435de"} Oct 13 17:25:41 crc kubenswrapper[4720]: I1013 17:25:41.646885 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" Oct 13 17:25:41 crc kubenswrapper[4720]: I1013 17:25:41.669307 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce442c80-fcde-4b79-b6f9-f8f25771dfd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10e7a946185d6502e37a1ab60ad1eca36cce991f230188d22b2d6e2ea554f920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4cjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://864e330267c8cec8487e1dcf4a50bbb2ba37d81d80361f0873e6bf69de1ed721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4cjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-htwnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:41Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:41 crc kubenswrapper[4720]: I1013 17:25:41.677405 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:41 crc kubenswrapper[4720]: I1013 17:25:41.677433 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:41 crc kubenswrapper[4720]: I1013 17:25:41.677443 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:41 crc kubenswrapper[4720]: I1013 17:25:41.677458 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:41 crc kubenswrapper[4720]: I1013 17:25:41.677469 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:41Z","lastTransitionTime":"2025-10-13T17:25:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:41 crc kubenswrapper[4720]: I1013 17:25:41.693256 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5rgfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d7408e5-b529-4396-92b3-2fed275c3250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://439dea1ed9724a215fa1aa1f4a4bd84a4bb998f2f86814f5f4eb46989242f11c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c62b3a707852ca08b9d89356832535447d4f46ec09ab4a94cb00a86b1c1c5b03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c62b3a707852ca08b9d89356832535447d4f46ec09ab4a94cb00a86b1c1c5b03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb704e899e1b5a0193271209940163a17a1dcb93a2fc5d447beb8a081799897b\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb704e899e1b5a0193271209940163a17a1dcb93a2fc5d447beb8a081799897b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f42ff97f0a5fb40b52d5445a00eb8f70f8028610b76baf33117b425b821b5ec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f42ff97f0a5fb40b52d5445a00eb8f70f8028610b76baf33117b425b821b5ec2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://471bb77042e72bcac4bb28dd26f6151a7154bea0b3ec92272e405e9b22bc0170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://471bb77042e72bcac4bb28dd26f6151a7154bea0b3ec92272e405e9b22bc0170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20cb8981843cc39e8722f560f7a7e75483c93a30edffde9835533f8dc5b50587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20cb8981843cc39e8722f560f7a7e75483c93a30edffde9835533f8dc5b50587\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79851ed8c84027d1bb9eccb69bed004ae1cdbe934245c66325eafe42b12ea808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79851ed8c84027d1bb9eccb69bed004ae1cdbe934245c66325eafe42b12ea808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5rgfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:41Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:41 crc kubenswrapper[4720]: I1013 17:25:41.706774 4720 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-c6ntg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxbvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxbvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c6ntg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:41Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:41 crc kubenswrapper[4720]: I1013 17:25:41.723467 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dnjrc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e6efca3-dcc8-488e-8fb1-e14ee0396158\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0f539bcc67139c5b3d51ab01a3114afc20fc44cf5e286dce2b4cad0ea0629c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g676g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c792a043326901c5cbcae8c96b168aee71b56f6e91d0edfc7f58abb5da9c9c28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g676g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dnjrc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:41Z is after 2025-08-24T17:21:41Z" Oct 13 
17:25:41 crc kubenswrapper[4720]: I1013 17:25:41.768487 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a1f1696-ed7f-4482-a300-ca25b68e09e3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2507a30017fbd9947ca456b84812f0b26baabf572768870f5a34b3c7699443cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc60672e282de4d3d6a2d8fd64306ec0fc0d032ade655f88afb0c0b9e0a8e589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9cf903094dae4e559bcef3e269240670b5d4693344d600779c394ce1e039b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06eea6556daedf913d87b8b0527b337070528b07cc026de878b4df332598a2d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b452d113d48729b6d4ab59f80138b15e7cb16cc16eef4ec9916901a067986d07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94a4c50dd473ea159477a3032070031c70d5a9217c054f826e3c0af6ae4df563\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94a4c50dd473ea159477a3032070031c70d5a9217c054f826e3c0af6ae4df563\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41221277cc046beb4578e4f6a0c7030df64342da1be667a920080a429836c657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41221277cc046beb4578e4f6a0c7030df64342da1be667a920080a429836c657\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d443225a5db6a200176bd16ba94dd92c1e81eef85d61cff623cfe7959077ef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d443225a5db6a200176bd16ba94dd92c1e81eef85d61cff623cfe7959077ef6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:41Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:41 crc kubenswrapper[4720]: I1013 17:25:41.779500 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:41 crc kubenswrapper[4720]: I1013 17:25:41.779537 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:41 crc kubenswrapper[4720]: I1013 17:25:41.779548 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:41 crc kubenswrapper[4720]: I1013 17:25:41.779581 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:41 crc kubenswrapper[4720]: I1013 17:25:41.779593 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:41Z","lastTransitionTime":"2025-10-13T17:25:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:41 crc kubenswrapper[4720]: I1013 17:25:41.780359 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00fc1050-d713-484e-899e-9bd4e5d7b250\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://591ebf9a13f38e2f458aa34584be8fabc9115335a04af437a2291e7980a903ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18ddea8ad4714addb2f8431d0802d32a36f3d823bb123d4546bc6de1a3a017a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18ddea8ad4714addb2f8431d0802d32a36f3d823bb123d4546bc6de1a3a017a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:41Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:41 crc kubenswrapper[4720]: I1013 17:25:41.795650 4720 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:41Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:41 crc kubenswrapper[4720]: I1013 17:25:41.813785 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:41Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:41 crc kubenswrapper[4720]: I1013 17:25:41.828129 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea323a1a-c607-40cf-b61c-b7ff03399c45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88b8a46e2e2c2342bd22a4a9b42cadd0188ec83f23815773f6ffb46be79fdf83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"container
ID\\\":\\\"cri-o://b7d875b0d7eb9f13bc697597a33f7196598ae679980b13842e92acf20477526f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e18d0e9df43d366a6e7088cc3d5bdc794882828f60f008eaf29e788e0141ac2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef2ba92cea782dd4fd31a511e1086777b58c00340014dff74ff6615b07bd10c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7977e20d8558810fb9f8f827830d0378de7d15c71340d396a95b506902c0962\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-13T17:24:34Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1013 17:24:28.736714 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1013 17:24:28.738546 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445610647/tls.crt::/tmp/serving-cert-445610647/tls.key\\\\\\\"\\\\nI1013 17:24:34.860768 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1013 17:24:34.864611 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1013 17:24:34.864632 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1013 17:24:34.864654 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1013 17:24:34.864660 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1013 17:24:34.871230 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1013 17:24:34.871248 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information 
is complete\\\\nW1013 17:24:34.871273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 17:24:34.871290 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 17:24:34.871305 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1013 17:24:34.871314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1013 17:24:34.871324 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1013 17:24:34.871335 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1013 17:24:34.872239 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd536f783e3f8f974f25772994c3d00a01c2ba66997a6e06236f1b83b890f8f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00ad2086cbe25df22e52552acfb410e252eb17a610267b47eb7cfc5dced1c9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00ad2086cbe25df22e52552acfb410e252eb17a610267b47eb7cfc5dced1c9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:41Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:41 crc kubenswrapper[4720]: I1013 
17:25:41.843856 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lxmjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b45ec2d-5bea-4007-a49f-224a866f93eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://835c6ec8dc7ae3785a23ae45e5c9dc4b3bcc24428ca4c3865a6dd790d5956e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c00868f87f1a3020a1db7e50377bbe90203d6e93799c8ec9f3100dd52a6c0cf6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-13T17:25:27Z\\\",\\\"message\\\":\\\"2025-10-13T17:24:42+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d76d90a0-c554-4f54-9b32-3c7f604179f6\\\\n2025-10-13T17:24:42+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d76d90a0-c554-4f54-9b32-3c7f604179f6 to /host/opt/cni/bin/\\\\n2025-10-13T17:24:42Z [verbose] multus-daemon started\\\\n2025-10-13T17:24:42Z [verbose] Readiness Indicator file check\\\\n2025-10-13T17:25:27Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xsrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lxmjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:41Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:41 crc kubenswrapper[4720]: I1013 17:25:41.876942 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8064812e-b6aa-4f56-81c9-16154c00abad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c28fde07efa17d20ddecd1c8a0da09a6fa09e98a90b14718565c00aef9c0d25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bbd41d9b0518c8b060b1d01c335b6c9923e7196624fbd49326133faaa2e36ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd8a4dfb390a1643edb4eb4bd01970eaffd8dab0f748effac9e7c691ef4a3ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://013dfade47eccae4339fca0d379d914f63502613f87f1bc621113f7008634475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69d511a66a84fb0fa90f426bd4609c2b3ad915571802c6c024e0370d6e1683a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1aefa46ee989416f0e4fd05f6f12031da637a354fadd7f6caef75ddc2c74ddf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c62dd5ae9eb692f394556f2d0b70bf14a7c99fe714ad7084a5d08b324a4435de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9825f520cb9cff0f39359c2b7f6d8ef1e3163ec1c94f80c037ea4afb4300c5d3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-13T17:25:13Z\\\",\\\"message\\\":\\\"_controller.go:776] Recording success event on pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI1013 17:25:13.141326 6449 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 2.698682ms)\\\\nI1013 17:25:13.141338 6449 ovnkube_controller.go:900] Cache entry expected pod with UID \\\\\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\\\\\" but failed to find it\\\\nI1013 17:25:13.141346 6449 ovnkube_controller.go:804] Add Logical Switch Port event expected pod with UID \\\\\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\\\\\" in cache\\\\nI1013 17:25:13.141945 6449 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1013 17:25:13.142064 6449 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1013 17:25:13.142100 6449 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1013 17:25:13.142115 6449 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1013 17:25:13.142133 6449 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1013 17:25:13.142159 6449 factory.go:656] Stopping watch factory\\\\nI1013 17:25:13.142173 6449 ovnkube.go:599] Stopped ovnkube\\\\nI1013 17:25:13.142239 6449 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1013 17:25:13.142339 6449 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T17:25:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:25:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34b1fffbebfbd3599bed22fe3fe80129f5b704560c54d09e9ab5e9ea8bba4513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:41Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:41 crc kubenswrapper[4720]: I1013 17:25:41.881471 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:41 crc kubenswrapper[4720]: I1013 17:25:41.881505 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:41 crc kubenswrapper[4720]: I1013 17:25:41.881517 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:41 crc kubenswrapper[4720]: I1013 17:25:41.881534 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:41 crc kubenswrapper[4720]: I1013 17:25:41.881546 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:41Z","lastTransitionTime":"2025-10-13T17:25:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:41 crc kubenswrapper[4720]: I1013 17:25:41.887152 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bvtrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b35c333b-0d4e-4d9a-a8fe-a1c6f5fbbf75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03411ae3786161605ed0cb08a21661c7c7a4cc93f91320586547b8ffab3e90ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g97k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bvtrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:41Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:41 crc kubenswrapper[4720]: I1013 17:25:41.901297 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f85e4aab30e7f49ef05e48cc22fc22e0291e32604a8f74ae1cc2ea57b5889e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:41Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:41 crc kubenswrapper[4720]: I1013 17:25:41.913304 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f856022da70859465022db433585bd8cad12f8cd2aeae7943491d3f7e5049256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:41Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:41 crc kubenswrapper[4720]: I1013 17:25:41.924853 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pmjlm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f85220d6-f940-4538-806a-2b26bacd7b09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6315723f6cd375a392c8c427a737021b7e01415c902514b09ee43407749caa34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7795z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:39Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pmjlm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:41Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:41 crc kubenswrapper[4720]: I1013 17:25:41.942178 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7baf94d8-250c-4568-ab1c-a1429b8caf70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435ba16189d6d19892aaab158512c1a84a72103779e3cc4b2c634de344583c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e566f3b66d404dd8da93bba1745b9e67cd63532330591f5b735f6826c07eae2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ec64662f56c2609ebac1383a78840ba4e0d9fb1fe10e666169285d3852e6b40\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://32d8f079c2674e0c6bea1837755db23bc3205da6a01ac1896d6293226d9c6fb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:41Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:41 crc kubenswrapper[4720]: I1013 17:25:41.957439 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd8dd666-aa42-4598-bb52-c0cd9345384d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e96fe80c0a88f19c6c5705c7fd945c3282a104b7767db5c217b6364c045d649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f9d9053346b85f12f6cf781f1699dbf8aea670b7e1cc5f4fdc1ffac2c969712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d586c8b48f5d2ca87e3a758dd265611f11678f6fa8dd1118a859923c331c4f67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19c59ec68cb6306af37b90cd4583f5a1a266e93f77566d756223eecf7cdadb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19c59ec68cb6306af37b90cd4583f5a1a266e93f77566d756223eecf7cdadb76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:41Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:41 crc kubenswrapper[4720]: I1013 17:25:41.973599 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:41Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:41 crc kubenswrapper[4720]: I1013 17:25:41.983411 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:41 crc kubenswrapper[4720]: I1013 17:25:41.983446 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:41 crc kubenswrapper[4720]: I1013 17:25:41.983455 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:41 crc kubenswrapper[4720]: I1013 17:25:41.983471 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:41 crc kubenswrapper[4720]: I1013 17:25:41.983481 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:41Z","lastTransitionTime":"2025-10-13T17:25:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:41 crc kubenswrapper[4720]: I1013 17:25:41.988978 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://705766b44d481300d0fd8cc654aa2dc9046733e5a71ad3a7ab789f85e737dcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a69b8825692cbbb7cce0a52b1ccb7b137ce6c9f8a8d788fff95bb7228cb40e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:41Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:42 crc kubenswrapper[4720]: I1013 17:25:42.085766 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:42 crc kubenswrapper[4720]: I1013 
17:25:42.085796 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:42 crc kubenswrapper[4720]: I1013 17:25:42.085804 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:42 crc kubenswrapper[4720]: I1013 17:25:42.085816 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:42 crc kubenswrapper[4720]: I1013 17:25:42.085823 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:42Z","lastTransitionTime":"2025-10-13T17:25:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:42 crc kubenswrapper[4720]: I1013 17:25:42.167363 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c6ntg" Oct 13 17:25:42 crc kubenswrapper[4720]: E1013 17:25:42.167512 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c6ntg" podUID="c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61" Oct 13 17:25:42 crc kubenswrapper[4720]: I1013 17:25:42.188237 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:42 crc kubenswrapper[4720]: I1013 17:25:42.188316 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:42 crc kubenswrapper[4720]: I1013 17:25:42.188329 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:42 crc kubenswrapper[4720]: I1013 17:25:42.188345 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:42 crc kubenswrapper[4720]: I1013 17:25:42.188354 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:42Z","lastTransitionTime":"2025-10-13T17:25:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:42 crc kubenswrapper[4720]: I1013 17:25:42.291340 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:42 crc kubenswrapper[4720]: I1013 17:25:42.291424 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:42 crc kubenswrapper[4720]: I1013 17:25:42.291442 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:42 crc kubenswrapper[4720]: I1013 17:25:42.291465 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:42 crc kubenswrapper[4720]: I1013 17:25:42.291481 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:42Z","lastTransitionTime":"2025-10-13T17:25:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:42 crc kubenswrapper[4720]: I1013 17:25:42.394112 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:42 crc kubenswrapper[4720]: I1013 17:25:42.394173 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:42 crc kubenswrapper[4720]: I1013 17:25:42.394184 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:42 crc kubenswrapper[4720]: I1013 17:25:42.394226 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:42 crc kubenswrapper[4720]: I1013 17:25:42.394236 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:42Z","lastTransitionTime":"2025-10-13T17:25:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:42 crc kubenswrapper[4720]: I1013 17:25:42.496926 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:42 crc kubenswrapper[4720]: I1013 17:25:42.496981 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:42 crc kubenswrapper[4720]: I1013 17:25:42.496992 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:42 crc kubenswrapper[4720]: I1013 17:25:42.497010 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:42 crc kubenswrapper[4720]: I1013 17:25:42.497021 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:42Z","lastTransitionTime":"2025-10-13T17:25:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:42 crc kubenswrapper[4720]: I1013 17:25:42.599763 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:42 crc kubenswrapper[4720]: I1013 17:25:42.599821 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:42 crc kubenswrapper[4720]: I1013 17:25:42.599840 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:42 crc kubenswrapper[4720]: I1013 17:25:42.599864 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:42 crc kubenswrapper[4720]: I1013 17:25:42.599881 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:42Z","lastTransitionTime":"2025-10-13T17:25:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:42 crc kubenswrapper[4720]: I1013 17:25:42.651119 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pn6lz_8064812e-b6aa-4f56-81c9-16154c00abad/ovnkube-controller/3.log" Oct 13 17:25:42 crc kubenswrapper[4720]: I1013 17:25:42.651767 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pn6lz_8064812e-b6aa-4f56-81c9-16154c00abad/ovnkube-controller/2.log" Oct 13 17:25:42 crc kubenswrapper[4720]: I1013 17:25:42.655238 4720 generic.go:334] "Generic (PLEG): container finished" podID="8064812e-b6aa-4f56-81c9-16154c00abad" containerID="c62dd5ae9eb692f394556f2d0b70bf14a7c99fe714ad7084a5d08b324a4435de" exitCode=1 Oct 13 17:25:42 crc kubenswrapper[4720]: I1013 17:25:42.655295 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" event={"ID":"8064812e-b6aa-4f56-81c9-16154c00abad","Type":"ContainerDied","Data":"c62dd5ae9eb692f394556f2d0b70bf14a7c99fe714ad7084a5d08b324a4435de"} Oct 13 17:25:42 crc kubenswrapper[4720]: I1013 17:25:42.655354 4720 scope.go:117] "RemoveContainer" containerID="9825f520cb9cff0f39359c2b7f6d8ef1e3163ec1c94f80c037ea4afb4300c5d3" Oct 13 17:25:42 crc kubenswrapper[4720]: I1013 17:25:42.657056 4720 scope.go:117] "RemoveContainer" containerID="c62dd5ae9eb692f394556f2d0b70bf14a7c99fe714ad7084a5d08b324a4435de" Oct 13 17:25:42 crc kubenswrapper[4720]: E1013 17:25:42.657459 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-pn6lz_openshift-ovn-kubernetes(8064812e-b6aa-4f56-81c9-16154c00abad)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" podUID="8064812e-b6aa-4f56-81c9-16154c00abad" Oct 13 17:25:42 crc kubenswrapper[4720]: I1013 17:25:42.675152 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5rgfd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d7408e5-b529-4396-92b3-2fed275c3250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://439dea1ed9724a215fa1aa1f4a4bd84a4bb998f2f86814f5f4eb46989242f11c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c62b3a707852ca08b9d89356832535447d4f46ec09ab4a94cb00a86b1c1c5b03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c62b3a707852ca08b9d89356832535447d4f46ec09ab4a94cb00a86b1c1c5b03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb704e899e1b5a0193271209940163a17a1dcb93a2fc5d447beb8a081799897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb704e899e1b5a0193271209940163a17a1dcb93a2fc5d447beb8a081799897b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f42ff97f0a5fb40b52d5445a00eb8f70f8028610b76baf33117b425b821b5ec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f42ff97f0a5fb40b52d5445a00eb8f70f8028610b76baf33117b425b821b5ec2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://471bb77042e72bcac4bb28dd26f6151a7154bea0b3ec92272e405e9b22bc0170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://471bb77042e72bcac4bb28dd26f6151a7154bea0b3ec92272e405e9b22bc0170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20cb8981843cc39e8722f560f7a7e75483c93a30edffde9835533f8dc5b50587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20cb8981843cc39e8722f560f7a7e75483c93a30edffde9835533f8dc5b50587\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79851ed8c84027d1bb9eccb69bed004ae1cdbe934245c66325eafe42b12ea808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79851ed8c84027d1bb9eccb69bed004ae1cdbe934245c66325eafe42b12ea808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5rgfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:42Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:42 crc kubenswrapper[4720]: I1013 17:25:42.683118 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:42 crc kubenswrapper[4720]: I1013 17:25:42.683220 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:42 crc 
kubenswrapper[4720]: I1013 17:25:42.683245 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:42 crc kubenswrapper[4720]: I1013 17:25:42.683274 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:42 crc kubenswrapper[4720]: I1013 17:25:42.683298 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:42Z","lastTransitionTime":"2025-10-13T17:25:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:42 crc kubenswrapper[4720]: I1013 17:25:42.696409 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c6ntg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxbvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxbvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c6ntg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:42Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:42 crc kubenswrapper[4720]: E1013 17:25:42.705006 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056
b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951
},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a530de0f-daad-4050-9522-69c64a451e75\\\",\\\"systemUUID\\\":\\\"00bb7c43-79d6-45b5-bd02-4b71a0ba6837\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-10-13T17:25:42Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:42 crc kubenswrapper[4720]: I1013 17:25:42.710331 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:42 crc kubenswrapper[4720]: I1013 17:25:42.710410 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:42 crc kubenswrapper[4720]: I1013 17:25:42.710432 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:42 crc kubenswrapper[4720]: I1013 17:25:42.710459 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:42 crc kubenswrapper[4720]: I1013 17:25:42.710482 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:42Z","lastTransitionTime":"2025-10-13T17:25:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:42 crc kubenswrapper[4720]: I1013 17:25:42.711801 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce442c80-fcde-4b79-b6f9-f8f25771dfd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10e7a946185d6502e37a1ab60ad1eca36cce991f230188d22b2d6e2ea554f920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4cjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://864e330267c8cec8487e1dcf4a50bbb2ba37d81d80361f0873e6bf69de1ed721\\\",\\\"image\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4cjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-htwnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:42Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:42 crc kubenswrapper[4720]: I1013 17:25:42.725183 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00fc1050-d713-484e-899e-9bd4e5d7b250\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://591ebf9a13f38e2f458aa34584be8fabc9115335a04af437a2291e7980a903ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18ddea8ad4714addb2f8431d0802d32a36f3d823bb123d4546bc6de1a3a017a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18ddea8ad4714addb2f8431d0802d32a36f3d823bb123d4546bc6de1a3a017a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:42Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:42 crc kubenswrapper[4720]: E1013 17:25:42.729675 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a530de0f-daad-4050-9522-69c64a451e75\\\",\\\"systemUUID\\\":\\\"00bb7c43-79d6-45b5-bd02-4b71a0ba6837\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:42Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:42 crc kubenswrapper[4720]: I1013 17:25:42.733444 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:42 crc kubenswrapper[4720]: I1013 17:25:42.733506 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 13 17:25:42 crc kubenswrapper[4720]: I1013 17:25:42.733529 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:42 crc kubenswrapper[4720]: I1013 17:25:42.733556 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:42 crc kubenswrapper[4720]: I1013 17:25:42.733580 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:42Z","lastTransitionTime":"2025-10-13T17:25:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:42 crc kubenswrapper[4720]: I1013 17:25:42.744579 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:42Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:42 crc kubenswrapper[4720]: E1013 17:25:42.752349 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[ ...image inventory identical to the list in the first status payload above; elided... ],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a530de0f-daad-4050-9522-69c64a451e75\\\",\\\"systemUUID\\\":\\\"00bb7c43-79d6-45b5-bd02-4b71a0ba6837\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:42Z is after 2025-08-24T17:21:41Z"
Oct 13 17:25:42 crc kubenswrapper[4720]: I1013 17:25:42.756447 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 17:25:42 crc kubenswrapper[4720]: I1013 17:25:42.756496 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasNoDiskPressure" Oct 13 17:25:42 crc kubenswrapper[4720]: I1013 17:25:42.756516 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:42 crc kubenswrapper[4720]: I1013 17:25:42.756538 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:42 crc kubenswrapper[4720]: I1013 17:25:42.756555 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:42Z","lastTransitionTime":"2025-10-13T17:25:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:42 crc kubenswrapper[4720]: I1013 17:25:42.762130 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:42Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:42 crc kubenswrapper[4720]: E1013 17:25:42.776103 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[ ...image inventory identical to the list in the first status payload above; elided... ],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a530de0f-daad-4050-9522-69c64a451e75\\\",\\\"systemUUID\\\":\\\"00bb7c43-79d6-45b5-bd02-4b71a0ba6837\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:42Z is after 2025-08-24T17:21:41Z"
Oct 13 17:25:42 crc kubenswrapper[4720]: I1013 17:25:42.778604 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dnjrc" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e6efca3-dcc8-488e-8fb1-e14ee0396158\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0f539bcc67139c5b3d51ab01a3114afc20fc44cf5e286dce2b4cad0ea0629c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g676g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c792a043326901c5cbcae8c96b168aee71b56f6e91d0edfc7f58abb5da9c9c28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g676g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dnjrc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:42Z is after 2025-08-24T17:21:41Z" Oct 13 
17:25:42 crc kubenswrapper[4720]: I1013 17:25:42.780618 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:42 crc kubenswrapper[4720]: I1013 17:25:42.780712 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:42 crc kubenswrapper[4720]: I1013 17:25:42.780771 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:42 crc kubenswrapper[4720]: I1013 17:25:42.780829 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:42 crc kubenswrapper[4720]: I1013 17:25:42.780882 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:42Z","lastTransitionTime":"2025-10-13T17:25:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:42 crc kubenswrapper[4720]: E1013 17:25:42.798814 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[ ...image inventory identical to the list in the first status payload above; elided... ],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a530de0f-daad-4050-9522-69c64a451e75\\\",\\\"systemUUID\\\":\\\"00bb7c43-79d6-45b5-bd02-4b71a0ba6837\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:42Z is after 2025-08-24T17:21:41Z"
Oct 13 17:25:42 crc kubenswrapper[4720]: E1013 17:25:42.799067 4720 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Oct 13 17:25:42 crc kubenswrapper[4720]: I1013 17:25:42.800725 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasSufficientMemory" Oct 13 17:25:42 crc kubenswrapper[4720]: I1013 17:25:42.800809 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:42 crc kubenswrapper[4720]: I1013 17:25:42.800876 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:42 crc kubenswrapper[4720]: I1013 17:25:42.800934 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:42 crc kubenswrapper[4720]: I1013 17:25:42.800985 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:42Z","lastTransitionTime":"2025-10-13T17:25:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:42 crc kubenswrapper[4720]: I1013 17:25:42.802258 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a1f1696-ed7f-4482-a300-ca25b68e09e3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2507a30017fbd9947ca456b84812f0b26baabf572768870f5a34b3c7699443cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc60672e282de4d3d6a2d8fd64306ec0fc0d032ade655f88afb0c0b9e0a8e589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"sta
rted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9cf903094dae4e559bcef3e269240670b5d4693344d600779c394ce1e039b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06eea6556daedf913d87b8b0527b337070528b07cc026de878b4df332598a2d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b452d113d48729b6d4ab59f80138b15e7cb16cc16eef4ec9916901a067986d07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94a4c50dd473ea159477a3032070031c70d5a9217c054f826e3c0af6ae4df563\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":
\\\"cri-o://94a4c50dd473ea159477a3032070031c70d5a9217c054f826e3c0af6ae4df563\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41221277cc046beb4578e4f6a0c7030df64342da1be667a920080a429836c657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41221277cc046beb4578e4f6a0c7030df64342da1be667a920080a429836c657\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d443225a5db6a200176bd16ba94dd92c1e81eef85d61cff623cfe7959077ef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d443225a5db6a200176bd16ba94dd92c1e81eef85d61cff623cfe7959077ef6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:42Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:42 crc kubenswrapper[4720]: I1013 17:25:42.820549 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lxmjt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b45ec2d-5bea-4007-a49f-224a866f93eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://835c6ec8dc7ae3785a23ae45e5c9dc4b3bcc24428ca4c3865a6dd790d5956e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c00868f87f1a3020a1db7e50377bbe90203d6e93799c8ec9f3100dd52a6c0cf6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-13T17:25:27Z\\\",\\\"message\\\":\\\"2025-10-13T17:24:42+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d76d90a0-c554-4f54-9b32-3c7f604179f6\\\\n2025-10-13T17:24:42+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d76d90a0-c554-4f54-9b32-3c7f604179f6 to /host/opt/cni/bin/\\\\n2025-10-13T17:24:42Z [verbose] multus-daemon started\\\\n2025-10-13T17:24:42Z [verbose] Readiness Indicator file check\\\\n2025-10-13T17:25:27Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xsrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lxmjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:42Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:42 crc kubenswrapper[4720]: I1013 17:25:42.848731 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8064812e-b6aa-4f56-81c9-16154c00abad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c28fde07efa17d20ddecd1c8a0da09a6fa09e98a90b14718565c00aef9c0d25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bbd41d9b0518c8b060b1d01c335b6c9923e7196624fbd49326133faaa2e36ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd8a4dfb390a1643edb4eb4bd01970eaffd8dab0f748effac9e7c691ef4a3ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://013dfade47eccae4339fca0d379d914f63502613f87f1bc621113f7008634475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69d511a66a84fb0fa90f426bd4609c2b3ad915571802c6c024e0370d6e1683a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1aefa46ee989416f0e4fd05f6f12031da637a354fadd7f6caef75ddc2c74ddf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c62dd5ae9eb692f394556f2d0b70bf14a7c99fe714ad7084a5d08b324a4435de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9825f520cb9cff0f39359c2b7f6d8ef1e3163ec1c94f80c037ea4afb4300c5d3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-13T17:25:13Z\\\",\\\"message\\\":\\\"_controller.go:776] Recording success event on pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI1013 17:25:13.141326 6449 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 2.698682ms)\\\\nI1013 17:25:13.141338 6449 ovnkube_controller.go:900] Cache entry expected pod with UID \\\\\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\\\\\" but failed to find it\\\\nI1013 17:25:13.141346 6449 ovnkube_controller.go:804] Add Logical Switch Port event expected pod with UID \\\\\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\\\\\" in cache\\\\nI1013 17:25:13.141945 6449 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1013 17:25:13.142064 6449 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1013 17:25:13.142100 6449 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1013 17:25:13.142115 6449 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1013 17:25:13.142133 6449 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1013 17:25:13.142159 6449 factory.go:656] Stopping watch factory\\\\nI1013 17:25:13.142173 6449 ovnkube.go:599] Stopped ovnkube\\\\nI1013 17:25:13.142239 6449 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1013 17:25:13.142339 6449 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T17:25:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c62dd5ae9eb692f394556f2d0b70bf14a7c99fe714ad7084a5d08b324a4435de\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-13T17:25:42Z\\\",\\\"message\\\":\\\"andler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1013 17:25:42.123785 6825 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1013 17:25:42.123794 6825 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1013 17:25:42.123798 6825 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1013 17:25:42.123803 6825 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1013 17:25:42.123823 6825 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1013 17:25:42.123830 6825 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1013 17:25:42.123842 6825 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1013 17:25:42.123855 6825 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1013 17:25:42.123862 6825 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1013 17:25:42.123869 6825 handler.go:208] Removed 
*v1.EgressFirewall event handler 9\\\\nI1013 17:25:42.123884 6825 handler.go:208] Removed *v1.Node event handler 2\\\\nI1013 17:25:42.123897 6825 factory.go:656] Stopping watch factory\\\\nI1013 17:25:42.123912 6825 ovnkube.go:599] Stopped ovnkube\\\\nI1013 17:25:42.123944 6825 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1013 17:25:42.123954 6825 handler.go:208] Removed *v1.Node event handler 7\\\\nF1013 17:25:42.124279 6825 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T17:25:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34b1fffbebfbd3599bed22fe3fe80129f5b704560c54d09e9ab5e9ea8bba4513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":
\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:42Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:42 crc kubenswrapper[4720]: I1013 17:25:42.861868 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bvtrx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b35c333b-0d4e-4d9a-a8fe-a1c6f5fbbf75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03411ae3786161605ed0cb08a21661c7c7a4cc93f91320586547b8ffab3e90ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g97k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bvtrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:42Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:42 crc kubenswrapper[4720]: I1013 17:25:42.884711 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea323a1a-c607-40cf-b61c-b7ff03399c45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88b8a46e2e2c2342bd22a4a9b42cadd0188ec83f23815773f6ffb46be79fdf83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7d875b0d7eb9f13bc697597a33f7196598ae679980b13842e92acf20477526f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e18d0e9df43d366a6e7088cc3d5bdc794882828f60f008eaf29e788e0141ac2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef2ba92cea782dd4fd31a511e1086777b58c00340014dff74ff6615b07bd10c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7977e20d8558810fb9f8f827830d0378de7d15c71340d396a95b506902c0962\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-13T17:24:34Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1013 17:24:28.736714 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1013 17:24:28.738546 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445610647/tls.crt::/tmp/serving-cert-445610647/tls.key\\\\\\\"\\\\nI1013 17:24:34.860768 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1013 17:24:34.864611 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1013 17:24:34.864632 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1013 17:24:34.864654 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1013 17:24:34.864660 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1013 17:24:34.871230 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1013 17:24:34.871248 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1013 17:24:34.871273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 17:24:34.871290 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 17:24:34.871305 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1013 17:24:34.871314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1013 17:24:34.871324 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1013 17:24:34.871335 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1013 17:24:34.872239 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd536f783e3f8f974f25772994c3d00a01c2ba66997a6e06236f1b83b890f8f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00ad2086cbe25df22e52552acfb410e252eb17a610267b47eb7cfc5dced1c9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00ad2086cbe25df22e52552acfb410e252eb17a610267b47eb7cfc5dced1c9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:42Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:42 crc kubenswrapper[4720]: I1013 17:25:42.902449 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd8dd666-aa42-4598-bb52-c0cd9345384d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e96fe80c0a88f19c6c5705c7fd945c3282a104b7767db5c217b6364c045d649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f9d9053346b85f12f6cf781f1699dbf8aea670b7e1cc5f4fdc1ffac2c969712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d586c8b48f5d2ca87e3a758dd265611f11678f6fa8dd1118a859923c331c4f67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19c59ec68cb6306af37b90cd4583f5a1a266e93f77566d756223eecf7cdadb76\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19c59ec68cb6306af37b90cd4583f5a1a266e93f77566d756223eecf7cdadb76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:42Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:42 crc kubenswrapper[4720]: I1013 17:25:42.903535 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:42 crc kubenswrapper[4720]: I1013 17:25:42.903604 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:42 crc kubenswrapper[4720]: I1013 17:25:42.903626 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:42 crc kubenswrapper[4720]: I1013 17:25:42.903654 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:42 crc kubenswrapper[4720]: I1013 17:25:42.903676 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:42Z","lastTransitionTime":"2025-10-13T17:25:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:42 crc kubenswrapper[4720]: I1013 17:25:42.920322 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:42Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:42 crc kubenswrapper[4720]: I1013 17:25:42.939027 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://705766b44d481300d0fd8cc654aa2dc9046733e5a71ad3a7ab789f85e737dcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a69b8825692cbbb7cce0a52b1ccb7b137ce6c9f8a8d788fff95bb7228cb40e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:42Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:42 crc kubenswrapper[4720]: I1013 17:25:42.954453 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f85e4aab30e7f49ef05e48cc22fc22e0291e32604a8f74ae1cc2ea57b5889e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:42Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:42 crc kubenswrapper[4720]: I1013 17:25:42.968758 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f856022da70859465022db433585bd8cad12f8cd2aeae7943491d3f7e5049256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:42Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:42 crc kubenswrapper[4720]: I1013 17:25:42.980274 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pmjlm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f85220d6-f940-4538-806a-2b26bacd7b09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6315723f6cd375a392c8c427a737021b7e01415c902514b09ee43407749caa34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7795z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:39Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pmjlm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:42Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:42 crc kubenswrapper[4720]: I1013 17:25:42.996829 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7baf94d8-250c-4568-ab1c-a1429b8caf70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435ba16189d6d19892aaab158512c1a84a72103779e3cc4b2c634de344583c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e566f3b66d404dd8da93bba1745b9e67cd63532330591f5b735f6826c07eae2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ec64662f56c2609ebac1383a78840ba4e0d9fb1fe10e666169285d3852e6b40\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://32d8f079c2674e0c6bea1837755db23bc3205da6a01ac1896d6293226d9c6fb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:42Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:43 crc kubenswrapper[4720]: I1013 17:25:43.006580 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:43 crc kubenswrapper[4720]: I1013 17:25:43.006612 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:43 crc kubenswrapper[4720]: I1013 17:25:43.006622 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:43 crc kubenswrapper[4720]: I1013 17:25:43.006636 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:43 crc kubenswrapper[4720]: I1013 17:25:43.006646 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:43Z","lastTransitionTime":"2025-10-13T17:25:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:43 crc kubenswrapper[4720]: I1013 17:25:43.108787 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:43 crc kubenswrapper[4720]: I1013 17:25:43.109012 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:43 crc kubenswrapper[4720]: I1013 17:25:43.109032 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:43 crc kubenswrapper[4720]: I1013 17:25:43.109056 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:43 crc kubenswrapper[4720]: I1013 17:25:43.109074 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:43Z","lastTransitionTime":"2025-10-13T17:25:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:43 crc kubenswrapper[4720]: I1013 17:25:43.167534 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 17:25:43 crc kubenswrapper[4720]: I1013 17:25:43.167632 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 17:25:43 crc kubenswrapper[4720]: E1013 17:25:43.167704 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 13 17:25:43 crc kubenswrapper[4720]: I1013 17:25:43.167643 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 17:25:43 crc kubenswrapper[4720]: E1013 17:25:43.167818 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 13 17:25:43 crc kubenswrapper[4720]: E1013 17:25:43.167951 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 13 17:25:43 crc kubenswrapper[4720]: I1013 17:25:43.212680 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:43 crc kubenswrapper[4720]: I1013 17:25:43.212718 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:43 crc kubenswrapper[4720]: I1013 17:25:43.212728 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:43 crc kubenswrapper[4720]: I1013 17:25:43.212745 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:43 crc kubenswrapper[4720]: I1013 17:25:43.212758 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:43Z","lastTransitionTime":"2025-10-13T17:25:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:43 crc kubenswrapper[4720]: I1013 17:25:43.314847 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:43 crc kubenswrapper[4720]: I1013 17:25:43.314902 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:43 crc kubenswrapper[4720]: I1013 17:25:43.314918 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:43 crc kubenswrapper[4720]: I1013 17:25:43.314940 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:43 crc kubenswrapper[4720]: I1013 17:25:43.314957 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:43Z","lastTransitionTime":"2025-10-13T17:25:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:43 crc kubenswrapper[4720]: I1013 17:25:43.417736 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:43 crc kubenswrapper[4720]: I1013 17:25:43.417796 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:43 crc kubenswrapper[4720]: I1013 17:25:43.417813 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:43 crc kubenswrapper[4720]: I1013 17:25:43.417836 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:43 crc kubenswrapper[4720]: I1013 17:25:43.417853 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:43Z","lastTransitionTime":"2025-10-13T17:25:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:43 crc kubenswrapper[4720]: I1013 17:25:43.521182 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:43 crc kubenswrapper[4720]: I1013 17:25:43.521270 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:43 crc kubenswrapper[4720]: I1013 17:25:43.521290 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:43 crc kubenswrapper[4720]: I1013 17:25:43.521313 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:43 crc kubenswrapper[4720]: I1013 17:25:43.521331 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:43Z","lastTransitionTime":"2025-10-13T17:25:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:43 crc kubenswrapper[4720]: I1013 17:25:43.624354 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:43 crc kubenswrapper[4720]: I1013 17:25:43.624413 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:43 crc kubenswrapper[4720]: I1013 17:25:43.624430 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:43 crc kubenswrapper[4720]: I1013 17:25:43.624452 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:43 crc kubenswrapper[4720]: I1013 17:25:43.624470 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:43Z","lastTransitionTime":"2025-10-13T17:25:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:43 crc kubenswrapper[4720]: I1013 17:25:43.661923 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pn6lz_8064812e-b6aa-4f56-81c9-16154c00abad/ovnkube-controller/3.log" Oct 13 17:25:43 crc kubenswrapper[4720]: I1013 17:25:43.666005 4720 scope.go:117] "RemoveContainer" containerID="c62dd5ae9eb692f394556f2d0b70bf14a7c99fe714ad7084a5d08b324a4435de" Oct 13 17:25:43 crc kubenswrapper[4720]: E1013 17:25:43.666154 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-pn6lz_openshift-ovn-kubernetes(8064812e-b6aa-4f56-81c9-16154c00abad)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" podUID="8064812e-b6aa-4f56-81c9-16154c00abad" Oct 13 17:25:43 crc kubenswrapper[4720]: I1013 17:25:43.685438 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7baf94d8-250c-4568-ab1c-a1429b8caf70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435ba16189d6d19892aaab158512c1a84a72103779e3cc4b2c634de344583c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e566f3b66d404dd8da93bba1745b9e67cd63532330591f5b735f6826c07eae2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":
\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ec64662f56c2609ebac1383a78840ba4e0d9fb1fe10e666169285d3852e6b40\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://32d8f079c2674e0c6bea1837755db23bc3205da6a01ac1896d6293226d9c6fb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:43Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:43 crc kubenswrapper[4720]: I1013 17:25:43.703002 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd8dd666-aa42-4598-bb52-c0cd9345384d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e96fe80c0a88f19c6c5705c7fd945c3282a104b7767db5c217b6364c045d649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f9d9053346b85f12f6cf781f1699dbf8aea670b7e1cc5f4fdc1ffac2c969712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d586c8b48f5d2ca87e3a758dd265611f11678f6fa8dd1118a859923c331c4f67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19c59ec68cb6306af37b90cd4583f5a1a266e93f77566d756223eecf7cdadb76\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19c59ec68cb6306af37b90cd4583f5a1a266e93f77566d756223eecf7cdadb76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:43Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:43 crc kubenswrapper[4720]: I1013 17:25:43.719599 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:43Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:43 crc kubenswrapper[4720]: I1013 17:25:43.728558 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:43 crc kubenswrapper[4720]: I1013 17:25:43.728677 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:43 crc kubenswrapper[4720]: I1013 17:25:43.728744 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:43 crc kubenswrapper[4720]: I1013 17:25:43.728780 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:43 crc kubenswrapper[4720]: I1013 17:25:43.728843 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:43Z","lastTransitionTime":"2025-10-13T17:25:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:43 crc kubenswrapper[4720]: I1013 17:25:43.741262 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://705766b44d481300d0fd8cc654aa2dc9046733e5a71ad3a7ab789f85e737dcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a69b8825692cbbb7cce0a52b1ccb7b137ce6c9f8a8d788fff95bb7228cb40e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:43Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:43 crc kubenswrapper[4720]: I1013 17:25:43.760276 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f85e4aab30e7f49ef05e48cc22fc22e0291e32604a8f74ae1cc2ea57b5889e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:43Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:43 crc kubenswrapper[4720]: I1013 17:25:43.779285 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f856022da70859465022db433585bd8cad12f8cd2aeae7943491d3f7e5049256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:43Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:43 crc kubenswrapper[4720]: I1013 17:25:43.793245 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pmjlm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f85220d6-f940-4538-806a-2b26bacd7b09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6315723f6cd375a392c8c427a737021b7e01415c902514b09ee43407749caa34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7795z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:39Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pmjlm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:43Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:43 crc kubenswrapper[4720]: I1013 17:25:43.808349 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce442c80-fcde-4b79-b6f9-f8f25771dfd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10e7a946185d6502e37a1ab60ad1eca36cce991f230188d22b2d6e2ea554f920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4cjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://864e330267c8cec8487e1dcf4a50bbb2ba37d81d80361f0873e6bf69de1ed721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4cjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-htwnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:43Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:43 crc kubenswrapper[4720]: I1013 17:25:43.831008 4720 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5rgfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d7408e5-b529-4396-92b3-2fed275c3250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://439dea1ed9724a215fa1aa1f4a4bd84a4bb998f2f86814f5f4eb46989242f11c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c62b3a707852ca08b9d89356832535447d4f46ec09ab4a94cb00a86b1c1c5b03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c62b3a707852ca08b9d89356832535447d4f46ec09ab4a94cb00a86b1c1c5b03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb704e899e1b5a0193271209940163a17a1dcb93a2fc5d447beb8a081799897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb704e899e1b5a0193271209940163a17a1dcb93a2fc5d447beb8a081799897b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f42ff97f0a5fb40b52d5445a00eb8f70f8028610b76baf33117b425b821b5ec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f42ff97f0a5fb40b52d5445a00eb8f70f8028610b76baf33117b425b821b5ec2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://471bb77042e72bcac4bb28dd26f6151a7154bea0b3ec92272e405e9b22bc0170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://471bb77042e72bcac4bb28dd26f6151a7154bea0b3ec92272e405e9b22bc0170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20cb8981843cc39e8722f560f7a7e75483c93a30edffde9835533f8dc5b50587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20cb8981843cc39e8722f560f7a7e75483c93a30edffde9835533f8dc5b50587\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79851ed8c84027d1bb9eccb69bed004ae1cdbe934245c66325eafe42b12ea808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79851ed8c84027d1bb9eccb69bed004ae1cdbe934245c66325eafe42b12ea808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5rgfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:43Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:43 crc kubenswrapper[4720]: I1013 17:25:43.832980 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:43 crc kubenswrapper[4720]: I1013 17:25:43.833047 4720 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:43 crc kubenswrapper[4720]: I1013 17:25:43.833070 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:43 crc kubenswrapper[4720]: I1013 17:25:43.833098 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:43 crc kubenswrapper[4720]: I1013 17:25:43.833123 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:43Z","lastTransitionTime":"2025-10-13T17:25:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:43 crc kubenswrapper[4720]: I1013 17:25:43.847607 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c6ntg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxbvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxbvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c6ntg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:43Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:43 crc kubenswrapper[4720]: I1013 17:25:43.878909 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a1f1696-ed7f-4482-a300-ca25b68e09e3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2507a30017fbd9947ca456b84812f0b26baabf572768870f5a34b3c7699443cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc60672e282de4d3d6a2d8fd64306ec0fc0d032ade655f88afb0c0b9e0a8e589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9cf903094dae4e559bcef3e269240670b5d4693344d600779c394ce1e039b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06eea6556daedf913d87b8b0527b337070528b0
7cc026de878b4df332598a2d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b452d113d48729b6d4ab59f80138b15e7cb16cc16eef4ec9916901a067986d07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94a4c50dd473ea159477a3032070031c70d5a9217c054f826e3c0af6ae4df563\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94a4c50dd473ea159477a3032070031c70d5a9217c054f826e3c0af6ae4df563\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41221277cc046beb4578e4f6a0c7030df64342da1be667a920080a429836c657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41221277cc046beb4578e4f6a0c7030df64342da1be667a920080a429836c657\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d443225a5db6a200176bd16ba94dd92c1e81eef85d61cff623cfe7959077ef6\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d443225a5db6a200176bd16ba94dd92c1e81eef85d61cff623cfe7959077ef6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:43Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:43 crc kubenswrapper[4720]: I1013 17:25:43.895350 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00fc1050-d713-484e-899e-9bd4e5d7b250\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://591ebf9a13f38e2f458aa34584be8fabc9115335a04af437a2291e7980a903ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18dd
ea8ad4714addb2f8431d0802d32a36f3d823bb123d4546bc6de1a3a017a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18ddea8ad4714addb2f8431d0802d32a36f3d823bb123d4546bc6de1a3a017a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:43Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:43 crc kubenswrapper[4720]: I1013 17:25:43.913762 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:43Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:43 crc kubenswrapper[4720]: I1013 17:25:43.929103 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:43Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:43 crc kubenswrapper[4720]: I1013 17:25:43.935748 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:43 crc kubenswrapper[4720]: I1013 17:25:43.935824 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:43 crc kubenswrapper[4720]: I1013 17:25:43.935840 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:43 crc kubenswrapper[4720]: I1013 17:25:43.935866 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:43 crc kubenswrapper[4720]: I1013 17:25:43.935882 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:43Z","lastTransitionTime":"2025-10-13T17:25:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:43 crc kubenswrapper[4720]: I1013 17:25:43.946916 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dnjrc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e6efca3-dcc8-488e-8fb1-e14ee0396158\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0f539bcc67139c5b3d51ab01a3114afc20fc44cf5e286dce2b4cad0ea0629c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g676g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c792a043326901c5cbcae8c96b168aee71b56f6e91d0edfc7f58abb5da9c9c28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g676g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dnjrc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:43Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:43 crc kubenswrapper[4720]: I1013 17:25:43.962310 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea323a1a-c607-40cf-b61c-b7ff03399c45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88b8a46e2e2c2342bd22a4a9b42cadd0188ec83f23815773f6ffb46be79fdf83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7d875b0d7eb9f13bc697597a33f7196598ae679980b13842e92acf20477526f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e18d0e9df43d366a6e7088cc3d5bdc794882828f60f008eaf29e788e0141ac2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef2ba92cea782dd4fd31a511e1086777b58c00340014dff74ff6615b07bd10c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7977e20d8558810fb9f8f827830d0378de7d15c71340d396a95b506902c0962\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-13T17:24:34Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1013 17:24:28.736714 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1013 17:24:28.738546 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445610647/tls.crt::/tmp/serving-cert-445610647/tls.key\\\\\\\"\\\\nI1013 17:24:34.860768 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1013 17:24:34.864611 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1013 17:24:34.864632 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1013 17:24:34.864654 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1013 17:24:34.864660 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1013 17:24:34.871230 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1013 17:24:34.871248 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1013 17:24:34.871273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 17:24:34.871290 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 17:24:34.871305 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1013 17:24:34.871314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1013 17:24:34.871324 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1013 17:24:34.871335 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1013 17:24:34.872239 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd536f783e3f8f974f25772994c3d00a01c2ba66997a6e06236f1b83b890f8f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00ad2086cbe25df22e52552acfb410e252eb17a610267b47eb7cfc5dced1c9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00ad2086cbe25df22e52552acfb410e252eb17a610267b47eb7cfc5dced1c9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:43Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:43 crc kubenswrapper[4720]: I1013 17:25:43.981373 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lxmjt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b45ec2d-5bea-4007-a49f-224a866f93eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://835c6ec8dc7ae3785a23ae45e5c9dc4b3bcc24428ca4c3865a6dd790d5956e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c00868f87f1a3020a1db7e50377bbe90203d6e93799c8ec9f3100dd52a6c0cf6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-13T17:25:27Z\\\",\\\"message\\\":\\\"2025-10-13T17:24:42+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d76d90a0-c554-4f54-9b32-3c7f604179f6\\\\n2025-10-13T17:24:42+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d76d90a0-c554-4f54-9b32-3c7f604179f6 to /host/opt/cni/bin/\\\\n2025-10-13T17:24:42Z [verbose] multus-daemon started\\\\n2025-10-13T17:24:42Z [verbose] Readiness Indicator file check\\\\n2025-10-13T17:25:27Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xsrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lxmjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:43Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:44 crc kubenswrapper[4720]: I1013 17:25:44.011050 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8064812e-b6aa-4f56-81c9-16154c00abad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c28fde07efa17d20ddecd1c8a0da09a6fa09e98a90b14718565c00aef9c0d25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bbd41d9b0518c8b060b1d01c335b6c9923e7196624fbd49326133faaa2e36ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd8a4dfb390a1643edb4eb4bd01970eaffd8dab0f748effac9e7c691ef4a3ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://013dfade47eccae4339fca0d379d914f63502613f87f1bc621113f7008634475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69d511a66a84fb0fa90f426bd4609c2b3ad915571802c6c024e0370d6e1683a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1aefa46ee989416f0e4fd05f6f12031da637a354fadd7f6caef75ddc2c74ddf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c62dd5ae9eb692f394556f2d0b70bf14a7c99fe714ad7084a5d08b324a4435de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c62dd5ae9eb692f394556f2d0b70bf14a7c99fe714ad7084a5d08b324a4435de\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-13T17:25:42Z\\\",\\\"message\\\":\\\"andler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1013 17:25:42.123785 6825 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1013 17:25:42.123794 6825 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1013 17:25:42.123798 6825 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1013 17:25:42.123803 6825 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1013 17:25:42.123823 6825 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1013 17:25:42.123830 6825 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1013 17:25:42.123842 6825 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1013 17:25:42.123855 6825 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1013 17:25:42.123862 6825 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1013 17:25:42.123869 6825 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1013 17:25:42.123884 6825 handler.go:208] Removed *v1.Node event handler 2\\\\nI1013 17:25:42.123897 6825 factory.go:656] Stopping watch factory\\\\nI1013 17:25:42.123912 6825 ovnkube.go:599] Stopped ovnkube\\\\nI1013 17:25:42.123944 6825 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1013 17:25:42.123954 6825 handler.go:208] Removed *v1.Node event handler 7\\\\nF1013 17:25:42.124279 6825 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T17:25:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pn6lz_openshift-ovn-kubernetes(8064812e-b6aa-4f56-81c9-16154c00abad)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34b1fffbebfbd3599bed22fe3fe80129f5b704560c54d09e9ab5e9ea8bba4513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:44Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:44 crc kubenswrapper[4720]: I1013 17:25:44.028407 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bvtrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b35c333b-0d4e-4d9a-a8fe-a1c6f5fbbf75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03411ae3786161605ed0cb08a21661c7c7a4cc93f91320586547b8ffab3e90ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g97k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bvtrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:44Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:44 crc kubenswrapper[4720]: I1013 17:25:44.038621 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:44 crc kubenswrapper[4720]: I1013 17:25:44.038694 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:44 crc kubenswrapper[4720]: I1013 17:25:44.038710 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:44 crc kubenswrapper[4720]: I1013 17:25:44.038734 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:44 crc kubenswrapper[4720]: I1013 17:25:44.038750 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:44Z","lastTransitionTime":"2025-10-13T17:25:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:44 crc kubenswrapper[4720]: I1013 17:25:44.141327 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:44 crc kubenswrapper[4720]: I1013 17:25:44.141375 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:44 crc kubenswrapper[4720]: I1013 17:25:44.141385 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:44 crc kubenswrapper[4720]: I1013 17:25:44.141400 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:44 crc kubenswrapper[4720]: I1013 17:25:44.141410 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:44Z","lastTransitionTime":"2025-10-13T17:25:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:44 crc kubenswrapper[4720]: I1013 17:25:44.167263 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c6ntg" Oct 13 17:25:44 crc kubenswrapper[4720]: E1013 17:25:44.167475 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c6ntg" podUID="c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61" Oct 13 17:25:44 crc kubenswrapper[4720]: I1013 17:25:44.243552 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:44 crc kubenswrapper[4720]: I1013 17:25:44.243612 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:44 crc kubenswrapper[4720]: I1013 17:25:44.243621 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:44 crc kubenswrapper[4720]: I1013 17:25:44.243637 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:44 crc kubenswrapper[4720]: I1013 17:25:44.243648 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:44Z","lastTransitionTime":"2025-10-13T17:25:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:44 crc kubenswrapper[4720]: I1013 17:25:44.346443 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:44 crc kubenswrapper[4720]: I1013 17:25:44.346487 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:44 crc kubenswrapper[4720]: I1013 17:25:44.346496 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:44 crc kubenswrapper[4720]: I1013 17:25:44.346511 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:44 crc kubenswrapper[4720]: I1013 17:25:44.346521 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:44Z","lastTransitionTime":"2025-10-13T17:25:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:44 crc kubenswrapper[4720]: I1013 17:25:44.449616 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:44 crc kubenswrapper[4720]: I1013 17:25:44.449704 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:44 crc kubenswrapper[4720]: I1013 17:25:44.449722 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:44 crc kubenswrapper[4720]: I1013 17:25:44.449744 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:44 crc kubenswrapper[4720]: I1013 17:25:44.449760 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:44Z","lastTransitionTime":"2025-10-13T17:25:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:44 crc kubenswrapper[4720]: I1013 17:25:44.552640 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:44 crc kubenswrapper[4720]: I1013 17:25:44.552696 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:44 crc kubenswrapper[4720]: I1013 17:25:44.552713 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:44 crc kubenswrapper[4720]: I1013 17:25:44.552736 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:44 crc kubenswrapper[4720]: I1013 17:25:44.552752 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:44Z","lastTransitionTime":"2025-10-13T17:25:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:44 crc kubenswrapper[4720]: I1013 17:25:44.656400 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:44 crc kubenswrapper[4720]: I1013 17:25:44.656480 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:44 crc kubenswrapper[4720]: I1013 17:25:44.656504 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:44 crc kubenswrapper[4720]: I1013 17:25:44.656536 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:44 crc kubenswrapper[4720]: I1013 17:25:44.656560 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:44Z","lastTransitionTime":"2025-10-13T17:25:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:44 crc kubenswrapper[4720]: I1013 17:25:44.759325 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:44 crc kubenswrapper[4720]: I1013 17:25:44.759373 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:44 crc kubenswrapper[4720]: I1013 17:25:44.759384 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:44 crc kubenswrapper[4720]: I1013 17:25:44.759399 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:44 crc kubenswrapper[4720]: I1013 17:25:44.759411 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:44Z","lastTransitionTime":"2025-10-13T17:25:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:44 crc kubenswrapper[4720]: I1013 17:25:44.862483 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:44 crc kubenswrapper[4720]: I1013 17:25:44.862546 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:44 crc kubenswrapper[4720]: I1013 17:25:44.862562 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:44 crc kubenswrapper[4720]: I1013 17:25:44.862587 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:44 crc kubenswrapper[4720]: I1013 17:25:44.862604 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:44Z","lastTransitionTime":"2025-10-13T17:25:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:44 crc kubenswrapper[4720]: I1013 17:25:44.965793 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:44 crc kubenswrapper[4720]: I1013 17:25:44.965861 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:44 crc kubenswrapper[4720]: I1013 17:25:44.965878 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:44 crc kubenswrapper[4720]: I1013 17:25:44.965903 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:44 crc kubenswrapper[4720]: I1013 17:25:44.965920 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:44Z","lastTransitionTime":"2025-10-13T17:25:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:45 crc kubenswrapper[4720]: I1013 17:25:45.069067 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:45 crc kubenswrapper[4720]: I1013 17:25:45.069129 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:45 crc kubenswrapper[4720]: I1013 17:25:45.069140 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:45 crc kubenswrapper[4720]: I1013 17:25:45.069157 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:45 crc kubenswrapper[4720]: I1013 17:25:45.069167 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:45Z","lastTransitionTime":"2025-10-13T17:25:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:45 crc kubenswrapper[4720]: I1013 17:25:45.167150 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 17:25:45 crc kubenswrapper[4720]: I1013 17:25:45.167247 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 17:25:45 crc kubenswrapper[4720]: I1013 17:25:45.167355 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 17:25:45 crc kubenswrapper[4720]: E1013 17:25:45.167427 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 13 17:25:45 crc kubenswrapper[4720]: E1013 17:25:45.167609 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 13 17:25:45 crc kubenswrapper[4720]: E1013 17:25:45.167970 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 13 17:25:45 crc kubenswrapper[4720]: I1013 17:25:45.170981 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:45 crc kubenswrapper[4720]: I1013 17:25:45.171044 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:45 crc kubenswrapper[4720]: I1013 17:25:45.171082 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:45 crc kubenswrapper[4720]: I1013 17:25:45.171113 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:45 crc kubenswrapper[4720]: I1013 17:25:45.171138 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:45Z","lastTransitionTime":"2025-10-13T17:25:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:45 crc kubenswrapper[4720]: I1013 17:25:45.181524 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bvtrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b35c333b-0d4e-4d9a-a8fe-a1c6f5fbbf75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03411ae3786161605ed0cb08a21661c7c7a4cc93f91320586547b8ffab3e90ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g97k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.1
1\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bvtrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:45Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:45 crc kubenswrapper[4720]: I1013 17:25:45.201392 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea323a1a-c607-40cf-b61c-b7ff03399c45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88b8a46e2e2c2342bd22a4a9b42cadd0188ec83f23815773f6ffb46be79fdf83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7d875b0d7eb9f13bc697597a33f7196598ae679980b13842e92acf20477526f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e18d0e9df43d366a6e7088cc3d5bdc794882828f60f008eaf29e788e0141ac2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-oper
ator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef2ba92cea782dd4fd31a511e1086777b58c00340014dff74ff6615b07bd10c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7977e20d8558810fb9f8f827830d0378de7d15c71340d396a95b506902c0962\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-13T17:24:34Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1013 17:24:28.736714 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1013 17:24:28.738546 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445610647/tls.crt::/tmp/serving-cert-445610647/tls.key\\\\\\\"\\\\nI1013 17:24:34.860768 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1013 17:24:34.864611 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1013 17:24:34.864632 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1013 17:24:34.864654 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1013 17:24:34.864660 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1013 17:24:34.871230 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1013 17:24:34.871248 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1013 17:24:34.871273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 17:24:34.871290 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 17:24:34.871305 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1013 17:24:34.871314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1013 17:24:34.871324 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1013 17:24:34.871335 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1013 17:24:34.872239 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd536f783e3f8f974f25772994c3d00a01c2ba66997a6e06236f1b83b890f8f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00ad2086cbe25df22e52552acfb410e252eb17a610267b47eb7cfc5dced1c9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00ad2086cbe25df22e52552acfb410e252eb17a610267b47eb7cfc5dced1c9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:45Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:45 crc kubenswrapper[4720]: I1013 17:25:45.215520 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lxmjt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b45ec2d-5bea-4007-a49f-224a866f93eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://835c6ec8dc7ae3785a23ae45e5c9dc4b3bcc24428ca4c3865a6dd790d5956e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c00868f87f1a3020a1db7e50377bbe90203d6e93799c8ec9f3100dd52a6c0cf6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-13T17:25:27Z\\\",\\\"message\\\":\\\"2025-10-13T17:24:42+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d76d90a0-c554-4f54-9b32-3c7f604179f6\\\\n2025-10-13T17:24:42+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d76d90a0-c554-4f54-9b32-3c7f604179f6 to /host/opt/cni/bin/\\\\n2025-10-13T17:24:42Z [verbose] multus-daemon started\\\\n2025-10-13T17:24:42Z [verbose] Readiness Indicator file check\\\\n2025-10-13T17:25:27Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xsrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lxmjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:45Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:45 crc kubenswrapper[4720]: I1013 17:25:45.251082 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8064812e-b6aa-4f56-81c9-16154c00abad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c28fde07efa17d20ddecd1c8a0da09a6fa09e98a90b14718565c00aef9c0d25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bbd41d9b0518c8b060b1d01c335b6c9923e7196624fbd49326133faaa2e36ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd8a4dfb390a1643edb4eb4bd01970eaffd8dab0f748effac9e7c691ef4a3ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://013dfade47eccae4339fca0d379d914f63502613f87f1bc621113f7008634475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69d511a66a84fb0fa90f426bd4609c2b3ad915571802c6c024e0370d6e1683a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1aefa46ee989416f0e4fd05f6f12031da637a354fadd7f6caef75ddc2c74ddf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c62dd5ae9eb692f394556f2d0b70bf14a7c99fe714ad7084a5d08b324a4435de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c62dd5ae9eb692f394556f2d0b70bf14a7c99fe714ad7084a5d08b324a4435de\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-13T17:25:42Z\\\",\\\"message\\\":\\\"andler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1013 17:25:42.123785 6825 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1013 17:25:42.123794 6825 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1013 17:25:42.123798 6825 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1013 17:25:42.123803 6825 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1013 17:25:42.123823 6825 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1013 17:25:42.123830 6825 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1013 17:25:42.123842 6825 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1013 17:25:42.123855 6825 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1013 17:25:42.123862 6825 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1013 17:25:42.123869 6825 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1013 17:25:42.123884 6825 handler.go:208] Removed *v1.Node event handler 2\\\\nI1013 17:25:42.123897 6825 factory.go:656] Stopping watch factory\\\\nI1013 17:25:42.123912 6825 ovnkube.go:599] Stopped ovnkube\\\\nI1013 17:25:42.123944 6825 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1013 17:25:42.123954 6825 handler.go:208] Removed *v1.Node event handler 7\\\\nF1013 17:25:42.124279 6825 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T17:25:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pn6lz_openshift-ovn-kubernetes(8064812e-b6aa-4f56-81c9-16154c00abad)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34b1fffbebfbd3599bed22fe3fe80129f5b704560c54d09e9ab5e9ea8bba4513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:45Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:45 crc kubenswrapper[4720]: I1013 17:25:45.266241 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://705766b44d481300d0fd8cc654aa2dc9046733e5a71ad3a7ab789f85e737dcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a69b8825692cbbb7cce0a52b1ccb7b137ce6c9f8a8d788fff95bb7228cb40e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:45Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:45 crc kubenswrapper[4720]: I1013 17:25:45.275575 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:45 crc kubenswrapper[4720]: I1013 17:25:45.275616 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:45 crc kubenswrapper[4720]: I1013 17:25:45.275630 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:45 crc kubenswrapper[4720]: I1013 17:25:45.275649 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:45 crc kubenswrapper[4720]: I1013 17:25:45.275660 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:45Z","lastTransitionTime":"2025-10-13T17:25:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:45 crc kubenswrapper[4720]: I1013 17:25:45.281520 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f85e4aab30e7f49ef05e48cc22fc22e0291e32604a8f74ae1cc2ea57b5889e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:45Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:45 crc kubenswrapper[4720]: I1013 17:25:45.300945 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f856022da70859465022db433585bd8cad12f8cd2aeae7943491d3f7e5049256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:45Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:45 crc kubenswrapper[4720]: I1013 17:25:45.315783 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pmjlm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f85220d6-f940-4538-806a-2b26bacd7b09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6315723f6cd375a392c8c427a737021b7e01415c902514b09ee43407749caa34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7795z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:39Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pmjlm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:45Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:45 crc kubenswrapper[4720]: I1013 17:25:45.332925 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7baf94d8-250c-4568-ab1c-a1429b8caf70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435ba16189d6d19892aaab158512c1a84a72103779e3cc4b2c634de344583c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e566f3b66d404dd8da93bba1745b9e67cd63532330591f5b735f6826c07eae2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ec64662f56c2609ebac1383a78840ba4e0d9fb1fe10e666169285d3852e6b40\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://32d8f079c2674e0c6bea1837755db23bc3205da6a01ac1896d6293226d9c6fb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:45Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:45 crc kubenswrapper[4720]: I1013 17:25:45.351446 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd8dd666-aa42-4598-bb52-c0cd9345384d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e96fe80c0a88f19c6c5705c7fd945c3282a104b7767db5c217b6364c045d649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f9d9053346b85f12f6cf781f1699dbf8aea670b7e1cc5f4fdc1ffac2c969712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d586c8b48f5d2ca87e3a758dd265611f11678f6fa8dd1118a859923c331c4f67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19c59ec68cb6306af37b90cd4583f5a1a266e93f77566d756223eecf7cdadb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19c59ec68cb6306af37b90cd4583f5a1a266e93f77566d756223eecf7cdadb76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:45Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:45 crc kubenswrapper[4720]: I1013 17:25:45.372055 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:45Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:45 crc kubenswrapper[4720]: I1013 17:25:45.378168 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:45 crc kubenswrapper[4720]: I1013 17:25:45.378257 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:45 crc kubenswrapper[4720]: I1013 17:25:45.378277 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:45 crc kubenswrapper[4720]: I1013 17:25:45.378299 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:45 crc kubenswrapper[4720]: I1013 17:25:45.378315 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:45Z","lastTransitionTime":"2025-10-13T17:25:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:45 crc kubenswrapper[4720]: I1013 17:25:45.389365 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce442c80-fcde-4b79-b6f9-f8f25771dfd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10e7a946185d6502e37a1ab60ad1eca36cce991f230188d22b2d6e2ea554f920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4cjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://864e330267c8cec8487e1dcf4a50bbb2ba37d81d80361f0873e6bf69de1ed721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4cjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-htwnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:45Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:45 crc kubenswrapper[4720]: I1013 17:25:45.411763 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5rgfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d7408e5-b529-4396-92b3-2fed275c3250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://439dea1ed9724a215fa1aa1f4a4bd84a4bb998f2f86814f5f4eb46989242f11c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c62b3a707852ca08b9d89356832535447d4f46ec09ab4a94cb00a86b1c1c5b03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c62b3a707852ca08b9d89356832535447d4f46ec09ab4a94cb00a86b1c1c5b03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb704e899e1b5a0193271209940163a17a1dcb93a2fc5d447beb8a081799897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb704e899e1b5a0193271209940163a17a1dcb93a2fc5d447beb8a081799897b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f42ff97f0a5fb40b52d5445a00eb8f70f8028610b76baf33117b425b821b5ec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f42ff97f0a5fb40b52d5445a00eb8f70f8028610b76baf33117b425b821b5ec2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://471bb77042e72bcac4bb28dd26f6151a7154bea0b3ec92272e405e9b22bc0170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://471bb77042e72bcac4bb28dd26f6151a7154bea0b3ec92272e405e9b22bc0170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20cb8981843cc39e8722f560f7a7e75483c93a30edffde9835533f8dc5b50587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20cb8981843cc39e8722f560f7a7e75483c93a30edffde9835533f8dc5b50587\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79851ed8c84027d1bb9eccb69bed004ae1cdbe934245c66325eafe42b12ea808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79851ed8c84027d1bb9eccb69bed004ae1cdbe934245c66325eafe42b12ea808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5rgfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:45Z is after 
2025-08-24T17:21:41Z" Oct 13 17:25:45 crc kubenswrapper[4720]: I1013 17:25:45.426986 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c6ntg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxbvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxbvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-c6ntg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:45Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:45 crc kubenswrapper[4720]: I1013 17:25:45.444872 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:45Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:45 crc kubenswrapper[4720]: I1013 17:25:45.460929 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dnjrc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e6efca3-dcc8-488e-8fb1-e14ee0396158\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0f539bcc67139c5b3d51ab01a3114afc20fc44cf5e286dce2b4cad0ea0629c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g676g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c792a043326901c5cbcae8c96b168aee71b56f6e91d0edfc7f58abb5da9c9c28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g676g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dnjrc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:45Z is after 2025-08-24T17:21:41Z" Oct 13 
17:25:45 crc kubenswrapper[4720]: I1013 17:25:45.481170 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:45 crc kubenswrapper[4720]: I1013 17:25:45.481230 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:45 crc kubenswrapper[4720]: I1013 17:25:45.481243 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:45 crc kubenswrapper[4720]: I1013 17:25:45.481262 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:45 crc kubenswrapper[4720]: I1013 17:25:45.481277 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:45Z","lastTransitionTime":"2025-10-13T17:25:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:45 crc kubenswrapper[4720]: I1013 17:25:45.492576 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a1f1696-ed7f-4482-a300-ca25b68e09e3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2507a30017fbd9947ca456b84812f0b26baabf572768870f5a34b3c7699443cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc60672e282de4d3d6a2d8fd64306ec0fc0d032ade655f88afb0c0b9e0a8e589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9009
2272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9cf903094dae4e559bcef3e269240670b5d4693344d600779c394ce1e039b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06eea6556daedf913d87b8b0527b337070528b07cc026de878b4df332598a2d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b452d113d48729b6d4ab59f80138b15e7cb16cc16eef4ec9916901a067986d07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94a4c50dd473ea159477a3032070031c70d5a9217c054f826e3c0af6ae4df563\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94a4c50dd473ea159477a3032070031c70d5a9217c054f826e3c0af6ae4df563\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41221277cc046beb4578e4f6a0c7030df64342da1be667a920080a429836c657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41221277cc046beb4578e4f6a0c7030df64342da1be667a920080a429836c657\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d443225a5db6a200176bd16ba94dd92c1e81eef85d61cff623cfe7959077ef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d443225a5db6a200176bd16ba94dd92c1e81eef85d61cff623cfe7959077ef6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:45Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:45 crc kubenswrapper[4720]: I1013 17:25:45.508711 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00fc1050-d713-484e-899e-9bd4e5d7b250\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://591ebf9a13f38e2f458aa34584be8fabc9115335a04af437a2291e7980a903ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18ddea8ad4714addb2f8431d0802d32a36f3d823bb123d4546bc6de1a3a017a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18ddea8ad4714addb2f8431d0802d32a36f3d823bb123d4546bc6de1a3a017a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:45Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:45 crc kubenswrapper[4720]: I1013 17:25:45.525770 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:45Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:45 crc kubenswrapper[4720]: I1013 17:25:45.583685 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:45 crc kubenswrapper[4720]: I1013 17:25:45.583748 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:45 crc kubenswrapper[4720]: I1013 17:25:45.583760 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:45 crc kubenswrapper[4720]: I1013 17:25:45.583775 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:45 crc kubenswrapper[4720]: I1013 17:25:45.583787 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:45Z","lastTransitionTime":"2025-10-13T17:25:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:45 crc kubenswrapper[4720]: I1013 17:25:45.686084 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:45 crc kubenswrapper[4720]: I1013 17:25:45.686153 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:45 crc kubenswrapper[4720]: I1013 17:25:45.686171 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:45 crc kubenswrapper[4720]: I1013 17:25:45.686249 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:45 crc kubenswrapper[4720]: I1013 17:25:45.686269 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:45Z","lastTransitionTime":"2025-10-13T17:25:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:45 crc kubenswrapper[4720]: I1013 17:25:45.789409 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:45 crc kubenswrapper[4720]: I1013 17:25:45.789450 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:45 crc kubenswrapper[4720]: I1013 17:25:45.789459 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:45 crc kubenswrapper[4720]: I1013 17:25:45.789475 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:45 crc kubenswrapper[4720]: I1013 17:25:45.789487 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:45Z","lastTransitionTime":"2025-10-13T17:25:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:45 crc kubenswrapper[4720]: I1013 17:25:45.891873 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:45 crc kubenswrapper[4720]: I1013 17:25:45.891909 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:45 crc kubenswrapper[4720]: I1013 17:25:45.891917 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:45 crc kubenswrapper[4720]: I1013 17:25:45.891931 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:45 crc kubenswrapper[4720]: I1013 17:25:45.891940 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:45Z","lastTransitionTime":"2025-10-13T17:25:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:45 crc kubenswrapper[4720]: I1013 17:25:45.994858 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:45 crc kubenswrapper[4720]: I1013 17:25:45.994909 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:45 crc kubenswrapper[4720]: I1013 17:25:45.994927 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:45 crc kubenswrapper[4720]: I1013 17:25:45.994948 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:45 crc kubenswrapper[4720]: I1013 17:25:45.994965 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:45Z","lastTransitionTime":"2025-10-13T17:25:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:46 crc kubenswrapper[4720]: I1013 17:25:46.097053 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:46 crc kubenswrapper[4720]: I1013 17:25:46.097105 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:46 crc kubenswrapper[4720]: I1013 17:25:46.097121 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:46 crc kubenswrapper[4720]: I1013 17:25:46.097144 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:46 crc kubenswrapper[4720]: I1013 17:25:46.097160 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:46Z","lastTransitionTime":"2025-10-13T17:25:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:46 crc kubenswrapper[4720]: I1013 17:25:46.167516 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c6ntg" Oct 13 17:25:46 crc kubenswrapper[4720]: E1013 17:25:46.167698 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-c6ntg" podUID="c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61" Oct 13 17:25:46 crc kubenswrapper[4720]: I1013 17:25:46.199689 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:46 crc kubenswrapper[4720]: I1013 17:25:46.199761 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:46 crc kubenswrapper[4720]: I1013 17:25:46.199781 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:46 crc kubenswrapper[4720]: I1013 17:25:46.199803 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:46 crc kubenswrapper[4720]: I1013 17:25:46.199819 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:46Z","lastTransitionTime":"2025-10-13T17:25:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:46 crc kubenswrapper[4720]: I1013 17:25:46.303259 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:46 crc kubenswrapper[4720]: I1013 17:25:46.303347 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:46 crc kubenswrapper[4720]: I1013 17:25:46.303376 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:46 crc kubenswrapper[4720]: I1013 17:25:46.303402 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:46 crc kubenswrapper[4720]: I1013 17:25:46.303420 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:46Z","lastTransitionTime":"2025-10-13T17:25:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:46 crc kubenswrapper[4720]: I1013 17:25:46.405931 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:46 crc kubenswrapper[4720]: I1013 17:25:46.405973 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:46 crc kubenswrapper[4720]: I1013 17:25:46.405982 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:46 crc kubenswrapper[4720]: I1013 17:25:46.405998 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:46 crc kubenswrapper[4720]: I1013 17:25:46.406008 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:46Z","lastTransitionTime":"2025-10-13T17:25:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:46 crc kubenswrapper[4720]: I1013 17:25:46.508487 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:46 crc kubenswrapper[4720]: I1013 17:25:46.508537 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:46 crc kubenswrapper[4720]: I1013 17:25:46.508553 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:46 crc kubenswrapper[4720]: I1013 17:25:46.508580 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:46 crc kubenswrapper[4720]: I1013 17:25:46.508597 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:46Z","lastTransitionTime":"2025-10-13T17:25:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:46 crc kubenswrapper[4720]: I1013 17:25:46.612115 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:46 crc kubenswrapper[4720]: I1013 17:25:46.612181 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:46 crc kubenswrapper[4720]: I1013 17:25:46.612235 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:46 crc kubenswrapper[4720]: I1013 17:25:46.612264 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:46 crc kubenswrapper[4720]: I1013 17:25:46.612285 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:46Z","lastTransitionTime":"2025-10-13T17:25:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:46 crc kubenswrapper[4720]: I1013 17:25:46.715362 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:46 crc kubenswrapper[4720]: I1013 17:25:46.715413 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:46 crc kubenswrapper[4720]: I1013 17:25:46.715436 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:46 crc kubenswrapper[4720]: I1013 17:25:46.715466 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:46 crc kubenswrapper[4720]: I1013 17:25:46.715490 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:46Z","lastTransitionTime":"2025-10-13T17:25:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:46 crc kubenswrapper[4720]: I1013 17:25:46.819079 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:46 crc kubenswrapper[4720]: I1013 17:25:46.819111 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:46 crc kubenswrapper[4720]: I1013 17:25:46.819121 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:46 crc kubenswrapper[4720]: I1013 17:25:46.819134 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:46 crc kubenswrapper[4720]: I1013 17:25:46.819144 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:46Z","lastTransitionTime":"2025-10-13T17:25:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:46 crc kubenswrapper[4720]: I1013 17:25:46.921639 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:46 crc kubenswrapper[4720]: I1013 17:25:46.921675 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:46 crc kubenswrapper[4720]: I1013 17:25:46.921684 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:46 crc kubenswrapper[4720]: I1013 17:25:46.921697 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:46 crc kubenswrapper[4720]: I1013 17:25:46.921708 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:46Z","lastTransitionTime":"2025-10-13T17:25:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:47 crc kubenswrapper[4720]: I1013 17:25:47.023793 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:47 crc kubenswrapper[4720]: I1013 17:25:47.023837 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:47 crc kubenswrapper[4720]: I1013 17:25:47.023846 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:47 crc kubenswrapper[4720]: I1013 17:25:47.023860 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:47 crc kubenswrapper[4720]: I1013 17:25:47.023871 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:47Z","lastTransitionTime":"2025-10-13T17:25:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:47 crc kubenswrapper[4720]: I1013 17:25:47.126884 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:47 crc kubenswrapper[4720]: I1013 17:25:47.126957 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:47 crc kubenswrapper[4720]: I1013 17:25:47.126978 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:47 crc kubenswrapper[4720]: I1013 17:25:47.127008 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:47 crc kubenswrapper[4720]: I1013 17:25:47.127033 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:47Z","lastTransitionTime":"2025-10-13T17:25:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:47 crc kubenswrapper[4720]: I1013 17:25:47.167803 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 17:25:47 crc kubenswrapper[4720]: I1013 17:25:47.167938 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 17:25:47 crc kubenswrapper[4720]: E1013 17:25:47.168126 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 13 17:25:47 crc kubenswrapper[4720]: I1013 17:25:47.168171 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 17:25:47 crc kubenswrapper[4720]: E1013 17:25:47.168422 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 13 17:25:47 crc kubenswrapper[4720]: E1013 17:25:47.168482 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 13 17:25:47 crc kubenswrapper[4720]: I1013 17:25:47.230013 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:47 crc kubenswrapper[4720]: I1013 17:25:47.230099 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:47 crc kubenswrapper[4720]: I1013 17:25:47.230123 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:47 crc kubenswrapper[4720]: I1013 17:25:47.230152 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:47 crc kubenswrapper[4720]: I1013 17:25:47.230173 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:47Z","lastTransitionTime":"2025-10-13T17:25:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:47 crc kubenswrapper[4720]: I1013 17:25:47.332239 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:47 crc kubenswrapper[4720]: I1013 17:25:47.332304 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:47 crc kubenswrapper[4720]: I1013 17:25:47.332323 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:47 crc kubenswrapper[4720]: I1013 17:25:47.332347 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:47 crc kubenswrapper[4720]: I1013 17:25:47.332366 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:47Z","lastTransitionTime":"2025-10-13T17:25:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:47 crc kubenswrapper[4720]: I1013 17:25:47.434355 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:47 crc kubenswrapper[4720]: I1013 17:25:47.434414 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:47 crc kubenswrapper[4720]: I1013 17:25:47.434433 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:47 crc kubenswrapper[4720]: I1013 17:25:47.434454 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:47 crc kubenswrapper[4720]: I1013 17:25:47.434470 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:47Z","lastTransitionTime":"2025-10-13T17:25:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:47 crc kubenswrapper[4720]: I1013 17:25:47.536448 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:47 crc kubenswrapper[4720]: I1013 17:25:47.536511 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:47 crc kubenswrapper[4720]: I1013 17:25:47.536522 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:47 crc kubenswrapper[4720]: I1013 17:25:47.536534 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:47 crc kubenswrapper[4720]: I1013 17:25:47.536544 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:47Z","lastTransitionTime":"2025-10-13T17:25:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:47 crc kubenswrapper[4720]: I1013 17:25:47.639211 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:47 crc kubenswrapper[4720]: I1013 17:25:47.639250 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:47 crc kubenswrapper[4720]: I1013 17:25:47.639259 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:47 crc kubenswrapper[4720]: I1013 17:25:47.639278 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:47 crc kubenswrapper[4720]: I1013 17:25:47.639289 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:47Z","lastTransitionTime":"2025-10-13T17:25:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:47 crc kubenswrapper[4720]: I1013 17:25:47.741343 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:47 crc kubenswrapper[4720]: I1013 17:25:47.741389 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:47 crc kubenswrapper[4720]: I1013 17:25:47.741405 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:47 crc kubenswrapper[4720]: I1013 17:25:47.741430 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:47 crc kubenswrapper[4720]: I1013 17:25:47.741452 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:47Z","lastTransitionTime":"2025-10-13T17:25:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:47 crc kubenswrapper[4720]: I1013 17:25:47.844271 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:47 crc kubenswrapper[4720]: I1013 17:25:47.844330 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:47 crc kubenswrapper[4720]: I1013 17:25:47.844341 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:47 crc kubenswrapper[4720]: I1013 17:25:47.844359 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:47 crc kubenswrapper[4720]: I1013 17:25:47.844374 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:47Z","lastTransitionTime":"2025-10-13T17:25:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:47 crc kubenswrapper[4720]: I1013 17:25:47.947318 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:47 crc kubenswrapper[4720]: I1013 17:25:47.947364 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:47 crc kubenswrapper[4720]: I1013 17:25:47.947376 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:47 crc kubenswrapper[4720]: I1013 17:25:47.947394 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:47 crc kubenswrapper[4720]: I1013 17:25:47.947408 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:47Z","lastTransitionTime":"2025-10-13T17:25:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:48 crc kubenswrapper[4720]: I1013 17:25:48.049999 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:48 crc kubenswrapper[4720]: I1013 17:25:48.050047 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:48 crc kubenswrapper[4720]: I1013 17:25:48.050068 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:48 crc kubenswrapper[4720]: I1013 17:25:48.050095 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:48 crc kubenswrapper[4720]: I1013 17:25:48.050115 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:48Z","lastTransitionTime":"2025-10-13T17:25:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:48 crc kubenswrapper[4720]: I1013 17:25:48.153062 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:48 crc kubenswrapper[4720]: I1013 17:25:48.153337 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:48 crc kubenswrapper[4720]: I1013 17:25:48.153434 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:48 crc kubenswrapper[4720]: I1013 17:25:48.153556 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:48 crc kubenswrapper[4720]: I1013 17:25:48.153642 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:48Z","lastTransitionTime":"2025-10-13T17:25:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:48 crc kubenswrapper[4720]: I1013 17:25:48.167774 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c6ntg" Oct 13 17:25:48 crc kubenswrapper[4720]: E1013 17:25:48.167920 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-c6ntg" podUID="c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61" Oct 13 17:25:48 crc kubenswrapper[4720]: I1013 17:25:48.255408 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:48 crc kubenswrapper[4720]: I1013 17:25:48.255452 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:48 crc kubenswrapper[4720]: I1013 17:25:48.255463 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:48 crc kubenswrapper[4720]: I1013 17:25:48.255479 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:48 crc kubenswrapper[4720]: I1013 17:25:48.255490 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:48Z","lastTransitionTime":"2025-10-13T17:25:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:48 crc kubenswrapper[4720]: I1013 17:25:48.358398 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:48 crc kubenswrapper[4720]: I1013 17:25:48.358452 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:48 crc kubenswrapper[4720]: I1013 17:25:48.358463 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:48 crc kubenswrapper[4720]: I1013 17:25:48.358479 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:48 crc kubenswrapper[4720]: I1013 17:25:48.358491 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:48Z","lastTransitionTime":"2025-10-13T17:25:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:48 crc kubenswrapper[4720]: I1013 17:25:48.460625 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:48 crc kubenswrapper[4720]: I1013 17:25:48.460695 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:48 crc kubenswrapper[4720]: I1013 17:25:48.460712 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:48 crc kubenswrapper[4720]: I1013 17:25:48.460736 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:48 crc kubenswrapper[4720]: I1013 17:25:48.460754 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:48Z","lastTransitionTime":"2025-10-13T17:25:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:48 crc kubenswrapper[4720]: I1013 17:25:48.563790 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:48 crc kubenswrapper[4720]: I1013 17:25:48.564092 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:48 crc kubenswrapper[4720]: I1013 17:25:48.564276 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:48 crc kubenswrapper[4720]: I1013 17:25:48.564453 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:48 crc kubenswrapper[4720]: I1013 17:25:48.564716 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:48Z","lastTransitionTime":"2025-10-13T17:25:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:48 crc kubenswrapper[4720]: I1013 17:25:48.667526 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:48 crc kubenswrapper[4720]: I1013 17:25:48.667949 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:48 crc kubenswrapper[4720]: I1013 17:25:48.668272 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:48 crc kubenswrapper[4720]: I1013 17:25:48.668685 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:48 crc kubenswrapper[4720]: I1013 17:25:48.668962 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:48Z","lastTransitionTime":"2025-10-13T17:25:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:48 crc kubenswrapper[4720]: I1013 17:25:48.772485 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:48 crc kubenswrapper[4720]: I1013 17:25:48.772840 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:48 crc kubenswrapper[4720]: I1013 17:25:48.772989 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:48 crc kubenswrapper[4720]: I1013 17:25:48.773123 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:48 crc kubenswrapper[4720]: I1013 17:25:48.773305 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:48Z","lastTransitionTime":"2025-10-13T17:25:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:48 crc kubenswrapper[4720]: I1013 17:25:48.876300 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:48 crc kubenswrapper[4720]: I1013 17:25:48.876339 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:48 crc kubenswrapper[4720]: I1013 17:25:48.876348 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:48 crc kubenswrapper[4720]: I1013 17:25:48.876366 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:48 crc kubenswrapper[4720]: I1013 17:25:48.876377 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:48Z","lastTransitionTime":"2025-10-13T17:25:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:48 crc kubenswrapper[4720]: I1013 17:25:48.978669 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:48 crc kubenswrapper[4720]: I1013 17:25:48.978724 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:48 crc kubenswrapper[4720]: I1013 17:25:48.978733 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:48 crc kubenswrapper[4720]: I1013 17:25:48.978749 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:48 crc kubenswrapper[4720]: I1013 17:25:48.978759 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:48Z","lastTransitionTime":"2025-10-13T17:25:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:49 crc kubenswrapper[4720]: I1013 17:25:49.081039 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:49 crc kubenswrapper[4720]: I1013 17:25:49.081085 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:49 crc kubenswrapper[4720]: I1013 17:25:49.081095 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:49 crc kubenswrapper[4720]: I1013 17:25:49.081115 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:49 crc kubenswrapper[4720]: I1013 17:25:49.081126 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:49Z","lastTransitionTime":"2025-10-13T17:25:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:49 crc kubenswrapper[4720]: I1013 17:25:49.167977 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 17:25:49 crc kubenswrapper[4720]: I1013 17:25:49.168050 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 17:25:49 crc kubenswrapper[4720]: I1013 17:25:49.167997 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 17:25:49 crc kubenswrapper[4720]: E1013 17:25:49.168141 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 13 17:25:49 crc kubenswrapper[4720]: E1013 17:25:49.168333 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 13 17:25:49 crc kubenswrapper[4720]: E1013 17:25:49.168489 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 13 17:25:49 crc kubenswrapper[4720]: I1013 17:25:49.184182 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:49 crc kubenswrapper[4720]: I1013 17:25:49.184233 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:49 crc kubenswrapper[4720]: I1013 17:25:49.184243 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:49 crc kubenswrapper[4720]: I1013 17:25:49.184257 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:49 crc kubenswrapper[4720]: I1013 17:25:49.184265 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:49Z","lastTransitionTime":"2025-10-13T17:25:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:49 crc kubenswrapper[4720]: I1013 17:25:49.287725 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:49 crc kubenswrapper[4720]: I1013 17:25:49.287773 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:49 crc kubenswrapper[4720]: I1013 17:25:49.287784 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:49 crc kubenswrapper[4720]: I1013 17:25:49.287801 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:49 crc kubenswrapper[4720]: I1013 17:25:49.287812 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:49Z","lastTransitionTime":"2025-10-13T17:25:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:49 crc kubenswrapper[4720]: I1013 17:25:49.390876 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:49 crc kubenswrapper[4720]: I1013 17:25:49.390962 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:49 crc kubenswrapper[4720]: I1013 17:25:49.390982 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:49 crc kubenswrapper[4720]: I1013 17:25:49.391004 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:49 crc kubenswrapper[4720]: I1013 17:25:49.391024 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:49Z","lastTransitionTime":"2025-10-13T17:25:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:49 crc kubenswrapper[4720]: I1013 17:25:49.493530 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:49 crc kubenswrapper[4720]: I1013 17:25:49.493590 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:49 crc kubenswrapper[4720]: I1013 17:25:49.493609 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:49 crc kubenswrapper[4720]: I1013 17:25:49.493632 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:49 crc kubenswrapper[4720]: I1013 17:25:49.493651 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:49Z","lastTransitionTime":"2025-10-13T17:25:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:49 crc kubenswrapper[4720]: I1013 17:25:49.596293 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:49 crc kubenswrapper[4720]: I1013 17:25:49.596427 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:49 crc kubenswrapper[4720]: I1013 17:25:49.596445 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:49 crc kubenswrapper[4720]: I1013 17:25:49.596469 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:49 crc kubenswrapper[4720]: I1013 17:25:49.596485 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:49Z","lastTransitionTime":"2025-10-13T17:25:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:49 crc kubenswrapper[4720]: I1013 17:25:49.699514 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:49 crc kubenswrapper[4720]: I1013 17:25:49.699574 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:49 crc kubenswrapper[4720]: I1013 17:25:49.699593 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:49 crc kubenswrapper[4720]: I1013 17:25:49.699617 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:49 crc kubenswrapper[4720]: I1013 17:25:49.699634 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:49Z","lastTransitionTime":"2025-10-13T17:25:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:49 crc kubenswrapper[4720]: I1013 17:25:49.802682 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:49 crc kubenswrapper[4720]: I1013 17:25:49.802729 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:49 crc kubenswrapper[4720]: I1013 17:25:49.802741 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:49 crc kubenswrapper[4720]: I1013 17:25:49.802758 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:49 crc kubenswrapper[4720]: I1013 17:25:49.802770 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:49Z","lastTransitionTime":"2025-10-13T17:25:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:49 crc kubenswrapper[4720]: I1013 17:25:49.905484 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:49 crc kubenswrapper[4720]: I1013 17:25:49.905523 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:49 crc kubenswrapper[4720]: I1013 17:25:49.905534 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:49 crc kubenswrapper[4720]: I1013 17:25:49.905549 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:49 crc kubenswrapper[4720]: I1013 17:25:49.905562 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:49Z","lastTransitionTime":"2025-10-13T17:25:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:50 crc kubenswrapper[4720]: I1013 17:25:50.008803 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:50 crc kubenswrapper[4720]: I1013 17:25:50.008867 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:50 crc kubenswrapper[4720]: I1013 17:25:50.008883 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:50 crc kubenswrapper[4720]: I1013 17:25:50.008907 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:50 crc kubenswrapper[4720]: I1013 17:25:50.008926 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:50Z","lastTransitionTime":"2025-10-13T17:25:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:50 crc kubenswrapper[4720]: I1013 17:25:50.112284 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:50 crc kubenswrapper[4720]: I1013 17:25:50.112346 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:50 crc kubenswrapper[4720]: I1013 17:25:50.112367 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:50 crc kubenswrapper[4720]: I1013 17:25:50.112398 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:50 crc kubenswrapper[4720]: I1013 17:25:50.112419 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:50Z","lastTransitionTime":"2025-10-13T17:25:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:50 crc kubenswrapper[4720]: I1013 17:25:50.203906 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c6ntg" Oct 13 17:25:50 crc kubenswrapper[4720]: I1013 17:25:50.203966 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 17:25:50 crc kubenswrapper[4720]: E1013 17:25:50.204108 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-c6ntg" podUID="c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61" Oct 13 17:25:50 crc kubenswrapper[4720]: E1013 17:25:50.204302 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 13 17:25:50 crc kubenswrapper[4720]: I1013 17:25:50.215653 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:50 crc kubenswrapper[4720]: I1013 17:25:50.215703 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:50 crc kubenswrapper[4720]: I1013 17:25:50.215714 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:50 crc kubenswrapper[4720]: I1013 17:25:50.215731 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:50 crc kubenswrapper[4720]: I1013 17:25:50.215745 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:50Z","lastTransitionTime":"2025-10-13T17:25:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:50 crc kubenswrapper[4720]: I1013 17:25:50.318519 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:50 crc kubenswrapper[4720]: I1013 17:25:50.318559 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:50 crc kubenswrapper[4720]: I1013 17:25:50.318568 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:50 crc kubenswrapper[4720]: I1013 17:25:50.318581 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:50 crc kubenswrapper[4720]: I1013 17:25:50.318592 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:50Z","lastTransitionTime":"2025-10-13T17:25:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:50 crc kubenswrapper[4720]: I1013 17:25:50.421404 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:50 crc kubenswrapper[4720]: I1013 17:25:50.421464 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:50 crc kubenswrapper[4720]: I1013 17:25:50.421481 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:50 crc kubenswrapper[4720]: I1013 17:25:50.421503 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:50 crc kubenswrapper[4720]: I1013 17:25:50.421522 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:50Z","lastTransitionTime":"2025-10-13T17:25:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:50 crc kubenswrapper[4720]: I1013 17:25:50.523582 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:50 crc kubenswrapper[4720]: I1013 17:25:50.523645 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:50 crc kubenswrapper[4720]: I1013 17:25:50.523661 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:50 crc kubenswrapper[4720]: I1013 17:25:50.523683 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:50 crc kubenswrapper[4720]: I1013 17:25:50.523703 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:50Z","lastTransitionTime":"2025-10-13T17:25:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:50 crc kubenswrapper[4720]: I1013 17:25:50.625915 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:50 crc kubenswrapper[4720]: I1013 17:25:50.625951 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:50 crc kubenswrapper[4720]: I1013 17:25:50.625959 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:50 crc kubenswrapper[4720]: I1013 17:25:50.625973 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:50 crc kubenswrapper[4720]: I1013 17:25:50.625982 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:50Z","lastTransitionTime":"2025-10-13T17:25:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:50 crc kubenswrapper[4720]: I1013 17:25:50.729060 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:50 crc kubenswrapper[4720]: I1013 17:25:50.729116 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:50 crc kubenswrapper[4720]: I1013 17:25:50.729127 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:50 crc kubenswrapper[4720]: I1013 17:25:50.729143 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:50 crc kubenswrapper[4720]: I1013 17:25:50.729156 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:50Z","lastTransitionTime":"2025-10-13T17:25:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:50 crc kubenswrapper[4720]: I1013 17:25:50.832913 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:50 crc kubenswrapper[4720]: I1013 17:25:50.832992 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:50 crc kubenswrapper[4720]: I1013 17:25:50.833010 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:50 crc kubenswrapper[4720]: I1013 17:25:50.833036 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:50 crc kubenswrapper[4720]: I1013 17:25:50.833053 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:50Z","lastTransitionTime":"2025-10-13T17:25:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:50 crc kubenswrapper[4720]: I1013 17:25:50.940912 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:50 crc kubenswrapper[4720]: I1013 17:25:50.940967 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:50 crc kubenswrapper[4720]: I1013 17:25:50.940978 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:50 crc kubenswrapper[4720]: I1013 17:25:50.940994 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:50 crc kubenswrapper[4720]: I1013 17:25:50.941006 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:50Z","lastTransitionTime":"2025-10-13T17:25:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:51 crc kubenswrapper[4720]: I1013 17:25:51.042961 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:51 crc kubenswrapper[4720]: I1013 17:25:51.043022 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:51 crc kubenswrapper[4720]: I1013 17:25:51.043041 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:51 crc kubenswrapper[4720]: I1013 17:25:51.043065 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:51 crc kubenswrapper[4720]: I1013 17:25:51.043082 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:51Z","lastTransitionTime":"2025-10-13T17:25:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:51 crc kubenswrapper[4720]: I1013 17:25:51.145670 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:51 crc kubenswrapper[4720]: I1013 17:25:51.145730 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:51 crc kubenswrapper[4720]: I1013 17:25:51.145747 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:51 crc kubenswrapper[4720]: I1013 17:25:51.145767 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:51 crc kubenswrapper[4720]: I1013 17:25:51.145779 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:51Z","lastTransitionTime":"2025-10-13T17:25:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:51 crc kubenswrapper[4720]: I1013 17:25:51.167432 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 17:25:51 crc kubenswrapper[4720]: I1013 17:25:51.167509 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 17:25:51 crc kubenswrapper[4720]: E1013 17:25:51.167629 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 13 17:25:51 crc kubenswrapper[4720]: E1013 17:25:51.167718 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 13 17:25:51 crc kubenswrapper[4720]: I1013 17:25:51.248680 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:51 crc kubenswrapper[4720]: I1013 17:25:51.248723 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:51 crc kubenswrapper[4720]: I1013 17:25:51.248737 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:51 crc kubenswrapper[4720]: I1013 17:25:51.248758 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:51 crc kubenswrapper[4720]: I1013 17:25:51.248773 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:51Z","lastTransitionTime":"2025-10-13T17:25:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:51 crc kubenswrapper[4720]: I1013 17:25:51.352017 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:51 crc kubenswrapper[4720]: I1013 17:25:51.352094 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:51 crc kubenswrapper[4720]: I1013 17:25:51.352129 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:51 crc kubenswrapper[4720]: I1013 17:25:51.352161 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:51 crc kubenswrapper[4720]: I1013 17:25:51.352182 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:51Z","lastTransitionTime":"2025-10-13T17:25:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Oct 13 17:25:51 crc kubenswrapper[4720]: I1013 17:25:51.455311 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 17:25:51 crc kubenswrapper[4720]: I1013 17:25:51.455367 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 17:25:51 crc kubenswrapper[4720]: I1013 17:25:51.455383 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 17:25:51 crc kubenswrapper[4720]: I1013 17:25:51.455405 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 17:25:51 crc kubenswrapper[4720]: I1013 17:25:51.455421 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:51Z","lastTransitionTime":"2025-10-13T17:25:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 17:25:51 crc kubenswrapper[4720]: I1013 17:25:51.558073 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 17:25:51 crc kubenswrapper[4720]: I1013 17:25:51.558117 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 17:25:51 crc kubenswrapper[4720]: I1013 17:25:51.558171 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 17:25:51 crc kubenswrapper[4720]: I1013 17:25:51.558211 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 17:25:51 crc kubenswrapper[4720]: I1013 17:25:51.558221 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:51Z","lastTransitionTime":"2025-10-13T17:25:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 17:25:51 crc kubenswrapper[4720]: I1013 17:25:51.660812 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 17:25:51 crc kubenswrapper[4720]: I1013 17:25:51.660876 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 17:25:51 crc kubenswrapper[4720]: I1013 17:25:51.660891 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 17:25:51 crc kubenswrapper[4720]: I1013 17:25:51.660912 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 17:25:51 crc kubenswrapper[4720]: I1013 17:25:51.660948 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:51Z","lastTransitionTime":"2025-10-13T17:25:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 17:25:51 crc kubenswrapper[4720]: I1013 17:25:51.764227 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 17:25:51 crc kubenswrapper[4720]: I1013 17:25:51.764289 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 17:25:51 crc kubenswrapper[4720]: I1013 17:25:51.764313 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 17:25:51 crc kubenswrapper[4720]: I1013 17:25:51.764340 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 17:25:51 crc kubenswrapper[4720]: I1013 17:25:51.764360 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:51Z","lastTransitionTime":"2025-10-13T17:25:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 17:25:51 crc kubenswrapper[4720]: I1013 17:25:51.867445 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 17:25:51 crc kubenswrapper[4720]: I1013 17:25:51.867528 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 17:25:51 crc kubenswrapper[4720]: I1013 17:25:51.867546 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 17:25:51 crc kubenswrapper[4720]: I1013 17:25:51.867569 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 17:25:51 crc kubenswrapper[4720]: I1013 17:25:51.867619 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:51Z","lastTransitionTime":"2025-10-13T17:25:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 17:25:51 crc kubenswrapper[4720]: I1013 17:25:51.970998 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 17:25:51 crc kubenswrapper[4720]: I1013 17:25:51.971062 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 17:25:51 crc kubenswrapper[4720]: I1013 17:25:51.971083 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 17:25:51 crc kubenswrapper[4720]: I1013 17:25:51.971110 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 17:25:51 crc kubenswrapper[4720]: I1013 17:25:51.971130 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:51Z","lastTransitionTime":"2025-10-13T17:25:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 17:25:52 crc kubenswrapper[4720]: I1013 17:25:52.074923 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 17:25:52 crc kubenswrapper[4720]: I1013 17:25:52.074982 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 17:25:52 crc kubenswrapper[4720]: I1013 17:25:52.075003 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 17:25:52 crc kubenswrapper[4720]: I1013 17:25:52.075029 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 17:25:52 crc kubenswrapper[4720]: I1013 17:25:52.075112 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:52Z","lastTransitionTime":"2025-10-13T17:25:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 17:25:52 crc kubenswrapper[4720]: I1013 17:25:52.167473 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c6ntg"
Oct 13 17:25:52 crc kubenswrapper[4720]: I1013 17:25:52.167499 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 13 17:25:52 crc kubenswrapper[4720]: E1013 17:25:52.167661 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c6ntg" podUID="c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61"
Oct 13 17:25:52 crc kubenswrapper[4720]: E1013 17:25:52.167760 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 13 17:25:52 crc kubenswrapper[4720]: I1013 17:25:52.176869 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 17:25:52 crc kubenswrapper[4720]: I1013 17:25:52.176916 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 17:25:52 crc kubenswrapper[4720]: I1013 17:25:52.176925 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 17:25:52 crc kubenswrapper[4720]: I1013 17:25:52.176944 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 17:25:52 crc kubenswrapper[4720]: I1013 17:25:52.176957 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:52Z","lastTransitionTime":"2025-10-13T17:25:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 17:25:52 crc kubenswrapper[4720]: I1013 17:25:52.279908 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 17:25:52 crc kubenswrapper[4720]: I1013 17:25:52.279997 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 17:25:52 crc kubenswrapper[4720]: I1013 17:25:52.280014 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 17:25:52 crc kubenswrapper[4720]: I1013 17:25:52.280037 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 17:25:52 crc kubenswrapper[4720]: I1013 17:25:52.280084 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:52Z","lastTransitionTime":"2025-10-13T17:25:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 17:25:52 crc kubenswrapper[4720]: I1013 17:25:52.381945 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 17:25:52 crc kubenswrapper[4720]: I1013 17:25:52.381981 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 17:25:52 crc kubenswrapper[4720]: I1013 17:25:52.381991 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 17:25:52 crc kubenswrapper[4720]: I1013 17:25:52.382020 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 17:25:52 crc kubenswrapper[4720]: I1013 17:25:52.382031 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:52Z","lastTransitionTime":"2025-10-13T17:25:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 17:25:52 crc kubenswrapper[4720]: I1013 17:25:52.484876 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 17:25:52 crc kubenswrapper[4720]: I1013 17:25:52.484931 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 17:25:52 crc kubenswrapper[4720]: I1013 17:25:52.484942 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 17:25:52 crc kubenswrapper[4720]: I1013 17:25:52.484957 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 17:25:52 crc kubenswrapper[4720]: I1013 17:25:52.484966 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:52Z","lastTransitionTime":"2025-10-13T17:25:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 17:25:52 crc kubenswrapper[4720]: I1013 17:25:52.587369 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 17:25:52 crc kubenswrapper[4720]: I1013 17:25:52.587412 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 17:25:52 crc kubenswrapper[4720]: I1013 17:25:52.587425 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 17:25:52 crc kubenswrapper[4720]: I1013 17:25:52.587439 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 17:25:52 crc kubenswrapper[4720]: I1013 17:25:52.587449 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:52Z","lastTransitionTime":"2025-10-13T17:25:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 17:25:52 crc kubenswrapper[4720]: I1013 17:25:52.690266 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 17:25:52 crc kubenswrapper[4720]: I1013 17:25:52.690325 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 17:25:52 crc kubenswrapper[4720]: I1013 17:25:52.690342 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 17:25:52 crc kubenswrapper[4720]: I1013 17:25:52.690367 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 17:25:52 crc kubenswrapper[4720]: I1013 17:25:52.690383 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:52Z","lastTransitionTime":"2025-10-13T17:25:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 17:25:52 crc kubenswrapper[4720]: I1013 17:25:52.792720 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 17:25:52 crc kubenswrapper[4720]: I1013 17:25:52.792752 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 17:25:52 crc kubenswrapper[4720]: I1013 17:25:52.792761 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 17:25:52 crc kubenswrapper[4720]: I1013 17:25:52.792773 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 17:25:52 crc kubenswrapper[4720]: I1013 17:25:52.792782 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:52Z","lastTransitionTime":"2025-10-13T17:25:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 17:25:52 crc kubenswrapper[4720]: I1013 17:25:52.896391 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 17:25:52 crc kubenswrapper[4720]: I1013 17:25:52.896466 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 17:25:52 crc kubenswrapper[4720]: I1013 17:25:52.896483 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 17:25:52 crc kubenswrapper[4720]: I1013 17:25:52.896511 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 17:25:52 crc kubenswrapper[4720]: I1013 17:25:52.896527 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:52Z","lastTransitionTime":"2025-10-13T17:25:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Oct 13 17:25:52 crc kubenswrapper[4720]: I1013 17:25:52.919441 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:52 crc kubenswrapper[4720]: I1013 17:25:52.919491 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:52 crc kubenswrapper[4720]: I1013 17:25:52.919509 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:52 crc kubenswrapper[4720]: I1013 17:25:52.919531 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:52 crc kubenswrapper[4720]: I1013 17:25:52.919546 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:52Z","lastTransitionTime":"2025-10-13T17:25:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:52 crc kubenswrapper[4720]: E1013 17:25:52.936336 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a530de0f-daad-4050-9522-69c64a451e75\\\",\\\"systemUUID\\\":\\\"00bb7c43-79d6-45b5-bd02-4b71a0ba6837\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:52Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:52 crc kubenswrapper[4720]: I1013 17:25:52.941437 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:52 crc kubenswrapper[4720]: I1013 17:25:52.941491 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 13 17:25:52 crc kubenswrapper[4720]: I1013 17:25:52.941507 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:52 crc kubenswrapper[4720]: I1013 17:25:52.941529 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:52 crc kubenswrapper[4720]: I1013 17:25:52.941546 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:52Z","lastTransitionTime":"2025-10-13T17:25:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:52 crc kubenswrapper[4720]: E1013 17:25:52.961512 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a530de0f-daad-4050-9522-69c64a451e75\\\",\\\"systemUUID\\\":\\\"00bb7c43-79d6-45b5-bd02-4b71a0ba6837\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:52Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:52 crc kubenswrapper[4720]: I1013 17:25:52.965956 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:52 crc kubenswrapper[4720]: I1013 17:25:52.965988 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
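Every retry fails the same way: the kubelet's node status PATCH is intercepted by the node.network-node-identity.openshift.io webhook on 127.0.0.1:9743, whose serving certificate expired on 2025-08-24T17:21:41Z, roughly 50 days before the clock time in these entries. A small sketch, assuming it runs on the node itself, that confirms the gap from the two logged timestamps and pulls the offending certificate for inspection (the endpoint is the one in the failed Post URL; piping the PEM through `openssl x509 -noout -dates` would show the validity window):

    import ssl
    from datetime import datetime, timezone

    # Both timestamps exactly as they appear in the webhook error above.
    now = datetime(2025, 10, 13, 17, 25, 52, tzinfo=timezone.utc)        # "current time"
    not_after = datetime(2025, 8, 24, 17, 21, 41, tzinfo=timezone.utc)   # "is after"
    print("certificate expired", now - not_after, "ago")                 # 50 days, 0:04:11

    # Fetch the webhook's serving certificate without verifying it, since
    # verification is exactly what is failing. This may itself fail if the
    # endpoint requires a client certificate.
    pem = ssl.get_server_certificate(("127.0.0.1", 9743))
    print(pem)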
event="NodeHasNoDiskPressure" Oct 13 17:25:52 crc kubenswrapper[4720]: I1013 17:25:52.965999 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:52 crc kubenswrapper[4720]: I1013 17:25:52.966014 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:52 crc kubenswrapper[4720]: I1013 17:25:52.966026 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:52Z","lastTransitionTime":"2025-10-13T17:25:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:52 crc kubenswrapper[4720]: E1013 17:25:52.978594 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a530de0f-daad-4050-9522-69c64a451e75\\\",\\\"systemUUID\\\":\\\"00bb7c43-79d6-45b5-bd02-4b71a0ba6837\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:52Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:52 crc kubenswrapper[4720]: I1013 17:25:52.982421 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:52 crc kubenswrapper[4720]: I1013 17:25:52.982473 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 13 17:25:52 crc kubenswrapper[4720]: I1013 17:25:52.982493 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:52 crc kubenswrapper[4720]: I1013 17:25:52.982524 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:52 crc kubenswrapper[4720]: I1013 17:25:52.982547 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:52Z","lastTransitionTime":"2025-10-13T17:25:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:52 crc kubenswrapper[4720]: E1013 17:25:52.997447 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[elided: byte-for-byte duplicate of the images array in the patch attempt above],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a530de0f-daad-4050-9522-69c64a451e75\\\",\\\"systemUUID\\\":\\\"00bb7c43-79d6-45b5-bd02-4b71a0ba6837\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:52Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:53 crc kubenswrapper[4720]: I1013 17:25:53.005509 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:53 crc kubenswrapper[4720]: I1013 17:25:53.005583 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasNoDiskPressure" Oct 13 17:25:53 crc kubenswrapper[4720]: I1013 17:25:53.005603 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:53 crc kubenswrapper[4720]: I1013 17:25:53.005630 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:53 crc kubenswrapper[4720]: I1013 17:25:53.005656 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:53Z","lastTransitionTime":"2025-10-13T17:25:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:53 crc kubenswrapper[4720]: E1013 17:25:53.022459 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-13T17:25:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[elided: byte-for-byte duplicate of the images array in the first patch attempt above],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a530de0f-daad-4050-9522-69c64a451e75\\\",\\\"systemUUID\\\":\\\"00bb7c43-79d6-45b5-bd02-4b71a0ba6837\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:53Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:53 crc kubenswrapper[4720]: E1013 17:25:53.022626 4720 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
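[Editor's note: all three patch attempts in this retry loop fail the same way: the node.network-node-identity webhook at https://127.0.0.1:9743 presents a serving certificate that expired on 2025-08-24T17:21:41Z, roughly seven weeks before the log's current time of 2025-10-13. A minimal sketch for confirming the webhook certificate's validity window from the node follows; the host and port come from the log lines above, and the third-party cryptography package is an assumption, not something the log shows installed:

    import ssl
    from datetime import datetime, timezone
    from cryptography import x509  # assumption: pip install cryptography

    # Fetch the serving certificate without verifying it; verification is
    # exactly what fails in the log records above.
    pem = ssl.get_server_certificate(("127.0.0.1", 9743))
    cert = x509.load_pem_x509_certificate(pem.encode())
    print("notBefore:", cert.not_valid_before_utc)  # needs cryptography >= 42
    print("notAfter: ", cert.not_valid_after_utc)
    print("expired:  ", cert.not_valid_after_utc < datetime.now(timezone.utc))

]
Oct 13 17:25:53 crc kubenswrapper[4720]: I1013 17:25:53.024212 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc"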
event="NodeHasSufficientMemory" Oct 13 17:25:53 crc kubenswrapper[4720]: I1013 17:25:53.024246 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:53 crc kubenswrapper[4720]: I1013 17:25:53.024256 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:53 crc kubenswrapper[4720]: I1013 17:25:53.024273 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:53 crc kubenswrapper[4720]: I1013 17:25:53.024284 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:53Z","lastTransitionTime":"2025-10-13T17:25:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:53 crc kubenswrapper[4720]: I1013 17:25:53.127118 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:53 crc kubenswrapper[4720]: I1013 17:25:53.127178 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:53 crc kubenswrapper[4720]: I1013 17:25:53.127219 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:53 crc kubenswrapper[4720]: I1013 17:25:53.127241 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:53 crc kubenswrapper[4720]: I1013 17:25:53.127260 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:53Z","lastTransitionTime":"2025-10-13T17:25:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:53 crc kubenswrapper[4720]: I1013 17:25:53.167456 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 17:25:53 crc kubenswrapper[4720]: E1013 17:25:53.167599 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 13 17:25:53 crc kubenswrapper[4720]: I1013 17:25:53.167657 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 17:25:53 crc kubenswrapper[4720]: E1013 17:25:53.167908 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 13 17:25:53 crc kubenswrapper[4720]: I1013 17:25:53.229570 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:53 crc kubenswrapper[4720]: I1013 17:25:53.229618 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:53 crc kubenswrapper[4720]: I1013 17:25:53.229634 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:53 crc kubenswrapper[4720]: I1013 17:25:53.229656 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:53 crc kubenswrapper[4720]: I1013 17:25:53.229673 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:53Z","lastTransitionTime":"2025-10-13T17:25:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:53 crc kubenswrapper[4720]: I1013 17:25:53.332674 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:53 crc kubenswrapper[4720]: I1013 17:25:53.332847 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:53 crc kubenswrapper[4720]: I1013 17:25:53.332880 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:53 crc kubenswrapper[4720]: I1013 17:25:53.332909 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:53 crc kubenswrapper[4720]: I1013 17:25:53.332930 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:53Z","lastTransitionTime":"2025-10-13T17:25:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:53 crc kubenswrapper[4720]: I1013 17:25:53.434783 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:53 crc kubenswrapper[4720]: I1013 17:25:53.434824 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:53 crc kubenswrapper[4720]: I1013 17:25:53.434833 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:53 crc kubenswrapper[4720]: I1013 17:25:53.434846 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:53 crc kubenswrapper[4720]: I1013 17:25:53.434856 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:53Z","lastTransitionTime":"2025-10-13T17:25:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:53 crc kubenswrapper[4720]: I1013 17:25:53.538268 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:53 crc kubenswrapper[4720]: I1013 17:25:53.538321 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:53 crc kubenswrapper[4720]: I1013 17:25:53.538338 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:53 crc kubenswrapper[4720]: I1013 17:25:53.538360 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:53 crc kubenswrapper[4720]: I1013 17:25:53.538379 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:53Z","lastTransitionTime":"2025-10-13T17:25:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:53 crc kubenswrapper[4720]: I1013 17:25:53.640247 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:53 crc kubenswrapper[4720]: I1013 17:25:53.640320 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:53 crc kubenswrapper[4720]: I1013 17:25:53.640330 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:53 crc kubenswrapper[4720]: I1013 17:25:53.640345 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:53 crc kubenswrapper[4720]: I1013 17:25:53.640354 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:53Z","lastTransitionTime":"2025-10-13T17:25:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:53 crc kubenswrapper[4720]: I1013 17:25:53.742710 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:53 crc kubenswrapper[4720]: I1013 17:25:53.742770 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:53 crc kubenswrapper[4720]: I1013 17:25:53.742791 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:53 crc kubenswrapper[4720]: I1013 17:25:53.742824 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:53 crc kubenswrapper[4720]: I1013 17:25:53.742849 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:53Z","lastTransitionTime":"2025-10-13T17:25:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:53 crc kubenswrapper[4720]: I1013 17:25:53.845949 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:53 crc kubenswrapper[4720]: I1013 17:25:53.846008 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:53 crc kubenswrapper[4720]: I1013 17:25:53.846047 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:53 crc kubenswrapper[4720]: I1013 17:25:53.846072 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:53 crc kubenswrapper[4720]: I1013 17:25:53.846089 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:53Z","lastTransitionTime":"2025-10-13T17:25:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:53 crc kubenswrapper[4720]: I1013 17:25:53.948330 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:53 crc kubenswrapper[4720]: I1013 17:25:53.948381 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:53 crc kubenswrapper[4720]: I1013 17:25:53.948397 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:53 crc kubenswrapper[4720]: I1013 17:25:53.948419 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:53 crc kubenswrapper[4720]: I1013 17:25:53.948435 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:53Z","lastTransitionTime":"2025-10-13T17:25:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:54 crc kubenswrapper[4720]: I1013 17:25:54.051148 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:54 crc kubenswrapper[4720]: I1013 17:25:54.051215 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:54 crc kubenswrapper[4720]: I1013 17:25:54.051228 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:54 crc kubenswrapper[4720]: I1013 17:25:54.051243 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:54 crc kubenswrapper[4720]: I1013 17:25:54.051253 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:54Z","lastTransitionTime":"2025-10-13T17:25:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:54 crc kubenswrapper[4720]: I1013 17:25:54.153786 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:54 crc kubenswrapper[4720]: I1013 17:25:54.153827 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:54 crc kubenswrapper[4720]: I1013 17:25:54.153836 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:54 crc kubenswrapper[4720]: I1013 17:25:54.153849 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:54 crc kubenswrapper[4720]: I1013 17:25:54.153858 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:54Z","lastTransitionTime":"2025-10-13T17:25:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:54 crc kubenswrapper[4720]: I1013 17:25:54.167490 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 17:25:54 crc kubenswrapper[4720]: I1013 17:25:54.167499 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c6ntg" Oct 13 17:25:54 crc kubenswrapper[4720]: E1013 17:25:54.167596 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 13 17:25:54 crc kubenswrapper[4720]: E1013 17:25:54.167729 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c6ntg" podUID="c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61" Oct 13 17:25:54 crc kubenswrapper[4720]: I1013 17:25:54.255577 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:54 crc kubenswrapper[4720]: I1013 17:25:54.255614 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:54 crc kubenswrapper[4720]: I1013 17:25:54.255624 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:54 crc kubenswrapper[4720]: I1013 17:25:54.255639 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:54 crc kubenswrapper[4720]: I1013 17:25:54.255648 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:54Z","lastTransitionTime":"2025-10-13T17:25:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:54 crc kubenswrapper[4720]: I1013 17:25:54.357913 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:54 crc kubenswrapper[4720]: I1013 17:25:54.357946 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:54 crc kubenswrapper[4720]: I1013 17:25:54.357957 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:54 crc kubenswrapper[4720]: I1013 17:25:54.357971 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:54 crc kubenswrapper[4720]: I1013 17:25:54.357982 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:54Z","lastTransitionTime":"2025-10-13T17:25:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:54 crc kubenswrapper[4720]: I1013 17:25:54.460376 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:54 crc kubenswrapper[4720]: I1013 17:25:54.460419 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:54 crc kubenswrapper[4720]: I1013 17:25:54.460428 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:54 crc kubenswrapper[4720]: I1013 17:25:54.460442 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:54 crc kubenswrapper[4720]: I1013 17:25:54.460451 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:54Z","lastTransitionTime":"2025-10-13T17:25:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:54 crc kubenswrapper[4720]: I1013 17:25:54.563116 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:54 crc kubenswrapper[4720]: I1013 17:25:54.563150 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:54 crc kubenswrapper[4720]: I1013 17:25:54.563158 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:54 crc kubenswrapper[4720]: I1013 17:25:54.563172 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:54 crc kubenswrapper[4720]: I1013 17:25:54.563182 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:54Z","lastTransitionTime":"2025-10-13T17:25:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:54 crc kubenswrapper[4720]: I1013 17:25:54.664890 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:54 crc kubenswrapper[4720]: I1013 17:25:54.664955 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:54 crc kubenswrapper[4720]: I1013 17:25:54.664980 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:54 crc kubenswrapper[4720]: I1013 17:25:54.665010 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:54 crc kubenswrapper[4720]: I1013 17:25:54.665032 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:54Z","lastTransitionTime":"2025-10-13T17:25:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:54 crc kubenswrapper[4720]: I1013 17:25:54.766916 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:54 crc kubenswrapper[4720]: I1013 17:25:54.766963 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:54 crc kubenswrapper[4720]: I1013 17:25:54.766972 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:54 crc kubenswrapper[4720]: I1013 17:25:54.766988 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:54 crc kubenswrapper[4720]: I1013 17:25:54.767001 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:54Z","lastTransitionTime":"2025-10-13T17:25:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:54 crc kubenswrapper[4720]: I1013 17:25:54.869216 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:54 crc kubenswrapper[4720]: I1013 17:25:54.869262 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:54 crc kubenswrapper[4720]: I1013 17:25:54.869272 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:54 crc kubenswrapper[4720]: I1013 17:25:54.869287 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:54 crc kubenswrapper[4720]: I1013 17:25:54.869296 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:54Z","lastTransitionTime":"2025-10-13T17:25:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:54 crc kubenswrapper[4720]: I1013 17:25:54.972405 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:54 crc kubenswrapper[4720]: I1013 17:25:54.972449 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:54 crc kubenswrapper[4720]: I1013 17:25:54.972460 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:54 crc kubenswrapper[4720]: I1013 17:25:54.972477 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:54 crc kubenswrapper[4720]: I1013 17:25:54.972486 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:54Z","lastTransitionTime":"2025-10-13T17:25:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:55 crc kubenswrapper[4720]: I1013 17:25:55.074990 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:55 crc kubenswrapper[4720]: I1013 17:25:55.075031 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:55 crc kubenswrapper[4720]: I1013 17:25:55.075042 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:55 crc kubenswrapper[4720]: I1013 17:25:55.075056 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:55 crc kubenswrapper[4720]: I1013 17:25:55.075065 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:55Z","lastTransitionTime":"2025-10-13T17:25:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:55 crc kubenswrapper[4720]: I1013 17:25:55.167489 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 17:25:55 crc kubenswrapper[4720]: I1013 17:25:55.167506 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 17:25:55 crc kubenswrapper[4720]: E1013 17:25:55.167925 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 13 17:25:55 crc kubenswrapper[4720]: E1013 17:25:55.168121 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 13 17:25:55 crc kubenswrapper[4720]: I1013 17:25:55.168138 4720 scope.go:117] "RemoveContainer" containerID="c62dd5ae9eb692f394556f2d0b70bf14a7c99fe714ad7084a5d08b324a4435de" Oct 13 17:25:55 crc kubenswrapper[4720]: E1013 17:25:55.168317 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-pn6lz_openshift-ovn-kubernetes(8064812e-b6aa-4f56-81c9-16154c00abad)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" podUID="8064812e-b6aa-4f56-81c9-16154c00abad" Oct 13 17:25:55 crc kubenswrapper[4720]: I1013 17:25:55.177286 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:55 crc kubenswrapper[4720]: I1013 17:25:55.177322 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:55 crc kubenswrapper[4720]: I1013 17:25:55.177331 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:55 crc kubenswrapper[4720]: I1013 17:25:55.177343 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:55 crc kubenswrapper[4720]: I1013 17:25:55.177353 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:55Z","lastTransitionTime":"2025-10-13T17:25:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:55 crc kubenswrapper[4720]: I1013 17:25:55.182622 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dnjrc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e6efca3-dcc8-488e-8fb1-e14ee0396158\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0f539bcc67139c5b3d51ab01a3114afc20fc44cf5e286dce2b4cad0ea0629c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g676g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c792a043326901c5cbcae8c96b168aee71b56f6e91d0edfc7f58abb5da9c9c28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g676g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dnjrc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:55Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:55 crc kubenswrapper[4720]: I1013 17:25:55.204918 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a1f1696-ed7f-4482-a300-ca25b68e09e3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2507a30017fbd9947ca456b84812f0b26baabf572768870f5a34b3c7699443cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc60672e282de4d3d6a2d8fd64306ec0fc0d032ade655f88afb0c0b9e0a8e589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9cf903094dae4e559bcef3e269240670b5d4693344d600779c394ce1e039b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06eea6556daedf913d87b8b0527b337070528b07cc026de878b4df332598a2d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b452d113d48729b6d4ab59f80138b15e7cb16cc16eef4ec9916901a067986d07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94a4c50dd473ea159477a3032070031c70d5a9217c054f826e3c0af6ae4df563\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94a4c50dd473ea159477a3032070031c70d5a9217c054f826e3c0af6ae4df563\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41221277cc046beb4578e4f6a0c7030df64342da1be667a920080a429836c657\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41221277cc046beb4578e4f6a0c7030df64342da1be667a920080a429836c657\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://7d443225a5db6a200176bd16ba94dd92c1e81eef85d61cff623cfe7959077ef6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7d443225a5db6a200176bd16ba94dd92c1e81eef85d61cff623cfe7959077ef6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:55Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:55 crc kubenswrapper[4720]: I1013 17:25:55.219081 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00fc1050-d713-484e-899e-9bd4e5d7b250\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://591ebf9a13f38e2f458aa34584be8fabc9115335a04af437a2291e7980a903ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18ddea8ad4714addb2f8431d0802d32a36f3d823bb123d4546bc6de1a3a017a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18ddea8ad4714addb2f8431d0802d32a36f3d823bb123d4546bc6de1a3a017a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:55Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:55 crc kubenswrapper[4720]: I1013 17:25:55.234581 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:55Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:55 crc kubenswrapper[4720]: I1013 17:25:55.250473 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:55Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:55 crc kubenswrapper[4720]: I1013 17:25:55.268420 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea323a1a-c607-40cf-b61c-b7ff03399c45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88b8a46e2e2c2342bd22a4a9b42cadd0188ec83f23815773f6ffb46be79fdf83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"container
ID\\\":\\\"cri-o://b7d875b0d7eb9f13bc697597a33f7196598ae679980b13842e92acf20477526f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e18d0e9df43d366a6e7088cc3d5bdc794882828f60f008eaf29e788e0141ac2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef2ba92cea782dd4fd31a511e1086777b58c00340014dff74ff6615b07bd10c5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7977e20d8558810fb9f8f827830d0378de7d15c71340d396a95b506902c0962\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-13T17:24:34Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1013 17:24:28.736714 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1013 17:24:28.738546 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-445610647/tls.crt::/tmp/serving-cert-445610647/tls.key\\\\\\\"\\\\nI1013 17:24:34.860768 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1013 17:24:34.864611 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1013 17:24:34.864632 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1013 17:24:34.864654 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1013 17:24:34.864660 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1013 17:24:34.871230 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1013 17:24:34.871248 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information 
is complete\\\\nW1013 17:24:34.871273 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 17:24:34.871290 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1013 17:24:34.871305 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1013 17:24:34.871314 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1013 17:24:34.871324 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1013 17:24:34.871335 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1013 17:24:34.872239 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd536f783e3f8f974f25772994c3d00a01c2ba66997a6e06236f1b83b890f8f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00ad2086cbe25df22e52552acfb410e252eb17a610267b47eb7cfc5dced1c9ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00ad2086cbe25df22e52552acfb410e252eb17a610267b47eb7cfc5dced1c9ff\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:55Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:55 crc kubenswrapper[4720]: I1013 
17:25:55.284951 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 13 17:25:55 crc kubenswrapper[4720]: I1013 17:25:55.285018 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 13 17:25:55 crc kubenswrapper[4720]: I1013 17:25:55.285040 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 13 17:25:55 crc kubenswrapper[4720]: I1013 17:25:55.285072 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 13 17:25:55 crc kubenswrapper[4720]: I1013 17:25:55.285093 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:55Z","lastTransitionTime":"2025-10-13T17:25:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 13 17:25:55 crc kubenswrapper[4720]: I1013 17:25:55.285949 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lxmjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b45ec2d-5bea-4007-a49f-224a866f93eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://835c6ec8dc7ae3785a23ae45e5c9dc4b3bcc24428ca4c3865a6dd790d5956e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c00868f87f1a3020a1db7e50377bbe90203d6e93799c8ec9f3100dd52a6c0cf6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-13T17:25:27Z\\\",\\\"message\\\":\\\"2025-10-13T17:24:42+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d76d90a0-c554-4f54-9b32-3c7f604179f6\\\\n2025-10-13T17:24:42+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d76d90a0-c554-4f54-9b32-3c7f604179f6 to /host/opt/cni/bin/\\\\n2025-10-13T17:24:42Z [verbose] multus-daemon started\\\\n2025-10-13T17:24:42Z [verbose] Readiness Indicator file check\\\\n2025-10-13T17:25:27Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf.
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:25:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6xsrv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lxmjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:55Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:55 crc kubenswrapper[4720]: I1013 17:25:55.313134 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8064812e-b6aa-4f56-81c9-16154c00abad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c28fde07efa17d20ddecd1c8a0da09a6fa09e98a90b14718565c00aef9c0d25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7bbd41d9b0518c8b060b1d01c335b6c9923e7196624fbd49326133faaa2e36ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd8a4dfb390a1643edb4eb4bd01970eaffd8dab0f748effac9e7c691ef4a3ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://013dfade47eccae4339fca0d379d914f63502613f87f1bc621113f7008634475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69d511a66a84fb0fa90f426bd4609c2b3ad915571802c6c024e0370d6e1683a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1aefa46ee989416f0e4fd05f6f12031da637a354fadd7f6caef75ddc2c74ddf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c62dd5ae9eb692f394556f2d0b70bf14a7c99fe714ad7084a5d08b324a4435de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c62dd5ae9eb692f394556f2d0b70bf14a7c99fe714ad7084a5d08b324a4435de\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-13T17:25:42Z\\\",\\\"message\\\":\\\"andler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1013 17:25:42.123785 6825 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1013 17:25:42.123794 6825 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1013 17:25:42.123798 6825 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1013 17:25:42.123803 6825 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1013 17:25:42.123823 6825 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1013 17:25:42.123830 6825 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1013 17:25:42.123842 6825 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1013 17:25:42.123855 6825 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1013 17:25:42.123862 6825 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1013 17:25:42.123869 6825 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1013 17:25:42.123884 6825 handler.go:208] Removed *v1.Node event handler 2\\\\nI1013 17:25:42.123897 6825 factory.go:656] Stopping watch factory\\\\nI1013 17:25:42.123912 6825 ovnkube.go:599] Stopped ovnkube\\\\nI1013 17:25:42.123944 6825 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1013 17:25:42.123954 6825 handler.go:208] Removed *v1.Node event handler 7\\\\nF1013 17:25:42.124279 6825 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-13T17:25:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pn6lz_openshift-ovn-kubernetes(8064812e-b6aa-4f56-81c9-16154c00abad)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34b1fffbebfbd3599bed22fe3fe80129f5b704560c54d09e9ab5e9ea8bba4513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rm5x2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pn6lz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:55Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:55 crc kubenswrapper[4720]: I1013 17:25:55.323678 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bvtrx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b35c333b-0d4e-4d9a-a8fe-a1c6f5fbbf75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03411ae3786161605ed0cb08a21661c7c7a4cc93f91320586547b8ffab3e90ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4g97k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:41Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bvtrx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:55Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:55 crc kubenswrapper[4720]: I1013 17:25:55.337991 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f85e4aab30e7f49ef05e48cc22fc22e0291e32604a8f74ae1cc2ea57b5889e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:55Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:55 crc kubenswrapper[4720]: I1013 17:25:55.358027 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f856022da70859465022db433585bd8cad12f8cd2aeae7943491d3f7e5049256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:55Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:55 crc kubenswrapper[4720]: I1013 17:25:55.370142 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pmjlm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f85220d6-f940-4538-806a-2b26bacd7b09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6315723f6cd375a392c8c427a737021b7e01415c902514b09ee43407749caa34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7795z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:39Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pmjlm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:55Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:55 crc kubenswrapper[4720]: I1013 17:25:55.383322 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7baf94d8-250c-4568-ab1c-a1429b8caf70\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://435ba16189d6d19892aaab158512c1a84a72103779e3cc4b2c634de344583c89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e566f3b66d404dd8da93bba1745b9e67cd63532330591f5b735f6826c07eae2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ec64662f56c2609ebac1383a78840ba4e0d9fb1fe10e666169285d3852e6b40\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://32d8f079c2674e0c6bea1837755db23bc3205da6a01ac1896d6293226d9c6fb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:55Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:55 crc kubenswrapper[4720]: I1013 17:25:55.390636 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:55 crc kubenswrapper[4720]: I1013 17:25:55.390715 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:55 crc kubenswrapper[4720]: I1013 17:25:55.390732 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:55 crc kubenswrapper[4720]: I1013 17:25:55.390754 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:55 crc kubenswrapper[4720]: I1013 17:25:55.390804 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:55Z","lastTransitionTime":"2025-10-13T17:25:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:55 crc kubenswrapper[4720]: I1013 17:25:55.401303 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fd8dd666-aa42-4598-bb52-c0cd9345384d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:25:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e96fe80c0a88f19c6c5705c7fd945c3282a104b7767db5c217b6364c045d649\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f9d9053346b85f12f6cf781f1699dbf8aea670b7e1cc5f4fdc1ffac2c969712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d586c8b48f5d2ca87e3a758dd265611f11678f6fa8dd1118a859923c331c4f67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19c59ec68cb6306af37b90cd4583f5a1a266e93f77566d756223eecf7cdadb76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19c59ec68cb6306af37b90cd4583f5a1a266e93f77566d756223eecf7cdadb76\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:16Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:15Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:55Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:55 crc kubenswrapper[4720]: I1013 17:25:55.415020 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:55Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:55 crc kubenswrapper[4720]: I1013 17:25:55.431228 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://705766b44d481300d0fd8cc654aa2dc9046733e5a71ad3a7ab789f85e737dcee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a69b8825692cbbb7cce0a52b1ccb7b137ce6c9f8a8d788fff95bb7228cb40e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:55Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:55 crc kubenswrapper[4720]: I1013 17:25:55.445853 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce442c80-fcde-4b79-b6f9-f8f25771dfd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10e7a946185d6502e37a1ab60ad1eca36cce991f230188d22b2d6e2ea554f920\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4cjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://864e330267c8cec8487e1dcf4a50bbb2ba37d81d80361f0873e6bf69de1ed721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4cjp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-htwnl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:55Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:55 crc kubenswrapper[4720]: I1013 17:25:55.460451 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5rgfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d7408e5-b529-4396-92b3-2fed275c3250\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://439dea1ed9724a215fa1aa1f4a4bd84a4bb998f2f86814f5f4eb46989242f11c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-13T17:24:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c62b3a707852ca08b9d89356832535447d4f46ec09ab4a94cb00a86b1c1c5b03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c62b3a707852ca08b9d89356832535447d4f46ec09ab4a94cb00a86b1c1c5b03\\
\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb704e899e1b5a0193271209940163a17a1dcb93a2fc5d447beb8a081799897b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb704e899e1b5a0193271209940163a17a1dcb93a2fc5d447beb8a081799897b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f42ff97f0a5fb40b52d5445a00eb8f70f8028610b76baf33117b425b821b5ec2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f42ff97f0a5fb40b52d5445a00eb8f70f8028610b76baf33117b425b821b5ec2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://471bb77042e72bcac4bb28dd26f6151a7154bea0b3ec92272e405e9b22bc0170\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://471bb77042e72bcac4bb28dd26f6151a7154bea0b3ec92272e405e9b22bc0170\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20cb8981843cc39e8722f560f7a7e75483c93a30edffde9835533f8dc5b50587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20cb8981843cc39e8722f560f7a7e75483c93a30edffde9835533f8dc5b50587\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79851ed8c84027d1bb9eccb69bed004ae1cdbe934245c66325eafe42b12ea808\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79851ed8c84027d1bb9eccb69bed004ae1cdbe934245c66325eafe42b12ea808\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-13T17:24:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-13T17:24:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9mrnw\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5rgfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:55Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:55 crc kubenswrapper[4720]: I1013 17:25:55.473296 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-c6ntg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-13T17:24:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxbvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mxbvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-13T17:24:54Z\\\"}}\" for pod 
\"openshift-multus\"/\"network-metrics-daemon-c6ntg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-13T17:25:55Z is after 2025-08-24T17:21:41Z" Oct 13 17:25:55 crc kubenswrapper[4720]: I1013 17:25:55.493232 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:55 crc kubenswrapper[4720]: I1013 17:25:55.493293 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:55 crc kubenswrapper[4720]: I1013 17:25:55.493310 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:55 crc kubenswrapper[4720]: I1013 17:25:55.493328 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:55 crc kubenswrapper[4720]: I1013 17:25:55.493340 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:55Z","lastTransitionTime":"2025-10-13T17:25:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:55 crc kubenswrapper[4720]: I1013 17:25:55.595595 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:55 crc kubenswrapper[4720]: I1013 17:25:55.595640 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:55 crc kubenswrapper[4720]: I1013 17:25:55.595656 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:55 crc kubenswrapper[4720]: I1013 17:25:55.595680 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:55 crc kubenswrapper[4720]: I1013 17:25:55.595709 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:55Z","lastTransitionTime":"2025-10-13T17:25:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:55 crc kubenswrapper[4720]: I1013 17:25:55.698120 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:55 crc kubenswrapper[4720]: I1013 17:25:55.698177 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:55 crc kubenswrapper[4720]: I1013 17:25:55.698229 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:55 crc kubenswrapper[4720]: I1013 17:25:55.698265 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:55 crc kubenswrapper[4720]: I1013 17:25:55.698284 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:55Z","lastTransitionTime":"2025-10-13T17:25:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:55 crc kubenswrapper[4720]: I1013 17:25:55.801371 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:55 crc kubenswrapper[4720]: I1013 17:25:55.801437 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:55 crc kubenswrapper[4720]: I1013 17:25:55.801454 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:55 crc kubenswrapper[4720]: I1013 17:25:55.801476 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:55 crc kubenswrapper[4720]: I1013 17:25:55.801492 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:55Z","lastTransitionTime":"2025-10-13T17:25:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:55 crc kubenswrapper[4720]: I1013 17:25:55.904083 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:55 crc kubenswrapper[4720]: I1013 17:25:55.904140 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:55 crc kubenswrapper[4720]: I1013 17:25:55.904156 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:55 crc kubenswrapper[4720]: I1013 17:25:55.904178 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:55 crc kubenswrapper[4720]: I1013 17:25:55.904249 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:55Z","lastTransitionTime":"2025-10-13T17:25:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:56 crc kubenswrapper[4720]: I1013 17:25:56.007258 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:56 crc kubenswrapper[4720]: I1013 17:25:56.007321 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:56 crc kubenswrapper[4720]: I1013 17:25:56.007347 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:56 crc kubenswrapper[4720]: I1013 17:25:56.007375 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:56 crc kubenswrapper[4720]: I1013 17:25:56.007397 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:56Z","lastTransitionTime":"2025-10-13T17:25:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:56 crc kubenswrapper[4720]: I1013 17:25:56.110547 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:56 crc kubenswrapper[4720]: I1013 17:25:56.110614 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:56 crc kubenswrapper[4720]: I1013 17:25:56.110637 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:56 crc kubenswrapper[4720]: I1013 17:25:56.110663 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:56 crc kubenswrapper[4720]: I1013 17:25:56.110684 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:56Z","lastTransitionTime":"2025-10-13T17:25:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:56 crc kubenswrapper[4720]: I1013 17:25:56.167455 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c6ntg" Oct 13 17:25:56 crc kubenswrapper[4720]: I1013 17:25:56.167481 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 17:25:56 crc kubenswrapper[4720]: E1013 17:25:56.167561 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-c6ntg" podUID="c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61" Oct 13 17:25:56 crc kubenswrapper[4720]: E1013 17:25:56.167692 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 13 17:25:56 crc kubenswrapper[4720]: I1013 17:25:56.213665 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:56 crc kubenswrapper[4720]: I1013 17:25:56.213707 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:56 crc kubenswrapper[4720]: I1013 17:25:56.213723 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:56 crc kubenswrapper[4720]: I1013 17:25:56.213744 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:56 crc kubenswrapper[4720]: I1013 17:25:56.213759 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:56Z","lastTransitionTime":"2025-10-13T17:25:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:56 crc kubenswrapper[4720]: I1013 17:25:56.316483 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:56 crc kubenswrapper[4720]: I1013 17:25:56.316518 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:56 crc kubenswrapper[4720]: I1013 17:25:56.316526 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:56 crc kubenswrapper[4720]: I1013 17:25:56.316538 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:56 crc kubenswrapper[4720]: I1013 17:25:56.316547 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:56Z","lastTransitionTime":"2025-10-13T17:25:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:56 crc kubenswrapper[4720]: I1013 17:25:56.419130 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:56 crc kubenswrapper[4720]: I1013 17:25:56.419161 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:56 crc kubenswrapper[4720]: I1013 17:25:56.419169 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:56 crc kubenswrapper[4720]: I1013 17:25:56.419181 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:56 crc kubenswrapper[4720]: I1013 17:25:56.419217 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:56Z","lastTransitionTime":"2025-10-13T17:25:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:56 crc kubenswrapper[4720]: I1013 17:25:56.522262 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:56 crc kubenswrapper[4720]: I1013 17:25:56.522314 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:56 crc kubenswrapper[4720]: I1013 17:25:56.522330 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:56 crc kubenswrapper[4720]: I1013 17:25:56.522352 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:56 crc kubenswrapper[4720]: I1013 17:25:56.522370 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:56Z","lastTransitionTime":"2025-10-13T17:25:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:56 crc kubenswrapper[4720]: I1013 17:25:56.625086 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:56 crc kubenswrapper[4720]: I1013 17:25:56.625144 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:56 crc kubenswrapper[4720]: I1013 17:25:56.625155 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:56 crc kubenswrapper[4720]: I1013 17:25:56.625173 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:56 crc kubenswrapper[4720]: I1013 17:25:56.625197 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:56Z","lastTransitionTime":"2025-10-13T17:25:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:56 crc kubenswrapper[4720]: I1013 17:25:56.727315 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:56 crc kubenswrapper[4720]: I1013 17:25:56.727382 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:56 crc kubenswrapper[4720]: I1013 17:25:56.727405 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:56 crc kubenswrapper[4720]: I1013 17:25:56.727435 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:56 crc kubenswrapper[4720]: I1013 17:25:56.727459 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:56Z","lastTransitionTime":"2025-10-13T17:25:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:56 crc kubenswrapper[4720]: I1013 17:25:56.830566 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:56 crc kubenswrapper[4720]: I1013 17:25:56.830608 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:56 crc kubenswrapper[4720]: I1013 17:25:56.830618 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:56 crc kubenswrapper[4720]: I1013 17:25:56.830636 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:56 crc kubenswrapper[4720]: I1013 17:25:56.830646 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:56Z","lastTransitionTime":"2025-10-13T17:25:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:56 crc kubenswrapper[4720]: I1013 17:25:56.932712 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:56 crc kubenswrapper[4720]: I1013 17:25:56.932760 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:56 crc kubenswrapper[4720]: I1013 17:25:56.932772 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:56 crc kubenswrapper[4720]: I1013 17:25:56.932787 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:56 crc kubenswrapper[4720]: I1013 17:25:56.932797 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:56Z","lastTransitionTime":"2025-10-13T17:25:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:57 crc kubenswrapper[4720]: I1013 17:25:57.035923 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:57 crc kubenswrapper[4720]: I1013 17:25:57.035986 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:57 crc kubenswrapper[4720]: I1013 17:25:57.036002 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:57 crc kubenswrapper[4720]: I1013 17:25:57.036031 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:57 crc kubenswrapper[4720]: I1013 17:25:57.036049 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:57Z","lastTransitionTime":"2025-10-13T17:25:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:57 crc kubenswrapper[4720]: I1013 17:25:57.138949 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:57 crc kubenswrapper[4720]: I1013 17:25:57.139015 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:57 crc kubenswrapper[4720]: I1013 17:25:57.139033 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:57 crc kubenswrapper[4720]: I1013 17:25:57.139056 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:57 crc kubenswrapper[4720]: I1013 17:25:57.139073 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:57Z","lastTransitionTime":"2025-10-13T17:25:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:57 crc kubenswrapper[4720]: I1013 17:25:57.167957 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 17:25:57 crc kubenswrapper[4720]: I1013 17:25:57.168048 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 17:25:57 crc kubenswrapper[4720]: E1013 17:25:57.168132 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 13 17:25:57 crc kubenswrapper[4720]: E1013 17:25:57.168275 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 13 17:25:57 crc kubenswrapper[4720]: I1013 17:25:57.241798 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:57 crc kubenswrapper[4720]: I1013 17:25:57.241878 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:57 crc kubenswrapper[4720]: I1013 17:25:57.241899 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:57 crc kubenswrapper[4720]: I1013 17:25:57.241942 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:57 crc kubenswrapper[4720]: I1013 17:25:57.241962 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:57Z","lastTransitionTime":"2025-10-13T17:25:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:57 crc kubenswrapper[4720]: I1013 17:25:57.345399 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:57 crc kubenswrapper[4720]: I1013 17:25:57.345452 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:57 crc kubenswrapper[4720]: I1013 17:25:57.345475 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:57 crc kubenswrapper[4720]: I1013 17:25:57.345503 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:57 crc kubenswrapper[4720]: I1013 17:25:57.345525 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:57Z","lastTransitionTime":"2025-10-13T17:25:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:57 crc kubenswrapper[4720]: I1013 17:25:57.448323 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:57 crc kubenswrapper[4720]: I1013 17:25:57.448389 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:57 crc kubenswrapper[4720]: I1013 17:25:57.448406 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:57 crc kubenswrapper[4720]: I1013 17:25:57.448430 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:57 crc kubenswrapper[4720]: I1013 17:25:57.448448 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:57Z","lastTransitionTime":"2025-10-13T17:25:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:57 crc kubenswrapper[4720]: I1013 17:25:57.550711 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:57 crc kubenswrapper[4720]: I1013 17:25:57.550776 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:57 crc kubenswrapper[4720]: I1013 17:25:57.550797 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:57 crc kubenswrapper[4720]: I1013 17:25:57.550826 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:57 crc kubenswrapper[4720]: I1013 17:25:57.550849 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:57Z","lastTransitionTime":"2025-10-13T17:25:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:57 crc kubenswrapper[4720]: I1013 17:25:57.653823 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:57 crc kubenswrapper[4720]: I1013 17:25:57.653890 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:57 crc kubenswrapper[4720]: I1013 17:25:57.653907 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:57 crc kubenswrapper[4720]: I1013 17:25:57.653930 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:57 crc kubenswrapper[4720]: I1013 17:25:57.653947 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:57Z","lastTransitionTime":"2025-10-13T17:25:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:57 crc kubenswrapper[4720]: I1013 17:25:57.756415 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:57 crc kubenswrapper[4720]: I1013 17:25:57.756483 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:57 crc kubenswrapper[4720]: I1013 17:25:57.756497 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:57 crc kubenswrapper[4720]: I1013 17:25:57.756514 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:57 crc kubenswrapper[4720]: I1013 17:25:57.756525 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:57Z","lastTransitionTime":"2025-10-13T17:25:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:57 crc kubenswrapper[4720]: I1013 17:25:57.859264 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:57 crc kubenswrapper[4720]: I1013 17:25:57.859349 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:57 crc kubenswrapper[4720]: I1013 17:25:57.859367 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:57 crc kubenswrapper[4720]: I1013 17:25:57.859404 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:57 crc kubenswrapper[4720]: I1013 17:25:57.859420 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:57Z","lastTransitionTime":"2025-10-13T17:25:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:57 crc kubenswrapper[4720]: I1013 17:25:57.962538 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:57 crc kubenswrapper[4720]: I1013 17:25:57.962584 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:57 crc kubenswrapper[4720]: I1013 17:25:57.962594 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:57 crc kubenswrapper[4720]: I1013 17:25:57.962612 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:57 crc kubenswrapper[4720]: I1013 17:25:57.962623 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:57Z","lastTransitionTime":"2025-10-13T17:25:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:58 crc kubenswrapper[4720]: I1013 17:25:58.065298 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:58 crc kubenswrapper[4720]: I1013 17:25:58.065339 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:58 crc kubenswrapper[4720]: I1013 17:25:58.065350 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:58 crc kubenswrapper[4720]: I1013 17:25:58.065365 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:58 crc kubenswrapper[4720]: I1013 17:25:58.065375 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:58Z","lastTransitionTime":"2025-10-13T17:25:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:58 crc kubenswrapper[4720]: I1013 17:25:58.167674 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 17:25:58 crc kubenswrapper[4720]: I1013 17:25:58.167861 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:58 crc kubenswrapper[4720]: I1013 17:25:58.167891 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:58 crc kubenswrapper[4720]: I1013 17:25:58.167901 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:58 crc kubenswrapper[4720]: I1013 17:25:58.167915 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:58 crc kubenswrapper[4720]: I1013 17:25:58.167926 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:58Z","lastTransitionTime":"2025-10-13T17:25:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:58 crc kubenswrapper[4720]: E1013 17:25:58.168013 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 13 17:25:58 crc kubenswrapper[4720]: I1013 17:25:58.168128 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-c6ntg" Oct 13 17:25:58 crc kubenswrapper[4720]: E1013 17:25:58.168293 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c6ntg" podUID="c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61" Oct 13 17:25:58 crc kubenswrapper[4720]: I1013 17:25:58.269895 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:58 crc kubenswrapper[4720]: I1013 17:25:58.269940 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:58 crc kubenswrapper[4720]: I1013 17:25:58.269948 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:58 crc kubenswrapper[4720]: I1013 17:25:58.269962 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:58 crc kubenswrapper[4720]: I1013 17:25:58.269971 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:58Z","lastTransitionTime":"2025-10-13T17:25:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:58 crc kubenswrapper[4720]: I1013 17:25:58.285724 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61-metrics-certs\") pod \"network-metrics-daemon-c6ntg\" (UID: \"c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61\") " pod="openshift-multus/network-metrics-daemon-c6ntg" Oct 13 17:25:58 crc kubenswrapper[4720]: E1013 17:25:58.285865 4720 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 13 17:25:58 crc kubenswrapper[4720]: E1013 17:25:58.285915 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61-metrics-certs podName:c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61 nodeName:}" failed. No retries permitted until 2025-10-13 17:27:02.285899639 +0000 UTC m=+167.743149771 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61-metrics-certs") pod "network-metrics-daemon-c6ntg" (UID: "c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 13 17:25:58 crc kubenswrapper[4720]: I1013 17:25:58.372267 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:58 crc kubenswrapper[4720]: I1013 17:25:58.372308 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:58 crc kubenswrapper[4720]: I1013 17:25:58.372317 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:58 crc kubenswrapper[4720]: I1013 17:25:58.372330 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:58 crc kubenswrapper[4720]: I1013 17:25:58.372340 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:58Z","lastTransitionTime":"2025-10-13T17:25:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:58 crc kubenswrapper[4720]: I1013 17:25:58.474866 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:58 crc kubenswrapper[4720]: I1013 17:25:58.474929 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:58 crc kubenswrapper[4720]: I1013 17:25:58.474947 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:58 crc kubenswrapper[4720]: I1013 17:25:58.474974 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:58 crc kubenswrapper[4720]: I1013 17:25:58.474992 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:58Z","lastTransitionTime":"2025-10-13T17:25:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
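The nestedpendingoperations record above pushes the next mount attempt out to 17:27:02, 64 s after the failure; durationBeforeRetry 1m4s is consistent with kubelet's exponential per-volume backoff (a 500 ms base doubled seven times). A small Python sketch that pulls the retry deadline and backoff out of such a record, pure text parsing over the log line rather than any kubelet API:

import re
from datetime import datetime, timezone

# Tail of the nestedpendingoperations record above (shortened to the fields used).
record = ('failed. No retries permitted until 2025-10-13 17:27:02.285899639 '
          '+0000 UTC m=+167.743149771 (durationBeforeRetry 1m4s).')

m = re.search(r"No retries permitted until (\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) "
              r"\+0000 UTC.*\(durationBeforeRetry (\S+)\)", record)
# Go prints nanoseconds; trim to the microseconds strptime accepts.
retry_at = datetime.strptime(m.group(1)[:26], "%Y-%m-%d %H:%M:%S.%f").replace(tzinfo=timezone.utc)
print("next mount attempt:", retry_at.isoformat(), "| backoff:", m.group(2))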
Has your network provider started?"} Oct 13 17:25:58 crc kubenswrapper[4720]: I1013 17:25:58.577521 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:58 crc kubenswrapper[4720]: I1013 17:25:58.577555 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:58 crc kubenswrapper[4720]: I1013 17:25:58.577565 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:58 crc kubenswrapper[4720]: I1013 17:25:58.577578 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:58 crc kubenswrapper[4720]: I1013 17:25:58.577588 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:58Z","lastTransitionTime":"2025-10-13T17:25:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:58 crc kubenswrapper[4720]: I1013 17:25:58.680312 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:58 crc kubenswrapper[4720]: I1013 17:25:58.680386 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:58 crc kubenswrapper[4720]: I1013 17:25:58.680395 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:58 crc kubenswrapper[4720]: I1013 17:25:58.680420 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:58 crc kubenswrapper[4720]: I1013 17:25:58.680432 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:58Z","lastTransitionTime":"2025-10-13T17:25:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:58 crc kubenswrapper[4720]: I1013 17:25:58.783431 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:58 crc kubenswrapper[4720]: I1013 17:25:58.783495 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:58 crc kubenswrapper[4720]: I1013 17:25:58.783515 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:58 crc kubenswrapper[4720]: I1013 17:25:58.783537 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:58 crc kubenswrapper[4720]: I1013 17:25:58.783554 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:58Z","lastTransitionTime":"2025-10-13T17:25:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:58 crc kubenswrapper[4720]: I1013 17:25:58.886035 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:58 crc kubenswrapper[4720]: I1013 17:25:58.886099 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:58 crc kubenswrapper[4720]: I1013 17:25:58.886115 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:58 crc kubenswrapper[4720]: I1013 17:25:58.886142 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:58 crc kubenswrapper[4720]: I1013 17:25:58.886158 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:58Z","lastTransitionTime":"2025-10-13T17:25:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:58 crc kubenswrapper[4720]: I1013 17:25:58.989011 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:58 crc kubenswrapper[4720]: I1013 17:25:58.989069 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:58 crc kubenswrapper[4720]: I1013 17:25:58.989086 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:58 crc kubenswrapper[4720]: I1013 17:25:58.989109 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:58 crc kubenswrapper[4720]: I1013 17:25:58.989126 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:58Z","lastTransitionTime":"2025-10-13T17:25:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:59 crc kubenswrapper[4720]: I1013 17:25:59.092477 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:59 crc kubenswrapper[4720]: I1013 17:25:59.093052 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:59 crc kubenswrapper[4720]: I1013 17:25:59.093106 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:59 crc kubenswrapper[4720]: I1013 17:25:59.093130 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:59 crc kubenswrapper[4720]: I1013 17:25:59.093473 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:59Z","lastTransitionTime":"2025-10-13T17:25:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:59 crc kubenswrapper[4720]: I1013 17:25:59.168519 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 17:25:59 crc kubenswrapper[4720]: I1013 17:25:59.168635 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 17:25:59 crc kubenswrapper[4720]: E1013 17:25:59.168703 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 13 17:25:59 crc kubenswrapper[4720]: E1013 17:25:59.168825 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 13 17:25:59 crc kubenswrapper[4720]: I1013 17:25:59.195006 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:59 crc kubenswrapper[4720]: I1013 17:25:59.195058 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:59 crc kubenswrapper[4720]: I1013 17:25:59.195068 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:59 crc kubenswrapper[4720]: I1013 17:25:59.195085 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:59 crc kubenswrapper[4720]: I1013 17:25:59.195097 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:59Z","lastTransitionTime":"2025-10-13T17:25:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:59 crc kubenswrapper[4720]: I1013 17:25:59.297261 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:59 crc kubenswrapper[4720]: I1013 17:25:59.297307 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:59 crc kubenswrapper[4720]: I1013 17:25:59.297316 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:59 crc kubenswrapper[4720]: I1013 17:25:59.297330 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:59 crc kubenswrapper[4720]: I1013 17:25:59.297339 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:59Z","lastTransitionTime":"2025-10-13T17:25:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:59 crc kubenswrapper[4720]: I1013 17:25:59.400168 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:59 crc kubenswrapper[4720]: I1013 17:25:59.400258 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:59 crc kubenswrapper[4720]: I1013 17:25:59.400275 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:59 crc kubenswrapper[4720]: I1013 17:25:59.400300 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:59 crc kubenswrapper[4720]: I1013 17:25:59.400318 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:59Z","lastTransitionTime":"2025-10-13T17:25:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:59 crc kubenswrapper[4720]: I1013 17:25:59.503613 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:59 crc kubenswrapper[4720]: I1013 17:25:59.503652 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:59 crc kubenswrapper[4720]: I1013 17:25:59.503661 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:59 crc kubenswrapper[4720]: I1013 17:25:59.503675 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:59 crc kubenswrapper[4720]: I1013 17:25:59.503685 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:59Z","lastTransitionTime":"2025-10-13T17:25:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:59 crc kubenswrapper[4720]: I1013 17:25:59.606572 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:59 crc kubenswrapper[4720]: I1013 17:25:59.606635 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:59 crc kubenswrapper[4720]: I1013 17:25:59.606656 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:59 crc kubenswrapper[4720]: I1013 17:25:59.606680 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:59 crc kubenswrapper[4720]: I1013 17:25:59.606697 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:59Z","lastTransitionTime":"2025-10-13T17:25:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:59 crc kubenswrapper[4720]: I1013 17:25:59.709047 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:59 crc kubenswrapper[4720]: I1013 17:25:59.709110 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:59 crc kubenswrapper[4720]: I1013 17:25:59.709129 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:59 crc kubenswrapper[4720]: I1013 17:25:59.709219 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:59 crc kubenswrapper[4720]: I1013 17:25:59.709246 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:59Z","lastTransitionTime":"2025-10-13T17:25:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:25:59 crc kubenswrapper[4720]: I1013 17:25:59.811473 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:59 crc kubenswrapper[4720]: I1013 17:25:59.811535 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:59 crc kubenswrapper[4720]: I1013 17:25:59.811558 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:59 crc kubenswrapper[4720]: I1013 17:25:59.811590 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:59 crc kubenswrapper[4720]: I1013 17:25:59.811612 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:59Z","lastTransitionTime":"2025-10-13T17:25:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:25:59 crc kubenswrapper[4720]: I1013 17:25:59.913843 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:25:59 crc kubenswrapper[4720]: I1013 17:25:59.913905 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:25:59 crc kubenswrapper[4720]: I1013 17:25:59.913925 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:25:59 crc kubenswrapper[4720]: I1013 17:25:59.913955 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:25:59 crc kubenswrapper[4720]: I1013 17:25:59.913974 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:25:59Z","lastTransitionTime":"2025-10-13T17:25:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:26:00 crc kubenswrapper[4720]: I1013 17:26:00.017275 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:26:00 crc kubenswrapper[4720]: I1013 17:26:00.017356 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:26:00 crc kubenswrapper[4720]: I1013 17:26:00.017378 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:26:00 crc kubenswrapper[4720]: I1013 17:26:00.017416 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:26:00 crc kubenswrapper[4720]: I1013 17:26:00.017444 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:26:00Z","lastTransitionTime":"2025-10-13T17:26:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:26:00 crc kubenswrapper[4720]: I1013 17:26:00.121249 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:26:00 crc kubenswrapper[4720]: I1013 17:26:00.121305 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:26:00 crc kubenswrapper[4720]: I1013 17:26:00.121326 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:26:00 crc kubenswrapper[4720]: I1013 17:26:00.121350 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:26:00 crc kubenswrapper[4720]: I1013 17:26:00.121383 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:26:00Z","lastTransitionTime":"2025-10-13T17:26:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:26:00 crc kubenswrapper[4720]: I1013 17:26:00.168339 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c6ntg" Oct 13 17:26:00 crc kubenswrapper[4720]: I1013 17:26:00.168341 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 17:26:00 crc kubenswrapper[4720]: E1013 17:26:00.168685 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c6ntg" podUID="c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61" Oct 13 17:26:00 crc kubenswrapper[4720]: E1013 17:26:00.168794 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 13 17:26:00 crc kubenswrapper[4720]: I1013 17:26:00.223862 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:26:00 crc kubenswrapper[4720]: I1013 17:26:00.223917 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:26:00 crc kubenswrapper[4720]: I1013 17:26:00.223934 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:26:00 crc kubenswrapper[4720]: I1013 17:26:00.223958 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:26:00 crc kubenswrapper[4720]: I1013 17:26:00.223976 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:26:00Z","lastTransitionTime":"2025-10-13T17:26:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:26:00 crc kubenswrapper[4720]: I1013 17:26:00.326977 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:26:00 crc kubenswrapper[4720]: I1013 17:26:00.327026 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:26:00 crc kubenswrapper[4720]: I1013 17:26:00.327036 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:26:00 crc kubenswrapper[4720]: I1013 17:26:00.327052 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:26:00 crc kubenswrapper[4720]: I1013 17:26:00.327063 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:26:00Z","lastTransitionTime":"2025-10-13T17:26:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:26:00 crc kubenswrapper[4720]: I1013 17:26:00.430090 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:26:00 crc kubenswrapper[4720]: I1013 17:26:00.430160 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:26:00 crc kubenswrapper[4720]: I1013 17:26:00.430170 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:26:00 crc kubenswrapper[4720]: I1013 17:26:00.430206 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:26:00 crc kubenswrapper[4720]: I1013 17:26:00.430218 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:26:00Z","lastTransitionTime":"2025-10-13T17:26:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:26:00 crc kubenswrapper[4720]: I1013 17:26:00.532695 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:26:00 crc kubenswrapper[4720]: I1013 17:26:00.532768 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:26:00 crc kubenswrapper[4720]: I1013 17:26:00.532790 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:26:00 crc kubenswrapper[4720]: I1013 17:26:00.532820 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:26:00 crc kubenswrapper[4720]: I1013 17:26:00.532842 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:26:00Z","lastTransitionTime":"2025-10-13T17:26:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:26:00 crc kubenswrapper[4720]: I1013 17:26:00.635759 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:26:00 crc kubenswrapper[4720]: I1013 17:26:00.635816 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:26:00 crc kubenswrapper[4720]: I1013 17:26:00.635835 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:26:00 crc kubenswrapper[4720]: I1013 17:26:00.635859 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:26:00 crc kubenswrapper[4720]: I1013 17:26:00.635880 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:26:00Z","lastTransitionTime":"2025-10-13T17:26:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:26:00 crc kubenswrapper[4720]: I1013 17:26:00.738913 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:26:00 crc kubenswrapper[4720]: I1013 17:26:00.738974 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:26:00 crc kubenswrapper[4720]: I1013 17:26:00.738992 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:26:00 crc kubenswrapper[4720]: I1013 17:26:00.739016 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:26:00 crc kubenswrapper[4720]: I1013 17:26:00.739032 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:26:00Z","lastTransitionTime":"2025-10-13T17:26:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:26:00 crc kubenswrapper[4720]: I1013 17:26:00.841760 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:26:00 crc kubenswrapper[4720]: I1013 17:26:00.841821 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:26:00 crc kubenswrapper[4720]: I1013 17:26:00.841839 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:26:00 crc kubenswrapper[4720]: I1013 17:26:00.841865 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:26:00 crc kubenswrapper[4720]: I1013 17:26:00.841882 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:26:00Z","lastTransitionTime":"2025-10-13T17:26:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:26:00 crc kubenswrapper[4720]: I1013 17:26:00.944658 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:26:00 crc kubenswrapper[4720]: I1013 17:26:00.944722 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:26:00 crc kubenswrapper[4720]: I1013 17:26:00.944740 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:26:00 crc kubenswrapper[4720]: I1013 17:26:00.944765 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:26:00 crc kubenswrapper[4720]: I1013 17:26:00.944782 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:26:00Z","lastTransitionTime":"2025-10-13T17:26:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:26:01 crc kubenswrapper[4720]: I1013 17:26:01.047824 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:26:01 crc kubenswrapper[4720]: I1013 17:26:01.047873 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:26:01 crc kubenswrapper[4720]: I1013 17:26:01.047884 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:26:01 crc kubenswrapper[4720]: I1013 17:26:01.047904 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:26:01 crc kubenswrapper[4720]: I1013 17:26:01.047915 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:26:01Z","lastTransitionTime":"2025-10-13T17:26:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:26:01 crc kubenswrapper[4720]: I1013 17:26:01.150496 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:26:01 crc kubenswrapper[4720]: I1013 17:26:01.150539 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:26:01 crc kubenswrapper[4720]: I1013 17:26:01.150549 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:26:01 crc kubenswrapper[4720]: I1013 17:26:01.150566 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:26:01 crc kubenswrapper[4720]: I1013 17:26:01.150594 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:26:01Z","lastTransitionTime":"2025-10-13T17:26:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:26:01 crc kubenswrapper[4720]: I1013 17:26:01.167897 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 17:26:01 crc kubenswrapper[4720]: I1013 17:26:01.167985 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 17:26:01 crc kubenswrapper[4720]: E1013 17:26:01.168149 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 13 17:26:01 crc kubenswrapper[4720]: E1013 17:26:01.168279 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 13 17:26:01 crc kubenswrapper[4720]: I1013 17:26:01.253179 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:26:01 crc kubenswrapper[4720]: I1013 17:26:01.253333 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:26:01 crc kubenswrapper[4720]: I1013 17:26:01.253356 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:26:01 crc kubenswrapper[4720]: I1013 17:26:01.253387 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:26:01 crc kubenswrapper[4720]: I1013 17:26:01.253412 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:26:01Z","lastTransitionTime":"2025-10-13T17:26:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:26:01 crc kubenswrapper[4720]: I1013 17:26:01.356617 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:26:01 crc kubenswrapper[4720]: I1013 17:26:01.356662 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:26:01 crc kubenswrapper[4720]: I1013 17:26:01.356671 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:26:01 crc kubenswrapper[4720]: I1013 17:26:01.356685 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:26:01 crc kubenswrapper[4720]: I1013 17:26:01.356696 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:26:01Z","lastTransitionTime":"2025-10-13T17:26:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:26:01 crc kubenswrapper[4720]: I1013 17:26:01.459505 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:26:01 crc kubenswrapper[4720]: I1013 17:26:01.459556 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:26:01 crc kubenswrapper[4720]: I1013 17:26:01.459572 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:26:01 crc kubenswrapper[4720]: I1013 17:26:01.459597 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:26:01 crc kubenswrapper[4720]: I1013 17:26:01.459615 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:26:01Z","lastTransitionTime":"2025-10-13T17:26:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:26:01 crc kubenswrapper[4720]: I1013 17:26:01.562727 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:26:01 crc kubenswrapper[4720]: I1013 17:26:01.562785 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:26:01 crc kubenswrapper[4720]: I1013 17:26:01.562801 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:26:01 crc kubenswrapper[4720]: I1013 17:26:01.562824 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:26:01 crc kubenswrapper[4720]: I1013 17:26:01.562840 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:26:01Z","lastTransitionTime":"2025-10-13T17:26:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:26:01 crc kubenswrapper[4720]: I1013 17:26:01.667237 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:26:01 crc kubenswrapper[4720]: I1013 17:26:01.667321 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:26:01 crc kubenswrapper[4720]: I1013 17:26:01.667346 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:26:01 crc kubenswrapper[4720]: I1013 17:26:01.667394 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:26:01 crc kubenswrapper[4720]: I1013 17:26:01.667433 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:26:01Z","lastTransitionTime":"2025-10-13T17:26:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:26:01 crc kubenswrapper[4720]: I1013 17:26:01.770322 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:26:01 crc kubenswrapper[4720]: I1013 17:26:01.770395 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:26:01 crc kubenswrapper[4720]: I1013 17:26:01.770417 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:26:01 crc kubenswrapper[4720]: I1013 17:26:01.770446 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:26:01 crc kubenswrapper[4720]: I1013 17:26:01.770467 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:26:01Z","lastTransitionTime":"2025-10-13T17:26:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:26:01 crc kubenswrapper[4720]: I1013 17:26:01.872931 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:26:01 crc kubenswrapper[4720]: I1013 17:26:01.873077 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:26:01 crc kubenswrapper[4720]: I1013 17:26:01.873118 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:26:01 crc kubenswrapper[4720]: I1013 17:26:01.873163 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:26:01 crc kubenswrapper[4720]: I1013 17:26:01.873230 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:26:01Z","lastTransitionTime":"2025-10-13T17:26:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:26:01 crc kubenswrapper[4720]: I1013 17:26:01.975702 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:26:01 crc kubenswrapper[4720]: I1013 17:26:01.976294 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:26:01 crc kubenswrapper[4720]: I1013 17:26:01.976518 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:26:01 crc kubenswrapper[4720]: I1013 17:26:01.976726 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:26:01 crc kubenswrapper[4720]: I1013 17:26:01.976978 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:26:01Z","lastTransitionTime":"2025-10-13T17:26:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:26:02 crc kubenswrapper[4720]: I1013 17:26:02.079956 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:26:02 crc kubenswrapper[4720]: I1013 17:26:02.080014 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:26:02 crc kubenswrapper[4720]: I1013 17:26:02.080035 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:26:02 crc kubenswrapper[4720]: I1013 17:26:02.080065 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:26:02 crc kubenswrapper[4720]: I1013 17:26:02.080088 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:26:02Z","lastTransitionTime":"2025-10-13T17:26:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:26:02 crc kubenswrapper[4720]: I1013 17:26:02.167495 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 17:26:02 crc kubenswrapper[4720]: I1013 17:26:02.167548 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c6ntg" Oct 13 17:26:02 crc kubenswrapper[4720]: E1013 17:26:02.167638 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 13 17:26:02 crc kubenswrapper[4720]: E1013 17:26:02.167854 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c6ntg" podUID="c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61" Oct 13 17:26:02 crc kubenswrapper[4720]: I1013 17:26:02.182313 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:26:02 crc kubenswrapper[4720]: I1013 17:26:02.182339 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:26:02 crc kubenswrapper[4720]: I1013 17:26:02.182350 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:26:02 crc kubenswrapper[4720]: I1013 17:26:02.182364 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:26:02 crc kubenswrapper[4720]: I1013 17:26:02.182372 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:26:02Z","lastTransitionTime":"2025-10-13T17:26:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:26:02 crc kubenswrapper[4720]: I1013 17:26:02.285335 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:26:02 crc kubenswrapper[4720]: I1013 17:26:02.285398 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:26:02 crc kubenswrapper[4720]: I1013 17:26:02.285424 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:26:02 crc kubenswrapper[4720]: I1013 17:26:02.285455 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:26:02 crc kubenswrapper[4720]: I1013 17:26:02.285477 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:26:02Z","lastTransitionTime":"2025-10-13T17:26:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:26:02 crc kubenswrapper[4720]: I1013 17:26:02.387981 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:26:02 crc kubenswrapper[4720]: I1013 17:26:02.388045 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:26:02 crc kubenswrapper[4720]: I1013 17:26:02.388063 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:26:02 crc kubenswrapper[4720]: I1013 17:26:02.388085 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:26:02 crc kubenswrapper[4720]: I1013 17:26:02.388104 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:26:02Z","lastTransitionTime":"2025-10-13T17:26:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:26:02 crc kubenswrapper[4720]: I1013 17:26:02.490693 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:26:02 crc kubenswrapper[4720]: I1013 17:26:02.490754 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:26:02 crc kubenswrapper[4720]: I1013 17:26:02.490773 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:26:02 crc kubenswrapper[4720]: I1013 17:26:02.490798 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:26:02 crc kubenswrapper[4720]: I1013 17:26:02.490815 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:26:02Z","lastTransitionTime":"2025-10-13T17:26:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:26:02 crc kubenswrapper[4720]: I1013 17:26:02.593235 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:26:02 crc kubenswrapper[4720]: I1013 17:26:02.593270 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:26:02 crc kubenswrapper[4720]: I1013 17:26:02.593280 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:26:02 crc kubenswrapper[4720]: I1013 17:26:02.593295 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:26:02 crc kubenswrapper[4720]: I1013 17:26:02.593306 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:26:02Z","lastTransitionTime":"2025-10-13T17:26:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:26:02 crc kubenswrapper[4720]: I1013 17:26:02.696153 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:26:02 crc kubenswrapper[4720]: I1013 17:26:02.696209 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:26:02 crc kubenswrapper[4720]: I1013 17:26:02.696219 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:26:02 crc kubenswrapper[4720]: I1013 17:26:02.696232 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:26:02 crc kubenswrapper[4720]: I1013 17:26:02.696240 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:26:02Z","lastTransitionTime":"2025-10-13T17:26:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:26:02 crc kubenswrapper[4720]: I1013 17:26:02.798251 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:26:02 crc kubenswrapper[4720]: I1013 17:26:02.798283 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:26:02 crc kubenswrapper[4720]: I1013 17:26:02.798294 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:26:02 crc kubenswrapper[4720]: I1013 17:26:02.798308 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:26:02 crc kubenswrapper[4720]: I1013 17:26:02.798319 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:26:02Z","lastTransitionTime":"2025-10-13T17:26:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:26:02 crc kubenswrapper[4720]: I1013 17:26:02.900955 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:26:02 crc kubenswrapper[4720]: I1013 17:26:02.901001 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:26:02 crc kubenswrapper[4720]: I1013 17:26:02.901011 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:26:02 crc kubenswrapper[4720]: I1013 17:26:02.901023 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:26:02 crc kubenswrapper[4720]: I1013 17:26:02.901032 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:26:02Z","lastTransitionTime":"2025-10-13T17:26:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:26:03 crc kubenswrapper[4720]: I1013 17:26:03.003331 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:26:03 crc kubenswrapper[4720]: I1013 17:26:03.003387 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:26:03 crc kubenswrapper[4720]: I1013 17:26:03.003433 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:26:03 crc kubenswrapper[4720]: I1013 17:26:03.003457 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:26:03 crc kubenswrapper[4720]: I1013 17:26:03.003474 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:26:03Z","lastTransitionTime":"2025-10-13T17:26:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:26:03 crc kubenswrapper[4720]: I1013 17:26:03.106713 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:26:03 crc kubenswrapper[4720]: I1013 17:26:03.106770 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:26:03 crc kubenswrapper[4720]: I1013 17:26:03.106788 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:26:03 crc kubenswrapper[4720]: I1013 17:26:03.106812 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:26:03 crc kubenswrapper[4720]: I1013 17:26:03.106829 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:26:03Z","lastTransitionTime":"2025-10-13T17:26:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:26:03 crc kubenswrapper[4720]: I1013 17:26:03.167416 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 17:26:03 crc kubenswrapper[4720]: I1013 17:26:03.167550 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 17:26:03 crc kubenswrapper[4720]: E1013 17:26:03.167821 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 13 17:26:03 crc kubenswrapper[4720]: E1013 17:26:03.168049 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 13 17:26:03 crc kubenswrapper[4720]: I1013 17:26:03.209575 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:26:03 crc kubenswrapper[4720]: I1013 17:26:03.209615 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:26:03 crc kubenswrapper[4720]: I1013 17:26:03.209624 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:26:03 crc kubenswrapper[4720]: I1013 17:26:03.209637 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:26:03 crc kubenswrapper[4720]: I1013 17:26:03.209647 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:26:03Z","lastTransitionTime":"2025-10-13T17:26:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:26:03 crc kubenswrapper[4720]: I1013 17:26:03.312914 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:26:03 crc kubenswrapper[4720]: I1013 17:26:03.312955 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:26:03 crc kubenswrapper[4720]: I1013 17:26:03.312965 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:26:03 crc kubenswrapper[4720]: I1013 17:26:03.312979 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:26:03 crc kubenswrapper[4720]: I1013 17:26:03.312988 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:26:03Z","lastTransitionTime":"2025-10-13T17:26:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 13 17:26:03 crc kubenswrapper[4720]: I1013 17:26:03.360408 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 13 17:26:03 crc kubenswrapper[4720]: I1013 17:26:03.360833 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 13 17:26:03 crc kubenswrapper[4720]: I1013 17:26:03.361101 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 13 17:26:03 crc kubenswrapper[4720]: I1013 17:26:03.361580 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 13 17:26:03 crc kubenswrapper[4720]: I1013 17:26:03.362080 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-13T17:26:03Z","lastTransitionTime":"2025-10-13T17:26:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 13 17:26:03 crc kubenswrapper[4720]: I1013 17:26:03.436599 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-99dn7"] Oct 13 17:26:03 crc kubenswrapper[4720]: I1013 17:26:03.438608 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-99dn7" Oct 13 17:26:03 crc kubenswrapper[4720]: I1013 17:26:03.442496 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Oct 13 17:26:03 crc kubenswrapper[4720]: I1013 17:26:03.442707 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Oct 13 17:26:03 crc kubenswrapper[4720]: I1013 17:26:03.443756 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Oct 13 17:26:03 crc kubenswrapper[4720]: I1013 17:26:03.444157 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Oct 13 17:26:03 crc kubenswrapper[4720]: I1013 17:26:03.488050 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podStartSLOduration=84.488030276 podStartE2EDuration="1m24.488030276s" podCreationTimestamp="2025-10-13 17:24:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:26:03.46426973 +0000 UTC m=+108.921519892" watchObservedRunningTime="2025-10-13 17:26:03.488030276 +0000 UTC m=+108.945280418" Oct 13 17:26:03 crc kubenswrapper[4720]: I1013 17:26:03.507273 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-5rgfd" podStartSLOduration=84.507240109 podStartE2EDuration="1m24.507240109s" podCreationTimestamp="2025-10-13 17:24:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:26:03.488671482 +0000 UTC m=+108.945921624" watchObservedRunningTime="2025-10-13 17:26:03.507240109 +0000 UTC m=+108.964490281" 
Oct 13 17:26:03 crc kubenswrapper[4720]: I1013 17:26:03.547148 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e0241104-0cc4-408e-b677-217a50111352-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-99dn7\" (UID: \"e0241104-0cc4-408e-b677-217a50111352\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-99dn7" Oct 13 17:26:03 crc kubenswrapper[4720]: I1013 17:26:03.547261 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/e0241104-0cc4-408e-b677-217a50111352-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-99dn7\" (UID: \"e0241104-0cc4-408e-b677-217a50111352\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-99dn7" Oct 13 17:26:03 crc kubenswrapper[4720]: I1013 17:26:03.547301 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e0241104-0cc4-408e-b677-217a50111352-service-ca\") pod \"cluster-version-operator-5c965bbfc6-99dn7\" (UID: \"e0241104-0cc4-408e-b677-217a50111352\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-99dn7" Oct 13 17:26:03 crc kubenswrapper[4720]: I1013 17:26:03.547364 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0241104-0cc4-408e-b677-217a50111352-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-99dn7\" (UID: \"e0241104-0cc4-408e-b677-217a50111352\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-99dn7" Oct 13 17:26:03 crc kubenswrapper[4720]: I1013 17:26:03.547413 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/e0241104-0cc4-408e-b677-217a50111352-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-99dn7\" (UID: \"e0241104-0cc4-408e-b677-217a50111352\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-99dn7" Oct 13 17:26:03 crc kubenswrapper[4720]: I1013 17:26:03.577849 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=86.577815251 podStartE2EDuration="1m26.577815251s" podCreationTimestamp="2025-10-13 17:24:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:26:03.560928527 +0000 UTC m=+109.018178699" watchObservedRunningTime="2025-10-13 17:26:03.577815251 +0000 UTC m=+109.035065423" Oct 13 17:26:03 crc kubenswrapper[4720]: I1013 17:26:03.597279 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=41.597249289 podStartE2EDuration="41.597249289s" podCreationTimestamp="2025-10-13 17:25:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:26:03.578078617 +0000 UTC m=+109.035328779" watchObservedRunningTime="2025-10-13 17:26:03.597249289 +0000 UTC m=+109.054499471" Oct 13 17:26:03 crc kubenswrapper[4720]: I1013 17:26:03.648687 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"service-ca\" (UniqueName: \"kubernetes.io/configmap/e0241104-0cc4-408e-b677-217a50111352-service-ca\") pod \"cluster-version-operator-5c965bbfc6-99dn7\" (UID: \"e0241104-0cc4-408e-b677-217a50111352\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-99dn7" Oct 13 17:26:03 crc kubenswrapper[4720]: I1013 17:26:03.648803 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0241104-0cc4-408e-b677-217a50111352-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-99dn7\" (UID: \"e0241104-0cc4-408e-b677-217a50111352\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-99dn7" Oct 13 17:26:03 crc kubenswrapper[4720]: I1013 17:26:03.648839 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/e0241104-0cc4-408e-b677-217a50111352-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-99dn7\" (UID: \"e0241104-0cc4-408e-b677-217a50111352\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-99dn7" Oct 13 17:26:03 crc kubenswrapper[4720]: I1013 17:26:03.648921 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e0241104-0cc4-408e-b677-217a50111352-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-99dn7\" (UID: \"e0241104-0cc4-408e-b677-217a50111352\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-99dn7" Oct 13 17:26:03 crc kubenswrapper[4720]: I1013 17:26:03.648954 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/e0241104-0cc4-408e-b677-217a50111352-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-99dn7\" (UID: \"e0241104-0cc4-408e-b677-217a50111352\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-99dn7" Oct 13 17:26:03 crc kubenswrapper[4720]: I1013 17:26:03.649025 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/e0241104-0cc4-408e-b677-217a50111352-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-99dn7\" (UID: \"e0241104-0cc4-408e-b677-217a50111352\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-99dn7" Oct 13 17:26:03 crc kubenswrapper[4720]: I1013 17:26:03.649050 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/e0241104-0cc4-408e-b677-217a50111352-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-99dn7\" (UID: \"e0241104-0cc4-408e-b677-217a50111352\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-99dn7" Oct 13 17:26:03 crc kubenswrapper[4720]: I1013 17:26:03.650496 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e0241104-0cc4-408e-b677-217a50111352-service-ca\") pod \"cluster-version-operator-5c965bbfc6-99dn7\" (UID: \"e0241104-0cc4-408e-b677-217a50111352\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-99dn7" Oct 13 17:26:03 crc kubenswrapper[4720]: I1013 17:26:03.658779 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dnjrc" podStartSLOduration=83.658753003 
podStartE2EDuration="1m23.658753003s" podCreationTimestamp="2025-10-13 17:24:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:26:03.633859188 +0000 UTC m=+109.091109350" watchObservedRunningTime="2025-10-13 17:26:03.658753003 +0000 UTC m=+109.116003175" Oct 13 17:26:03 crc kubenswrapper[4720]: I1013 17:26:03.658884 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0241104-0cc4-408e-b677-217a50111352-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-99dn7\" (UID: \"e0241104-0cc4-408e-b677-217a50111352\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-99dn7" Oct 13 17:26:03 crc kubenswrapper[4720]: I1013 17:26:03.680274 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e0241104-0cc4-408e-b677-217a50111352-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-99dn7\" (UID: \"e0241104-0cc4-408e-b677-217a50111352\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-99dn7" Oct 13 17:26:03 crc kubenswrapper[4720]: I1013 17:26:03.683963 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=88.683944585 podStartE2EDuration="1m28.683944585s" podCreationTimestamp="2025-10-13 17:24:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:26:03.659338277 +0000 UTC m=+109.116588419" watchObservedRunningTime="2025-10-13 17:26:03.683944585 +0000 UTC m=+109.141194757" Oct 13 17:26:03 crc kubenswrapper[4720]: I1013 17:26:03.710020 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-lxmjt" podStartSLOduration=84.709996139 podStartE2EDuration="1m24.709996139s" podCreationTimestamp="2025-10-13 17:24:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:26:03.683932075 +0000 UTC m=+109.141182277" watchObservedRunningTime="2025-10-13 17:26:03.709996139 +0000 UTC m=+109.167246281" Oct 13 17:26:03 crc kubenswrapper[4720]: I1013 17:26:03.720922 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-bvtrx" podStartSLOduration=84.720900393 podStartE2EDuration="1m24.720900393s" podCreationTimestamp="2025-10-13 17:24:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:26:03.720440391 +0000 UTC m=+109.177690543" watchObservedRunningTime="2025-10-13 17:26:03.720900393 +0000 UTC m=+109.178150545" Oct 13 17:26:03 crc kubenswrapper[4720]: I1013 17:26:03.754569 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=88.754547248 podStartE2EDuration="1m28.754547248s" podCreationTimestamp="2025-10-13 17:24:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:26:03.754545248 +0000 UTC m=+109.211795420" watchObservedRunningTime="2025-10-13 17:26:03.754547248 +0000 UTC m=+109.211797400" Oct 13 17:26:03 crc kubenswrapper[4720]: I1013 
17:26:03.754929 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-pmjlm" podStartSLOduration=84.754924727 podStartE2EDuration="1m24.754924727s" podCreationTimestamp="2025-10-13 17:24:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:26:03.736634368 +0000 UTC m=+109.193884510" watchObservedRunningTime="2025-10-13 17:26:03.754924727 +0000 UTC m=+109.212174869" Oct 13 17:26:03 crc kubenswrapper[4720]: I1013 17:26:03.755967 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-99dn7" Oct 13 17:26:03 crc kubenswrapper[4720]: I1013 17:26:03.770651 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=55.770628641 podStartE2EDuration="55.770628641s" podCreationTimestamp="2025-10-13 17:25:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:26:03.769586195 +0000 UTC m=+109.226836337" watchObservedRunningTime="2025-10-13 17:26:03.770628641 +0000 UTC m=+109.227878783" Oct 13 17:26:04 crc kubenswrapper[4720]: I1013 17:26:04.167970 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c6ntg" Oct 13 17:26:04 crc kubenswrapper[4720]: I1013 17:26:04.168075 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 17:26:04 crc kubenswrapper[4720]: E1013 17:26:04.168208 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c6ntg" podUID="c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61" Oct 13 17:26:04 crc kubenswrapper[4720]: E1013 17:26:04.168345 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 13 17:26:04 crc kubenswrapper[4720]: I1013 17:26:04.731813 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-99dn7" event={"ID":"e0241104-0cc4-408e-b677-217a50111352","Type":"ContainerStarted","Data":"37d5c731cdd4eb0ff88f2f9d535bcb18896161f010169bf42d1789b59bf6a81d"} Oct 13 17:26:04 crc kubenswrapper[4720]: I1013 17:26:04.731885 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-99dn7" event={"ID":"e0241104-0cc4-408e-b677-217a50111352","Type":"ContainerStarted","Data":"32c8e83266d71cb7b7b5be62faedaac9a7934d54fe4de77ef1452f202abb44eb"} Oct 13 17:26:04 crc kubenswrapper[4720]: I1013 17:26:04.750678 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-99dn7" podStartSLOduration=85.750656017 podStartE2EDuration="1m25.750656017s" podCreationTimestamp="2025-10-13 17:24:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:26:04.749954209 +0000 UTC m=+110.207204381" watchObservedRunningTime="2025-10-13 17:26:04.750656017 +0000 UTC m=+110.207906179" Oct 13 17:26:05 crc kubenswrapper[4720]: I1013 17:26:05.167647 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 17:26:05 crc kubenswrapper[4720]: I1013 17:26:05.167733 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 17:26:05 crc kubenswrapper[4720]: E1013 17:26:05.169454 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 13 17:26:05 crc kubenswrapper[4720]: E1013 17:26:05.169694 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 13 17:26:06 crc kubenswrapper[4720]: I1013 17:26:06.167939 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 17:26:06 crc kubenswrapper[4720]: I1013 17:26:06.168003 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c6ntg" Oct 13 17:26:06 crc kubenswrapper[4720]: E1013 17:26:06.168106 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 13 17:26:06 crc kubenswrapper[4720]: E1013 17:26:06.168257 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c6ntg" podUID="c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61" Oct 13 17:26:07 crc kubenswrapper[4720]: I1013 17:26:07.167856 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 17:26:07 crc kubenswrapper[4720]: I1013 17:26:07.167856 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 17:26:07 crc kubenswrapper[4720]: E1013 17:26:07.168063 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 13 17:26:07 crc kubenswrapper[4720]: E1013 17:26:07.168264 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 13 17:26:08 crc kubenswrapper[4720]: I1013 17:26:08.167679 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 17:26:08 crc kubenswrapper[4720]: I1013 17:26:08.167747 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c6ntg" Oct 13 17:26:08 crc kubenswrapper[4720]: E1013 17:26:08.167890 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 13 17:26:08 crc kubenswrapper[4720]: E1013 17:26:08.168072 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c6ntg" podUID="c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61" Oct 13 17:26:09 crc kubenswrapper[4720]: I1013 17:26:09.167923 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 17:26:09 crc kubenswrapper[4720]: I1013 17:26:09.168110 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 17:26:09 crc kubenswrapper[4720]: E1013 17:26:09.169109 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 13 17:26:09 crc kubenswrapper[4720]: E1013 17:26:09.169113 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 13 17:26:10 crc kubenswrapper[4720]: I1013 17:26:10.167497 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 17:26:10 crc kubenswrapper[4720]: I1013 17:26:10.167646 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c6ntg" Oct 13 17:26:10 crc kubenswrapper[4720]: E1013 17:26:10.167742 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 13 17:26:10 crc kubenswrapper[4720]: E1013 17:26:10.168277 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c6ntg" podUID="c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61" Oct 13 17:26:10 crc kubenswrapper[4720]: I1013 17:26:10.168773 4720 scope.go:117] "RemoveContainer" containerID="c62dd5ae9eb692f394556f2d0b70bf14a7c99fe714ad7084a5d08b324a4435de" Oct 13 17:26:10 crc kubenswrapper[4720]: E1013 17:26:10.169027 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-pn6lz_openshift-ovn-kubernetes(8064812e-b6aa-4f56-81c9-16154c00abad)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" podUID="8064812e-b6aa-4f56-81c9-16154c00abad" Oct 13 17:26:11 crc kubenswrapper[4720]: I1013 17:26:11.167403 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 13 17:26:11 crc kubenswrapper[4720]: E1013 17:26:11.167517 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 13 17:26:11 crc kubenswrapper[4720]: I1013 17:26:11.167606 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 13 17:26:11 crc kubenswrapper[4720]: E1013 17:26:11.167758 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 13 17:26:12 crc kubenswrapper[4720]: I1013 17:26:12.167310 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c6ntg"
Oct 13 17:26:12 crc kubenswrapper[4720]: I1013 17:26:12.167369 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 13 17:26:12 crc kubenswrapper[4720]: E1013 17:26:12.167514 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c6ntg" podUID="c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61"
Oct 13 17:26:12 crc kubenswrapper[4720]: E1013 17:26:12.167646 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 13 17:26:13 crc kubenswrapper[4720]: I1013 17:26:13.168084 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 13 17:26:13 crc kubenswrapper[4720]: I1013 17:26:13.168409 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 13 17:26:13 crc kubenswrapper[4720]: E1013 17:26:13.168542 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 13 17:26:13 crc kubenswrapper[4720]: E1013 17:26:13.168791 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 13 17:26:13 crc kubenswrapper[4720]: I1013 17:26:13.764133 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lxmjt_7b45ec2d-5bea-4007-a49f-224a866f93eb/kube-multus/1.log"
Oct 13 17:26:13 crc kubenswrapper[4720]: I1013 17:26:13.764787 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lxmjt_7b45ec2d-5bea-4007-a49f-224a866f93eb/kube-multus/0.log"
Oct 13 17:26:13 crc kubenswrapper[4720]: I1013 17:26:13.764861 4720 generic.go:334] "Generic (PLEG): container finished" podID="7b45ec2d-5bea-4007-a49f-224a866f93eb" containerID="835c6ec8dc7ae3785a23ae45e5c9dc4b3bcc24428ca4c3865a6dd790d5956e74" exitCode=1
Oct 13 17:26:13 crc kubenswrapper[4720]: I1013 17:26:13.764902 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lxmjt" event={"ID":"7b45ec2d-5bea-4007-a49f-224a866f93eb","Type":"ContainerDied","Data":"835c6ec8dc7ae3785a23ae45e5c9dc4b3bcc24428ca4c3865a6dd790d5956e74"}
Oct 13 17:26:13 crc kubenswrapper[4720]: I1013 17:26:13.764947 4720 scope.go:117] "RemoveContainer" containerID="c00868f87f1a3020a1db7e50377bbe90203d6e93799c8ec9f3100dd52a6c0cf6"
Oct 13 17:26:13 crc kubenswrapper[4720]: I1013 17:26:13.765618 4720 scope.go:117] "RemoveContainer" containerID="835c6ec8dc7ae3785a23ae45e5c9dc4b3bcc24428ca4c3865a6dd790d5956e74"
Oct 13 17:26:13 crc kubenswrapper[4720]: E1013 17:26:13.765932 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-lxmjt_openshift-multus(7b45ec2d-5bea-4007-a49f-224a866f93eb)\"" pod="openshift-multus/multus-lxmjt" podUID="7b45ec2d-5bea-4007-a49f-224a866f93eb"
Oct 13 17:26:14 crc kubenswrapper[4720]: I1013 17:26:14.168092 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c6ntg"
Oct 13 17:26:14 crc kubenswrapper[4720]: I1013 17:26:14.168130 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 13 17:26:14 crc kubenswrapper[4720]: E1013 17:26:14.168625 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 13 17:26:14 crc kubenswrapper[4720]: E1013 17:26:14.168511 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c6ntg" podUID="c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61"
Oct 13 17:26:14 crc kubenswrapper[4720]: I1013 17:26:14.769732 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lxmjt_7b45ec2d-5bea-4007-a49f-224a866f93eb/kube-multus/1.log"
Oct 13 17:26:15 crc kubenswrapper[4720]: E1013 17:26:15.167134 4720 kubelet_node_status.go:497] "Node not becoming ready in time after startup"
Oct 13 17:26:15 crc kubenswrapper[4720]: I1013 17:26:15.167532 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 13 17:26:15 crc kubenswrapper[4720]: I1013 17:26:15.167639 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 13 17:26:15 crc kubenswrapper[4720]: E1013 17:26:15.169756 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 13 17:26:15 crc kubenswrapper[4720]: E1013 17:26:15.169896 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 13 17:26:15 crc kubenswrapper[4720]: E1013 17:26:15.244632 4720 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Oct 13 17:26:16 crc kubenswrapper[4720]: I1013 17:26:16.167876 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c6ntg"
Oct 13 17:26:16 crc kubenswrapper[4720]: I1013 17:26:16.167968 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 13 17:26:16 crc kubenswrapper[4720]: E1013 17:26:16.168392 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c6ntg" podUID="c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61"
Oct 13 17:26:16 crc kubenswrapper[4720]: E1013 17:26:16.168502 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 13 17:26:17 crc kubenswrapper[4720]: I1013 17:26:17.168334 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 13 17:26:17 crc kubenswrapper[4720]: I1013 17:26:17.168347 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 13 17:26:17 crc kubenswrapper[4720]: E1013 17:26:17.168511 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 13 17:26:17 crc kubenswrapper[4720]: E1013 17:26:17.168605 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 13 17:26:18 crc kubenswrapper[4720]: I1013 17:26:18.168016 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 13 17:26:18 crc kubenswrapper[4720]: I1013 17:26:18.168073 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c6ntg"
Oct 13 17:26:18 crc kubenswrapper[4720]: E1013 17:26:18.168157 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 13 17:26:18 crc kubenswrapper[4720]: E1013 17:26:18.168483 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c6ntg" podUID="c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61"
Oct 13 17:26:19 crc kubenswrapper[4720]: I1013 17:26:19.167545 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 13 17:26:19 crc kubenswrapper[4720]: E1013 17:26:19.167685 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 13 17:26:19 crc kubenswrapper[4720]: I1013 17:26:19.172940 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 13 17:26:19 crc kubenswrapper[4720]: E1013 17:26:19.173096 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 13 17:26:20 crc kubenswrapper[4720]: I1013 17:26:20.168299 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 13 17:26:20 crc kubenswrapper[4720]: I1013 17:26:20.168299 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c6ntg"
Oct 13 17:26:20 crc kubenswrapper[4720]: E1013 17:26:20.168614 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 13 17:26:20 crc kubenswrapper[4720]: E1013 17:26:20.168770 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c6ntg" podUID="c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61"
Oct 13 17:26:20 crc kubenswrapper[4720]: E1013 17:26:20.246058 4720 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Oct 13 17:26:21 crc kubenswrapper[4720]: I1013 17:26:21.168067 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 13 17:26:21 crc kubenswrapper[4720]: I1013 17:26:21.168118 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 13 17:26:21 crc kubenswrapper[4720]: E1013 17:26:21.168225 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 13 17:26:21 crc kubenswrapper[4720]: E1013 17:26:21.168383 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 13 17:26:22 crc kubenswrapper[4720]: I1013 17:26:22.168446 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c6ntg"
Oct 13 17:26:22 crc kubenswrapper[4720]: E1013 17:26:22.169402 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c6ntg" podUID="c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61"
Oct 13 17:26:22 crc kubenswrapper[4720]: I1013 17:26:22.168960 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 13 17:26:22 crc kubenswrapper[4720]: E1013 17:26:22.169791 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 13 17:26:23 crc kubenswrapper[4720]: I1013 17:26:23.167262 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 13 17:26:23 crc kubenswrapper[4720]: I1013 17:26:23.167390 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 13 17:26:23 crc kubenswrapper[4720]: E1013 17:26:23.167753 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 13 17:26:23 crc kubenswrapper[4720]: E1013 17:26:23.168331 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 13 17:26:24 crc kubenswrapper[4720]: I1013 17:26:24.167386 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c6ntg"
Oct 13 17:26:24 crc kubenswrapper[4720]: I1013 17:26:24.167440 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 13 17:26:24 crc kubenswrapper[4720]: E1013 17:26:24.167635 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 13 17:26:24 crc kubenswrapper[4720]: I1013 17:26:24.167772 4720 scope.go:117] "RemoveContainer" containerID="835c6ec8dc7ae3785a23ae45e5c9dc4b3bcc24428ca4c3865a6dd790d5956e74"
Oct 13 17:26:24 crc kubenswrapper[4720]: E1013 17:26:24.167796 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c6ntg" podUID="c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61"
Oct 13 17:26:24 crc kubenswrapper[4720]: I1013 17:26:24.807176 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lxmjt_7b45ec2d-5bea-4007-a49f-224a866f93eb/kube-multus/1.log"
Oct 13 17:26:24 crc kubenswrapper[4720]: I1013 17:26:24.807337 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lxmjt" event={"ID":"7b45ec2d-5bea-4007-a49f-224a866f93eb","Type":"ContainerStarted","Data":"8f158c84b32e0700f7fbee0860433b8a5b48b6d18ab5d0d8013224b5fa78ff3a"}
Oct 13 17:26:25 crc kubenswrapper[4720]: I1013 17:26:25.167877 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 13 17:26:25 crc kubenswrapper[4720]: E1013 17:26:25.169825 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 13 17:26:25 crc kubenswrapper[4720]: I1013 17:26:25.169873 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 13 17:26:25 crc kubenswrapper[4720]: E1013 17:26:25.170541 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 13 17:26:25 crc kubenswrapper[4720]: I1013 17:26:25.171066 4720 scope.go:117] "RemoveContainer" containerID="c62dd5ae9eb692f394556f2d0b70bf14a7c99fe714ad7084a5d08b324a4435de"
Oct 13 17:26:25 crc kubenswrapper[4720]: E1013 17:26:25.247424 4720 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Oct 13 17:26:25 crc kubenswrapper[4720]: I1013 17:26:25.813816 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pn6lz_8064812e-b6aa-4f56-81c9-16154c00abad/ovnkube-controller/3.log"
Oct 13 17:26:25 crc kubenswrapper[4720]: I1013 17:26:25.817444 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" event={"ID":"8064812e-b6aa-4f56-81c9-16154c00abad","Type":"ContainerStarted","Data":"2cba8cf1a0fc25e74ef53b5ad2f9b7078a9a2aada0f97b16b1dc4a11e0104571"}
Oct 13 17:26:25 crc kubenswrapper[4720]: I1013 17:26:25.818076 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz"
Oct 13 17:26:25 crc kubenswrapper[4720]: I1013 17:26:25.850896 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" podStartSLOduration=106.850872283 podStartE2EDuration="1m46.850872283s" podCreationTimestamp="2025-10-13 17:24:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:26:25.850269048 +0000 UTC m=+131.307519240" watchObservedRunningTime="2025-10-13 17:26:25.850872283 +0000 UTC m=+131.308122455"
Oct 13 17:26:25 crc kubenswrapper[4720]: I1013 17:26:25.994800 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-c6ntg"]
Oct 13 17:26:25 crc kubenswrapper[4720]: I1013 17:26:25.994963 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c6ntg"
Oct 13 17:26:25 crc kubenswrapper[4720]: E1013 17:26:25.995100 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c6ntg" podUID="c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61"
Oct 13 17:26:26 crc kubenswrapper[4720]: I1013 17:26:26.167699 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 13 17:26:26 crc kubenswrapper[4720]: E1013 17:26:26.167878 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 13 17:26:27 crc kubenswrapper[4720]: I1013 17:26:27.167437 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c6ntg"
Oct 13 17:26:27 crc kubenswrapper[4720]: I1013 17:26:27.167505 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 13 17:26:27 crc kubenswrapper[4720]: I1013 17:26:27.167509 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 13 17:26:27 crc kubenswrapper[4720]: E1013 17:26:27.167651 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c6ntg" podUID="c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61"
Oct 13 17:26:27 crc kubenswrapper[4720]: E1013 17:26:27.167780 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 13 17:26:27 crc kubenswrapper[4720]: E1013 17:26:27.167935 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 13 17:26:28 crc kubenswrapper[4720]: I1013 17:26:28.168220 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 13 17:26:28 crc kubenswrapper[4720]: E1013 17:26:28.168404 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 13 17:26:29 crc kubenswrapper[4720]: I1013 17:26:29.167607 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 13 17:26:29 crc kubenswrapper[4720]: I1013 17:26:29.167799 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 13 17:26:29 crc kubenswrapper[4720]: E1013 17:26:29.168094 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 13 17:26:29 crc kubenswrapper[4720]: E1013 17:26:29.168310 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 13 17:26:29 crc kubenswrapper[4720]: I1013 17:26:29.167875 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c6ntg"
Oct 13 17:26:29 crc kubenswrapper[4720]: E1013 17:26:29.168463 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-c6ntg" podUID="c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61"
Oct 13 17:26:30 crc kubenswrapper[4720]: I1013 17:26:30.167563 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 13 17:26:30 crc kubenswrapper[4720]: E1013 17:26:30.167694 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 13 17:26:31 crc kubenswrapper[4720]: I1013 17:26:31.167858 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 13 17:26:31 crc kubenswrapper[4720]: I1013 17:26:31.167859 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c6ntg"
Oct 13 17:26:31 crc kubenswrapper[4720]: I1013 17:26:31.167901 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 13 17:26:31 crc kubenswrapper[4720]: I1013 17:26:31.172257 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Oct 13 17:26:31 crc kubenswrapper[4720]: I1013 17:26:31.172319 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Oct 13 17:26:31 crc kubenswrapper[4720]: I1013 17:26:31.172464 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Oct 13 17:26:31 crc kubenswrapper[4720]: I1013 17:26:31.172542 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Oct 13 17:26:31 crc kubenswrapper[4720]: I1013 17:26:31.172961 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Oct 13 17:26:31 crc kubenswrapper[4720]: I1013 17:26:31.174673 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Oct 13 17:26:32 crc kubenswrapper[4720]: I1013 17:26:32.167064 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.759233 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.811782 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-mm8n9"]
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.812874 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-mm8n9"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.813758 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-dvc6z"]
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.814115 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-dvc6z"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.817059 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.817970 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-9mm85"]
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.818727 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9mm85"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.819140 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4vqw6"]
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.819728 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4vqw6"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.821964 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-xbb4t"]
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.822462 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-xbb4t"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.826652 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.834557 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.834962 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.835275 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.835495 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.835750 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.835921 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.835794 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.835845 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.836052 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.836285 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.836388 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.836637 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.836786 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.836938 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.836953 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.837055 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.837163 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.844736 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.846440 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.846727 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.847386 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.847794 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.848081 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.848469 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.848498 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.848529 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.848717 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.849138 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.849374 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.849614 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.850242 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.850477 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.851320 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.871264 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-p8hj6"]
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.872080 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p8hj6"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.874558 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.875085 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.875354 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.876719 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.879111 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-lwp58"]
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.880486 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-lwp58"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.888453 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.888931 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.890682 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.890803 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.890875 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.890905 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.890681 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.890759 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.890814 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.890822 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.891383 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.891641 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-mg7s9"]
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.893066 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.893254 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.894836 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.898138 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.900507 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-5cfzh"]
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.902311 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mg7s9"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.902578 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-jrc4l"]
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.903009 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-8c7d6"]
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.903505 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-5cfzh"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.903743 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-8c7d6"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.904458 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-jrc4l"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.905573 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fvlfw"]
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.906612 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fvlfw"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.907440 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.913791 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-pc8vw"]
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.920973 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.921999 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.922584 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.922671 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.922774 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.922846 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.923917 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.925282 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.928667 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.928810 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.928838 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.928854 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.929218 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.929265 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.930059 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.930113 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.930139 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.930503 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.930869 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.930907 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.931294 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.931399 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.931507 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.931901 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.932219 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.932840 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.938916 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.940144 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.954981 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jd5bf"]
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.955788 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-l8tzk"]
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.956152 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-w5xfl"]
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.956660 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-w5xfl"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.957099 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pc8vw"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.957331 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jd5bf"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.959360 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-l8tzk"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.959994 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.964182 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-9mm85"]
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.964235 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.964426 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.964662 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.964761 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.968928 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4vqw6"]
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.968981 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-qnvwg"]
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.969546 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-dvc6z"]
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.969639 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-qnvwg"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.970737 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.973346 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.973772 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-7vb4h"]
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.974467 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.974710 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.974771 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.974878 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.975007 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.975095 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.975163 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.975271 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.975407 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.975583 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-lwp58"]
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.975658 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-mm8n9"]
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.975683 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.975747 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7vb4h"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.976234 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.976555 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.977645 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e7db006-6ea5-44bb-89ad-0d6a6a4810ca-trusted-ca-bundle\") pod \"apiserver-76f77b778f-mm8n9\" (UID: \"7e7db006-6ea5-44bb-89ad-0d6a6a4810ca\") " pod="openshift-apiserver/apiserver-76f77b778f-mm8n9"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.977716 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5179459-d832-4419-96ce-44dd4f055e98-config\") pod \"route-controller-manager-6576b87f9c-9mm85\" (UID: \"b5179459-d832-4419-96ce-44dd4f055e98\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9mm85"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.977749 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r28hp\" (UniqueName: \"kubernetes.io/projected/3a4c2111-a5d9-4567-91c6-59733a5d711f-kube-api-access-r28hp\") pod \"openshift-apiserver-operator-796bbdcf4f-4vqw6\" (UID: \"3a4c2111-a5d9-4567-91c6-59733a5d711f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4vqw6"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.977780 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/36b6918f-fd99-4666-982d-7636ea1d9a1c-machine-approver-tls\") pod \"machine-approver-56656f9798-p8hj6\" (UID: \"36b6918f-fd99-4666-982d-7636ea1d9a1c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p8hj6"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.977815 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zb6ms\" (UniqueName: \"kubernetes.io/projected/7e7db006-6ea5-44bb-89ad-0d6a6a4810ca-kube-api-access-zb6ms\") pod \"apiserver-76f77b778f-mm8n9\" (UID: \"7e7db006-6ea5-44bb-89ad-0d6a6a4810ca\") " pod="openshift-apiserver/apiserver-76f77b778f-mm8n9"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.977840 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/7e7db006-6ea5-44bb-89ad-0d6a6a4810ca-audit\") pod \"apiserver-76f77b778f-mm8n9\" (UID: \"7e7db006-6ea5-44bb-89ad-0d6a6a4810ca\") " pod="openshift-apiserver/apiserver-76f77b778f-mm8n9"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.977867 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b5179459-d832-4419-96ce-44dd4f055e98-client-ca\") pod \"route-controller-manager-6576b87f9c-9mm85\" (UID: \"b5179459-d832-4419-96ce-44dd4f055e98\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9mm85"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.977915 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7e7db006-6ea5-44bb-89ad-0d6a6a4810ca-encryption-config\") pod \"apiserver-76f77b778f-mm8n9\" (UID: \"7e7db006-6ea5-44bb-89ad-0d6a6a4810ca\") " pod="openshift-apiserver/apiserver-76f77b778f-mm8n9"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.977940 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5abe378d-2b00-4d15-af94-5141934fca47-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-xbb4t\" (UID: \"5abe378d-2b00-4d15-af94-5141934fca47\") " pod="openshift-authentication/oauth-openshift-558db77b4-xbb4t"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.977964 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5abe378d-2b00-4d15-af94-5141934fca47-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-xbb4t\" (UID: \"5abe378d-2b00-4d15-af94-5141934fca47\") " pod="openshift-authentication/oauth-openshift-558db77b4-xbb4t"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.977992 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5abe378d-2b00-4d15-af94-5141934fca47-audit-dir\") pod \"oauth-openshift-558db77b4-xbb4t\" (UID: \"5abe378d-2b00-4d15-af94-5141934fca47\") " pod="openshift-authentication/oauth-openshift-558db77b4-xbb4t"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.978019 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5abe378d-2b00-4d15-af94-5141934fca47-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-xbb4t\" (UID: \"5abe378d-2b00-4d15-af94-5141934fca47\") " pod="openshift-authentication/oauth-openshift-558db77b4-xbb4t"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.978072 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndkph\" (UniqueName: \"kubernetes.io/projected/36b6918f-fd99-4666-982d-7636ea1d9a1c-kube-api-access-ndkph\") pod \"machine-approver-56656f9798-p8hj6\" (UID: \"36b6918f-fd99-4666-982d-7636ea1d9a1c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p8hj6"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.978156 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5abe378d-2b00-4d15-af94-5141934fca47-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-xbb4t\" (UID: \"5abe378d-2b00-4d15-af94-5141934fca47\") " pod="openshift-authentication/oauth-openshift-558db77b4-xbb4t"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.978263 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e7db006-6ea5-44bb-89ad-0d6a6a4810ca-serving-cert\") pod \"apiserver-76f77b778f-mm8n9\" (UID: \"7e7db006-6ea5-44bb-89ad-0d6a6a4810ca\") " pod="openshift-apiserver/apiserver-76f77b778f-mm8n9"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.978353 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5abe378d-2b00-4d15-af94-5141934fca47-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-xbb4t\" (UID: \"5abe378d-2b00-4d15-af94-5141934fca47\") " pod="openshift-authentication/oauth-openshift-558db77b4-xbb4t"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.978432 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5abe378d-2b00-4d15-af94-5141934fca47-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-xbb4t\" (UID: \"5abe378d-2b00-4d15-af94-5141934fca47\") " pod="openshift-authentication/oauth-openshift-558db77b4-xbb4t"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.978509 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/43c40b45-9695-4d29-b627-c4ab23d1d6d0-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-dvc6z\" (UID: \"43c40b45-9695-4d29-b627-c4ab23d1d6d0\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dvc6z"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.978675 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzzrp\" (UniqueName: \"kubernetes.io/projected/b5179459-d832-4419-96ce-44dd4f055e98-kube-api-access-rzzrp\") pod \"route-controller-manager-6576b87f9c-9mm85\" (UID: \"b5179459-d832-4419-96ce-44dd4f055e98\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9mm85"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.978712 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5abe378d-2b00-4d15-af94-5141934fca47-audit-policies\") pod \"oauth-openshift-558db77b4-xbb4t\" (UID: \"5abe378d-2b00-4d15-af94-5141934fca47\") " pod="openshift-authentication/oauth-openshift-558db77b4-xbb4t"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.978742 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a4c2111-a5d9-4567-91c6-59733a5d711f-config\") pod \"openshift-apiserver-operator-796bbdcf4f-4vqw6\" (UID: \"3a4c2111-a5d9-4567-91c6-59733a5d711f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4vqw6"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.978771 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5abe378d-2b00-4d15-af94-5141934fca47-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-xbb4t\" (UID: \"5abe378d-2b00-4d15-af94-5141934fca47\") " pod="openshift-authentication/oauth-openshift-558db77b4-xbb4t"
Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.978800 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5abe378d-2b00-4d15-af94-5141934fca47-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-xbb4t\" (UID: \"5abe378d-2b00-4d15-af94-5141934fca47\") "
pod="openshift-authentication/oauth-openshift-558db77b4-xbb4t" Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.978830 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a4c2111-a5d9-4567-91c6-59733a5d711f-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-4vqw6\" (UID: \"3a4c2111-a5d9-4567-91c6-59733a5d711f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4vqw6" Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.978858 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b5179459-d832-4419-96ce-44dd4f055e98-serving-cert\") pod \"route-controller-manager-6576b87f9c-9mm85\" (UID: \"b5179459-d832-4419-96ce-44dd4f055e98\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9mm85" Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.978886 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5abe378d-2b00-4d15-af94-5141934fca47-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-xbb4t\" (UID: \"5abe378d-2b00-4d15-af94-5141934fca47\") " pod="openshift-authentication/oauth-openshift-558db77b4-xbb4t" Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.978916 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krpdj\" (UniqueName: \"kubernetes.io/projected/43c40b45-9695-4d29-b627-c4ab23d1d6d0-kube-api-access-krpdj\") pod \"machine-api-operator-5694c8668f-dvc6z\" (UID: \"43c40b45-9695-4d29-b627-c4ab23d1d6d0\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dvc6z" Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.978943 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5abe378d-2b00-4d15-af94-5141934fca47-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-xbb4t\" (UID: \"5abe378d-2b00-4d15-af94-5141934fca47\") " pod="openshift-authentication/oauth-openshift-558db77b4-xbb4t" Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.978983 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7e7db006-6ea5-44bb-89ad-0d6a6a4810ca-etcd-client\") pod \"apiserver-76f77b778f-mm8n9\" (UID: \"7e7db006-6ea5-44bb-89ad-0d6a6a4810ca\") " pod="openshift-apiserver/apiserver-76f77b778f-mm8n9" Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.979007 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7e7db006-6ea5-44bb-89ad-0d6a6a4810ca-audit-dir\") pod \"apiserver-76f77b778f-mm8n9\" (UID: \"7e7db006-6ea5-44bb-89ad-0d6a6a4810ca\") " pod="openshift-apiserver/apiserver-76f77b778f-mm8n9" Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.979038 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5abe378d-2b00-4d15-af94-5141934fca47-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-xbb4t\" (UID: 
\"5abe378d-2b00-4d15-af94-5141934fca47\") " pod="openshift-authentication/oauth-openshift-558db77b4-xbb4t" Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.979071 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36b6918f-fd99-4666-982d-7636ea1d9a1c-config\") pod \"machine-approver-56656f9798-p8hj6\" (UID: \"36b6918f-fd99-4666-982d-7636ea1d9a1c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p8hj6" Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.979096 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/7e7db006-6ea5-44bb-89ad-0d6a6a4810ca-image-import-ca\") pod \"apiserver-76f77b778f-mm8n9\" (UID: \"7e7db006-6ea5-44bb-89ad-0d6a6a4810ca\") " pod="openshift-apiserver/apiserver-76f77b778f-mm8n9" Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.979124 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43c40b45-9695-4d29-b627-c4ab23d1d6d0-config\") pod \"machine-api-operator-5694c8668f-dvc6z\" (UID: \"43c40b45-9695-4d29-b627-c4ab23d1d6d0\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dvc6z" Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.979152 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7e7db006-6ea5-44bb-89ad-0d6a6a4810ca-etcd-serving-ca\") pod \"apiserver-76f77b778f-mm8n9\" (UID: \"7e7db006-6ea5-44bb-89ad-0d6a6a4810ca\") " pod="openshift-apiserver/apiserver-76f77b778f-mm8n9" Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.979181 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pm6tt\" (UniqueName: \"kubernetes.io/projected/5abe378d-2b00-4d15-af94-5141934fca47-kube-api-access-pm6tt\") pod \"oauth-openshift-558db77b4-xbb4t\" (UID: \"5abe378d-2b00-4d15-af94-5141934fca47\") " pod="openshift-authentication/oauth-openshift-558db77b4-xbb4t" Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.979243 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e7db006-6ea5-44bb-89ad-0d6a6a4810ca-config\") pod \"apiserver-76f77b778f-mm8n9\" (UID: \"7e7db006-6ea5-44bb-89ad-0d6a6a4810ca\") " pod="openshift-apiserver/apiserver-76f77b778f-mm8n9" Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.979271 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7e7db006-6ea5-44bb-89ad-0d6a6a4810ca-node-pullsecrets\") pod \"apiserver-76f77b778f-mm8n9\" (UID: \"7e7db006-6ea5-44bb-89ad-0d6a6a4810ca\") " pod="openshift-apiserver/apiserver-76f77b778f-mm8n9" Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.978631 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.979667 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/43c40b45-9695-4d29-b627-c4ab23d1d6d0-images\") pod \"machine-api-operator-5694c8668f-dvc6z\" (UID: 
\"43c40b45-9695-4d29-b627-c4ab23d1d6d0\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dvc6z" Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.979710 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/36b6918f-fd99-4666-982d-7636ea1d9a1c-auth-proxy-config\") pod \"machine-approver-56656f9798-p8hj6\" (UID: \"36b6918f-fd99-4666-982d-7636ea1d9a1c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p8hj6" Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.979881 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-l8tzk"] Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.979140 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.985981 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-8c7d6"] Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.986955 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-5cfzh"] Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.987804 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sxcr7"] Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.988365 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sxcr7" Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.988774 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5kh7f"] Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.989162 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5kh7f" Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.989441 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dl79l"] Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.990967 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-jvfrr"] Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.992755 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dl79l" Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.993859 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4x4s2"] Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.994020 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-jvfrr" Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.994137 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-jw72g"] Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.994259 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4x4s2" Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.994357 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Oct 13 17:26:33 crc kubenswrapper[4720]: I1013 17:26:33.995116 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jw72g" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.010979 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.011626 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-b5ljk"] Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.013372 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b5ljk" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.019377 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.022548 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mfblh"] Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.028143 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-vttl9"] Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.028495 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mfblh" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.028938 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-vttl9" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.029669 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-hjhns"] Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.033750 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-hjhns" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.039285 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.042697 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-56p82"] Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.043700 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7xzcq"] Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.044252 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-56p82" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.044757 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7xzcq" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.047762 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-jrc4l"] Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.049531 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-vm4rv"] Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.050505 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vm4rv" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.051125 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-mg7s9"] Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.052351 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vhgw6"] Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.052946 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vhgw6" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.053408 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-qnvwg"] Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.054357 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-xbb4t"] Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.055520 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.055540 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-w5xfl"] Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.056503 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mjzm6"] Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.057168 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mjzm6" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.057853 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29339595-f78vr"] Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.058642 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29339595-f78vr" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.058792 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xpdm2"] Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.059781 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-xpdm2" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.059900 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9klwx"] Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.060426 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9klwx" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.061280 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-9snlb"] Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.061973 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-9snlb" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.062812 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-5tn27"] Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.063230 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-5tn27" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.064144 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jd5bf"] Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.065436 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-5tn27"] Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.066405 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-jw72g"] Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.067333 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-9tdzd"] Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.068142 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-9tdzd" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.068312 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mfblh"] Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.069298 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fvlfw"] Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.070215 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-pc8vw"] Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.072223 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sxcr7"] Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.073268 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-scfnf"] Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.073922 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-scfnf" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.074382 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-7vb4h"] Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.075297 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.075331 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-9snlb"] Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.076442 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dl79l"] Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.077567 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vhgw6"] Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.078616 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-jvfrr"] Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.079644 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-hjhns"] Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.080271 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36b6918f-fd99-4666-982d-7636ea1d9a1c-config\") pod \"machine-approver-56656f9798-p8hj6\" (UID: \"36b6918f-fd99-4666-982d-7636ea1d9a1c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p8hj6" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.080309 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43c40b45-9695-4d29-b627-c4ab23d1d6d0-config\") pod \"machine-api-operator-5694c8668f-dvc6z\" (UID: \"43c40b45-9695-4d29-b627-c4ab23d1d6d0\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dvc6z" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.080334 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7e7db006-6ea5-44bb-89ad-0d6a6a4810ca-etcd-serving-ca\") pod \"apiserver-76f77b778f-mm8n9\" (UID: \"7e7db006-6ea5-44bb-89ad-0d6a6a4810ca\") " pod="openshift-apiserver/apiserver-76f77b778f-mm8n9" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.080359 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/7e7db006-6ea5-44bb-89ad-0d6a6a4810ca-image-import-ca\") pod \"apiserver-76f77b778f-mm8n9\" (UID: \"7e7db006-6ea5-44bb-89ad-0d6a6a4810ca\") " pod="openshift-apiserver/apiserver-76f77b778f-mm8n9" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.080385 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pm6tt\" (UniqueName: \"kubernetes.io/projected/5abe378d-2b00-4d15-af94-5141934fca47-kube-api-access-pm6tt\") pod \"oauth-openshift-558db77b4-xbb4t\" (UID: \"5abe378d-2b00-4d15-af94-5141934fca47\") " pod="openshift-authentication/oauth-openshift-558db77b4-xbb4t" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.080408 4720 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e7db006-6ea5-44bb-89ad-0d6a6a4810ca-config\") pod \"apiserver-76f77b778f-mm8n9\" (UID: \"7e7db006-6ea5-44bb-89ad-0d6a6a4810ca\") " pod="openshift-apiserver/apiserver-76f77b778f-mm8n9" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.080434 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/43c40b45-9695-4d29-b627-c4ab23d1d6d0-images\") pod \"machine-api-operator-5694c8668f-dvc6z\" (UID: \"43c40b45-9695-4d29-b627-c4ab23d1d6d0\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dvc6z" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.080454 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/36b6918f-fd99-4666-982d-7636ea1d9a1c-auth-proxy-config\") pod \"machine-approver-56656f9798-p8hj6\" (UID: \"36b6918f-fd99-4666-982d-7636ea1d9a1c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p8hj6" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.080476 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7e7db006-6ea5-44bb-89ad-0d6a6a4810ca-node-pullsecrets\") pod \"apiserver-76f77b778f-mm8n9\" (UID: \"7e7db006-6ea5-44bb-89ad-0d6a6a4810ca\") " pod="openshift-apiserver/apiserver-76f77b778f-mm8n9" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.080510 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e7db006-6ea5-44bb-89ad-0d6a6a4810ca-trusted-ca-bundle\") pod \"apiserver-76f77b778f-mm8n9\" (UID: \"7e7db006-6ea5-44bb-89ad-0d6a6a4810ca\") " pod="openshift-apiserver/apiserver-76f77b778f-mm8n9" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.080533 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5179459-d832-4419-96ce-44dd4f055e98-config\") pod \"route-controller-manager-6576b87f9c-9mm85\" (UID: \"b5179459-d832-4419-96ce-44dd4f055e98\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9mm85" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.080557 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r28hp\" (UniqueName: \"kubernetes.io/projected/3a4c2111-a5d9-4567-91c6-59733a5d711f-kube-api-access-r28hp\") pod \"openshift-apiserver-operator-796bbdcf4f-4vqw6\" (UID: \"3a4c2111-a5d9-4567-91c6-59733a5d711f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4vqw6" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.080579 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/36b6918f-fd99-4666-982d-7636ea1d9a1c-machine-approver-tls\") pod \"machine-approver-56656f9798-p8hj6\" (UID: \"36b6918f-fd99-4666-982d-7636ea1d9a1c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p8hj6" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.080607 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zb6ms\" (UniqueName: \"kubernetes.io/projected/7e7db006-6ea5-44bb-89ad-0d6a6a4810ca-kube-api-access-zb6ms\") pod 
\"apiserver-76f77b778f-mm8n9\" (UID: \"7e7db006-6ea5-44bb-89ad-0d6a6a4810ca\") " pod="openshift-apiserver/apiserver-76f77b778f-mm8n9" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.080626 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/7e7db006-6ea5-44bb-89ad-0d6a6a4810ca-audit\") pod \"apiserver-76f77b778f-mm8n9\" (UID: \"7e7db006-6ea5-44bb-89ad-0d6a6a4810ca\") " pod="openshift-apiserver/apiserver-76f77b778f-mm8n9" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.080653 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b5179459-d832-4419-96ce-44dd4f055e98-client-ca\") pod \"route-controller-manager-6576b87f9c-9mm85\" (UID: \"b5179459-d832-4419-96ce-44dd4f055e98\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9mm85" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.080697 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5abe378d-2b00-4d15-af94-5141934fca47-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-xbb4t\" (UID: \"5abe378d-2b00-4d15-af94-5141934fca47\") " pod="openshift-authentication/oauth-openshift-558db77b4-xbb4t" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.080731 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5abe378d-2b00-4d15-af94-5141934fca47-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-xbb4t\" (UID: \"5abe378d-2b00-4d15-af94-5141934fca47\") " pod="openshift-authentication/oauth-openshift-558db77b4-xbb4t" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.080764 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7e7db006-6ea5-44bb-89ad-0d6a6a4810ca-encryption-config\") pod \"apiserver-76f77b778f-mm8n9\" (UID: \"7e7db006-6ea5-44bb-89ad-0d6a6a4810ca\") " pod="openshift-apiserver/apiserver-76f77b778f-mm8n9" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.080789 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5abe378d-2b00-4d15-af94-5141934fca47-audit-dir\") pod \"oauth-openshift-558db77b4-xbb4t\" (UID: \"5abe378d-2b00-4d15-af94-5141934fca47\") " pod="openshift-authentication/oauth-openshift-558db77b4-xbb4t" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.080812 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5abe378d-2b00-4d15-af94-5141934fca47-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-xbb4t\" (UID: \"5abe378d-2b00-4d15-af94-5141934fca47\") " pod="openshift-authentication/oauth-openshift-558db77b4-xbb4t" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.080836 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndkph\" (UniqueName: \"kubernetes.io/projected/36b6918f-fd99-4666-982d-7636ea1d9a1c-kube-api-access-ndkph\") pod \"machine-approver-56656f9798-p8hj6\" (UID: \"36b6918f-fd99-4666-982d-7636ea1d9a1c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p8hj6" Oct 13 17:26:34 crc 
kubenswrapper[4720]: I1013 17:26:34.080874 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5abe378d-2b00-4d15-af94-5141934fca47-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-xbb4t\" (UID: \"5abe378d-2b00-4d15-af94-5141934fca47\") " pod="openshift-authentication/oauth-openshift-558db77b4-xbb4t" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.080922 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e7db006-6ea5-44bb-89ad-0d6a6a4810ca-serving-cert\") pod \"apiserver-76f77b778f-mm8n9\" (UID: \"7e7db006-6ea5-44bb-89ad-0d6a6a4810ca\") " pod="openshift-apiserver/apiserver-76f77b778f-mm8n9" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.080946 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5abe378d-2b00-4d15-af94-5141934fca47-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-xbb4t\" (UID: \"5abe378d-2b00-4d15-af94-5141934fca47\") " pod="openshift-authentication/oauth-openshift-558db77b4-xbb4t" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.080971 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5abe378d-2b00-4d15-af94-5141934fca47-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-xbb4t\" (UID: \"5abe378d-2b00-4d15-af94-5141934fca47\") " pod="openshift-authentication/oauth-openshift-558db77b4-xbb4t" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.080997 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/43c40b45-9695-4d29-b627-c4ab23d1d6d0-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-dvc6z\" (UID: \"43c40b45-9695-4d29-b627-c4ab23d1d6d0\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dvc6z" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.081020 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzzrp\" (UniqueName: \"kubernetes.io/projected/b5179459-d832-4419-96ce-44dd4f055e98-kube-api-access-rzzrp\") pod \"route-controller-manager-6576b87f9c-9mm85\" (UID: \"b5179459-d832-4419-96ce-44dd4f055e98\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9mm85" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.081044 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5abe378d-2b00-4d15-af94-5141934fca47-audit-policies\") pod \"oauth-openshift-558db77b4-xbb4t\" (UID: \"5abe378d-2b00-4d15-af94-5141934fca47\") " pod="openshift-authentication/oauth-openshift-558db77b4-xbb4t" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.081081 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a4c2111-a5d9-4567-91c6-59733a5d711f-config\") pod \"openshift-apiserver-operator-796bbdcf4f-4vqw6\" (UID: \"3a4c2111-a5d9-4567-91c6-59733a5d711f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4vqw6" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.081102 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5abe378d-2b00-4d15-af94-5141934fca47-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-xbb4t\" (UID: \"5abe378d-2b00-4d15-af94-5141934fca47\") " pod="openshift-authentication/oauth-openshift-558db77b4-xbb4t" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.081126 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5abe378d-2b00-4d15-af94-5141934fca47-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-xbb4t\" (UID: \"5abe378d-2b00-4d15-af94-5141934fca47\") " pod="openshift-authentication/oauth-openshift-558db77b4-xbb4t" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.081142 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36b6918f-fd99-4666-982d-7636ea1d9a1c-config\") pod \"machine-approver-56656f9798-p8hj6\" (UID: \"36b6918f-fd99-4666-982d-7636ea1d9a1c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p8hj6" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.081150 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b5179459-d832-4419-96ce-44dd4f055e98-serving-cert\") pod \"route-controller-manager-6576b87f9c-9mm85\" (UID: \"b5179459-d832-4419-96ce-44dd4f055e98\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9mm85" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.081252 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a4c2111-a5d9-4567-91c6-59733a5d711f-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-4vqw6\" (UID: \"3a4c2111-a5d9-4567-91c6-59733a5d711f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4vqw6" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.081844 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5abe378d-2b00-4d15-af94-5141934fca47-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-xbb4t\" (UID: \"5abe378d-2b00-4d15-af94-5141934fca47\") " pod="openshift-authentication/oauth-openshift-558db77b4-xbb4t" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.081992 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krpdj\" (UniqueName: \"kubernetes.io/projected/43c40b45-9695-4d29-b627-c4ab23d1d6d0-kube-api-access-krpdj\") pod \"machine-api-operator-5694c8668f-dvc6z\" (UID: \"43c40b45-9695-4d29-b627-c4ab23d1d6d0\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dvc6z" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.082680 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/36b6918f-fd99-4666-982d-7636ea1d9a1c-auth-proxy-config\") pod \"machine-approver-56656f9798-p8hj6\" (UID: \"36b6918f-fd99-4666-982d-7636ea1d9a1c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p8hj6" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.082716 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/5abe378d-2b00-4d15-af94-5141934fca47-audit-dir\") pod \"oauth-openshift-558db77b4-xbb4t\" (UID: \"5abe378d-2b00-4d15-af94-5141934fca47\") " pod="openshift-authentication/oauth-openshift-558db77b4-xbb4t" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.082688 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e7db006-6ea5-44bb-89ad-0d6a6a4810ca-config\") pod \"apiserver-76f77b778f-mm8n9\" (UID: \"7e7db006-6ea5-44bb-89ad-0d6a6a4810ca\") " pod="openshift-apiserver/apiserver-76f77b778f-mm8n9" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.082743 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43c40b45-9695-4d29-b627-c4ab23d1d6d0-config\") pod \"machine-api-operator-5694c8668f-dvc6z\" (UID: \"43c40b45-9695-4d29-b627-c4ab23d1d6d0\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dvc6z" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.082825 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7e7db006-6ea5-44bb-89ad-0d6a6a4810ca-node-pullsecrets\") pod \"apiserver-76f77b778f-mm8n9\" (UID: \"7e7db006-6ea5-44bb-89ad-0d6a6a4810ca\") " pod="openshift-apiserver/apiserver-76f77b778f-mm8n9" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.083355 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5kh7f"] Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.083599 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/43c40b45-9695-4d29-b627-c4ab23d1d6d0-images\") pod \"machine-api-operator-5694c8668f-dvc6z\" (UID: \"43c40b45-9695-4d29-b627-c4ab23d1d6d0\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dvc6z" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.084133 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5abe378d-2b00-4d15-af94-5141934fca47-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-xbb4t\" (UID: \"5abe378d-2b00-4d15-af94-5141934fca47\") " pod="openshift-authentication/oauth-openshift-558db77b4-xbb4t" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.084330 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/7e7db006-6ea5-44bb-89ad-0d6a6a4810ca-audit\") pod \"apiserver-76f77b778f-mm8n9\" (UID: \"7e7db006-6ea5-44bb-89ad-0d6a6a4810ca\") " pod="openshift-apiserver/apiserver-76f77b778f-mm8n9" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.084922 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e7db006-6ea5-44bb-89ad-0d6a6a4810ca-trusted-ca-bundle\") pod \"apiserver-76f77b778f-mm8n9\" (UID: \"7e7db006-6ea5-44bb-89ad-0d6a6a4810ca\") " pod="openshift-apiserver/apiserver-76f77b778f-mm8n9" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.084940 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/7e7db006-6ea5-44bb-89ad-0d6a6a4810ca-image-import-ca\") pod \"apiserver-76f77b778f-mm8n9\" (UID: \"7e7db006-6ea5-44bb-89ad-0d6a6a4810ca\") " 
pod="openshift-apiserver/apiserver-76f77b778f-mm8n9" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.085538 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5179459-d832-4419-96ce-44dd4f055e98-config\") pod \"route-controller-manager-6576b87f9c-9mm85\" (UID: \"b5179459-d832-4419-96ce-44dd4f055e98\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9mm85" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.086291 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a4c2111-a5d9-4567-91c6-59733a5d711f-config\") pod \"openshift-apiserver-operator-796bbdcf4f-4vqw6\" (UID: \"3a4c2111-a5d9-4567-91c6-59733a5d711f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4vqw6" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.086295 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b5179459-d832-4419-96ce-44dd4f055e98-client-ca\") pod \"route-controller-manager-6576b87f9c-9mm85\" (UID: \"b5179459-d832-4419-96ce-44dd4f055e98\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9mm85" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.086879 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5abe378d-2b00-4d15-af94-5141934fca47-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-xbb4t\" (UID: \"5abe378d-2b00-4d15-af94-5141934fca47\") " pod="openshift-authentication/oauth-openshift-558db77b4-xbb4t" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.089873 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5abe378d-2b00-4d15-af94-5141934fca47-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-xbb4t\" (UID: \"5abe378d-2b00-4d15-af94-5141934fca47\") " pod="openshift-authentication/oauth-openshift-558db77b4-xbb4t" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.090227 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-b5ljk"] Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.090283 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5abe378d-2b00-4d15-af94-5141934fca47-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-xbb4t\" (UID: \"5abe378d-2b00-4d15-af94-5141934fca47\") " pod="openshift-authentication/oauth-openshift-558db77b4-xbb4t" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.090300 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-vm4rv"] Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.090362 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-56p82"] Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.090811 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/36b6918f-fd99-4666-982d-7636ea1d9a1c-machine-approver-tls\") pod \"machine-approver-56656f9798-p8hj6\" (UID: 
\"36b6918f-fd99-4666-982d-7636ea1d9a1c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p8hj6" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.091341 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5abe378d-2b00-4d15-af94-5141934fca47-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-xbb4t\" (UID: \"5abe378d-2b00-4d15-af94-5141934fca47\") " pod="openshift-authentication/oauth-openshift-558db77b4-xbb4t" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.091580 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e7db006-6ea5-44bb-89ad-0d6a6a4810ca-serving-cert\") pod \"apiserver-76f77b778f-mm8n9\" (UID: \"7e7db006-6ea5-44bb-89ad-0d6a6a4810ca\") " pod="openshift-apiserver/apiserver-76f77b778f-mm8n9" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.091798 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5abe378d-2b00-4d15-af94-5141934fca47-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-xbb4t\" (UID: \"5abe378d-2b00-4d15-af94-5141934fca47\") " pod="openshift-authentication/oauth-openshift-558db77b4-xbb4t" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.091921 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7e7db006-6ea5-44bb-89ad-0d6a6a4810ca-etcd-client\") pod \"apiserver-76f77b778f-mm8n9\" (UID: \"7e7db006-6ea5-44bb-89ad-0d6a6a4810ca\") " pod="openshift-apiserver/apiserver-76f77b778f-mm8n9" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.092000 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5abe378d-2b00-4d15-af94-5141934fca47-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-xbb4t\" (UID: \"5abe378d-2b00-4d15-af94-5141934fca47\") " pod="openshift-authentication/oauth-openshift-558db77b4-xbb4t" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.092054 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7e7db006-6ea5-44bb-89ad-0d6a6a4810ca-audit-dir\") pod \"apiserver-76f77b778f-mm8n9\" (UID: \"7e7db006-6ea5-44bb-89ad-0d6a6a4810ca\") " pod="openshift-apiserver/apiserver-76f77b778f-mm8n9" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.092144 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7e7db006-6ea5-44bb-89ad-0d6a6a4810ca-audit-dir\") pod \"apiserver-76f77b778f-mm8n9\" (UID: \"7e7db006-6ea5-44bb-89ad-0d6a6a4810ca\") " pod="openshift-apiserver/apiserver-76f77b778f-mm8n9" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.092513 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/43c40b45-9695-4d29-b627-c4ab23d1d6d0-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-dvc6z\" (UID: \"43c40b45-9695-4d29-b627-c4ab23d1d6d0\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dvc6z" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.092675 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5abe378d-2b00-4d15-af94-5141934fca47-audit-policies\") pod \"oauth-openshift-558db77b4-xbb4t\" (UID: \"5abe378d-2b00-4d15-af94-5141934fca47\") " pod="openshift-authentication/oauth-openshift-558db77b4-xbb4t" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.092861 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b5179459-d832-4419-96ce-44dd4f055e98-serving-cert\") pod \"route-controller-manager-6576b87f9c-9mm85\" (UID: \"b5179459-d832-4419-96ce-44dd4f055e98\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9mm85" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.092955 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7e7db006-6ea5-44bb-89ad-0d6a6a4810ca-encryption-config\") pod \"apiserver-76f77b778f-mm8n9\" (UID: \"7e7db006-6ea5-44bb-89ad-0d6a6a4810ca\") " pod="openshift-apiserver/apiserver-76f77b778f-mm8n9" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.093393 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7e7db006-6ea5-44bb-89ad-0d6a6a4810ca-etcd-serving-ca\") pod \"apiserver-76f77b778f-mm8n9\" (UID: \"7e7db006-6ea5-44bb-89ad-0d6a6a4810ca\") " pod="openshift-apiserver/apiserver-76f77b778f-mm8n9" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.093468 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5abe378d-2b00-4d15-af94-5141934fca47-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-xbb4t\" (UID: \"5abe378d-2b00-4d15-af94-5141934fca47\") " pod="openshift-authentication/oauth-openshift-558db77b4-xbb4t" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.093794 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5abe378d-2b00-4d15-af94-5141934fca47-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-xbb4t\" (UID: \"5abe378d-2b00-4d15-af94-5141934fca47\") " pod="openshift-authentication/oauth-openshift-558db77b4-xbb4t" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.093943 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5abe378d-2b00-4d15-af94-5141934fca47-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-xbb4t\" (UID: \"5abe378d-2b00-4d15-af94-5141934fca47\") " pod="openshift-authentication/oauth-openshift-558db77b4-xbb4t" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.094027 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-9tdzd"] Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.097058 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7xzcq"] Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.100475 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5abe378d-2b00-4d15-af94-5141934fca47-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-xbb4t\" (UID: \"5abe378d-2b00-4d15-af94-5141934fca47\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-xbb4t" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.100827 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a4c2111-a5d9-4567-91c6-59733a5d711f-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-4vqw6\" (UID: \"3a4c2111-a5d9-4567-91c6-59733a5d711f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4vqw6" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.101245 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7e7db006-6ea5-44bb-89ad-0d6a6a4810ca-etcd-client\") pod \"apiserver-76f77b778f-mm8n9\" (UID: \"7e7db006-6ea5-44bb-89ad-0d6a6a4810ca\") " pod="openshift-apiserver/apiserver-76f77b778f-mm8n9" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.101440 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5abe378d-2b00-4d15-af94-5141934fca47-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-xbb4t\" (UID: \"5abe378d-2b00-4d15-af94-5141934fca47\") " pod="openshift-authentication/oauth-openshift-558db77b4-xbb4t" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.102724 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4x4s2"] Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.102756 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5abe378d-2b00-4d15-af94-5141934fca47-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-xbb4t\" (UID: \"5abe378d-2b00-4d15-af94-5141934fca47\") " pod="openshift-authentication/oauth-openshift-558db77b4-xbb4t" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.103784 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29339595-f78vr"] Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.105333 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mjzm6"] Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.106823 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xpdm2"] Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.108278 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9klwx"] Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.109720 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-9w7qp"] Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.110617 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-9w7qp" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.111318 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-9w7qp"] Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.116743 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.124493 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-pmrm5"] Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.125304 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-pmrm5" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.132448 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-pmrm5"] Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.135740 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.163546 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.175259 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.195217 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.235300 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.255073 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.275866 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.295099 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.315428 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.335557 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.355802 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.376545 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.395924 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Oct 
13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.415795 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.436134 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.456339 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.476156 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.496253 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.516180 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.535423 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.556415 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.575325 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.595353 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.616372 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.636348 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.656012 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.676013 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.695600 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.716566 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.736143 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.756115 4720 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"mco-proxy-tls" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.775638 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.795779 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.815313 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.836495 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.856634 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.878486 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.895754 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.915687 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.936980 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.956108 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.975470 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Oct 13 17:26:34 crc kubenswrapper[4720]: I1013 17:26:34.996367 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Oct 13 17:26:35 crc kubenswrapper[4720]: I1013 17:26:35.016087 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Oct 13 17:26:35 crc kubenswrapper[4720]: I1013 17:26:35.035996 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Oct 13 17:26:35 crc kubenswrapper[4720]: I1013 17:26:35.054113 4720 request.go:700] Waited for 1.00956569s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dolm-operator-serviceaccount-dockercfg-rq7zk&limit=500&resourceVersion=0 Oct 13 17:26:35 crc kubenswrapper[4720]: I1013 17:26:35.056426 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Oct 13 17:26:35 crc kubenswrapper[4720]: I1013 17:26:35.076134 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Oct 13 17:26:35 crc kubenswrapper[4720]: I1013 17:26:35.097179 4720 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Oct 13 17:26:35 crc kubenswrapper[4720]: I1013 17:26:35.116480 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Oct 13 17:26:35 crc kubenswrapper[4720]: I1013 17:26:35.136374 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Oct 13 17:26:35 crc kubenswrapper[4720]: I1013 17:26:35.156449 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Oct 13 17:26:35 crc kubenswrapper[4720]: I1013 17:26:35.176298 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Oct 13 17:26:35 crc kubenswrapper[4720]: I1013 17:26:35.196408 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Oct 13 17:26:35 crc kubenswrapper[4720]: I1013 17:26:35.215650 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Oct 13 17:26:35 crc kubenswrapper[4720]: I1013 17:26:35.235931 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Oct 13 17:26:35 crc kubenswrapper[4720]: I1013 17:26:35.256398 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Oct 13 17:26:35 crc kubenswrapper[4720]: I1013 17:26:35.275896 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 13 17:26:35 crc kubenswrapper[4720]: I1013 17:26:35.296291 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 13 17:26:35 crc kubenswrapper[4720]: I1013 17:26:35.317004 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Oct 13 17:26:35 crc kubenswrapper[4720]: I1013 17:26:35.336398 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Oct 13 17:26:35 crc kubenswrapper[4720]: I1013 17:26:35.356249 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Oct 13 17:26:35 crc kubenswrapper[4720]: I1013 17:26:35.376738 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Oct 13 17:26:35 crc kubenswrapper[4720]: I1013 17:26:35.404274 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Oct 13 17:26:35 crc kubenswrapper[4720]: I1013 17:26:35.416088 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Oct 13 17:26:35 crc kubenswrapper[4720]: I1013 17:26:35.436221 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Oct 13 17:26:35 crc kubenswrapper[4720]: I1013 17:26:35.457045 4720 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-service-ca-operator"/"serving-cert" Oct 13 17:26:35 crc kubenswrapper[4720]: I1013 17:26:35.475849 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Oct 13 17:26:35 crc kubenswrapper[4720]: I1013 17:26:35.496904 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Oct 13 17:26:35 crc kubenswrapper[4720]: I1013 17:26:35.515543 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Oct 13 17:26:35 crc kubenswrapper[4720]: I1013 17:26:35.534951 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Oct 13 17:26:35 crc kubenswrapper[4720]: I1013 17:26:35.555942 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Oct 13 17:26:35 crc kubenswrapper[4720]: I1013 17:26:35.575903 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Oct 13 17:26:35 crc kubenswrapper[4720]: I1013 17:26:35.596058 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Oct 13 17:26:35 crc kubenswrapper[4720]: I1013 17:26:35.617175 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Oct 13 17:26:35 crc kubenswrapper[4720]: I1013 17:26:35.636636 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Oct 13 17:26:35 crc kubenswrapper[4720]: I1013 17:26:35.656181 4720 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Oct 13 17:26:35 crc kubenswrapper[4720]: I1013 17:26:35.676502 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Oct 13 17:26:35 crc kubenswrapper[4720]: I1013 17:26:35.696288 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Oct 13 17:26:35 crc kubenswrapper[4720]: I1013 17:26:35.716094 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Oct 13 17:26:35 crc kubenswrapper[4720]: I1013 17:26:35.735652 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Oct 13 17:26:35 crc kubenswrapper[4720]: I1013 17:26:35.783035 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pm6tt\" (UniqueName: \"kubernetes.io/projected/5abe378d-2b00-4d15-af94-5141934fca47-kube-api-access-pm6tt\") pod \"oauth-openshift-558db77b4-xbb4t\" (UID: \"5abe378d-2b00-4d15-af94-5141934fca47\") " pod="openshift-authentication/oauth-openshift-558db77b4-xbb4t" Oct 13 17:26:35 crc kubenswrapper[4720]: I1013 17:26:35.800434 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zb6ms\" (UniqueName: \"kubernetes.io/projected/7e7db006-6ea5-44bb-89ad-0d6a6a4810ca-kube-api-access-zb6ms\") pod \"apiserver-76f77b778f-mm8n9\" (UID: \"7e7db006-6ea5-44bb-89ad-0d6a6a4810ca\") " pod="openshift-apiserver/apiserver-76f77b778f-mm8n9" Oct 13 17:26:35 crc kubenswrapper[4720]: I1013 17:26:35.822473 4720 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krpdj\" (UniqueName: \"kubernetes.io/projected/43c40b45-9695-4d29-b627-c4ab23d1d6d0-kube-api-access-krpdj\") pod \"machine-api-operator-5694c8668f-dvc6z\" (UID: \"43c40b45-9695-4d29-b627-c4ab23d1d6d0\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dvc6z" Oct 13 17:26:35 crc kubenswrapper[4720]: I1013 17:26:35.839714 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndkph\" (UniqueName: \"kubernetes.io/projected/36b6918f-fd99-4666-982d-7636ea1d9a1c-kube-api-access-ndkph\") pod \"machine-approver-56656f9798-p8hj6\" (UID: \"36b6918f-fd99-4666-982d-7636ea1d9a1c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p8hj6" Oct 13 17:26:35 crc kubenswrapper[4720]: I1013 17:26:35.853176 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r28hp\" (UniqueName: \"kubernetes.io/projected/3a4c2111-a5d9-4567-91c6-59733a5d711f-kube-api-access-r28hp\") pod \"openshift-apiserver-operator-796bbdcf4f-4vqw6\" (UID: \"3a4c2111-a5d9-4567-91c6-59733a5d711f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4vqw6" Oct 13 17:26:35 crc kubenswrapper[4720]: I1013 17:26:35.883287 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzzrp\" (UniqueName: \"kubernetes.io/projected/b5179459-d832-4419-96ce-44dd4f055e98-kube-api-access-rzzrp\") pod \"route-controller-manager-6576b87f9c-9mm85\" (UID: \"b5179459-d832-4419-96ce-44dd4f055e98\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9mm85" Oct 13 17:26:35 crc kubenswrapper[4720]: I1013 17:26:35.896960 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Oct 13 17:26:35 crc kubenswrapper[4720]: I1013 17:26:35.917058 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Oct 13 17:26:35 crc kubenswrapper[4720]: I1013 17:26:35.936067 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-mm8n9" Oct 13 17:26:35 crc kubenswrapper[4720]: I1013 17:26:35.936861 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Oct 13 17:26:35 crc kubenswrapper[4720]: I1013 17:26:35.956112 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Oct 13 17:26:35 crc kubenswrapper[4720]: I1013 17:26:35.975299 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Oct 13 17:26:35 crc kubenswrapper[4720]: I1013 17:26:35.976561 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-dvc6z" Oct 13 17:26:35 crc kubenswrapper[4720]: I1013 17:26:35.993652 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9mm85" Oct 13 17:26:35 crc kubenswrapper[4720]: I1013 17:26:35.996897 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.016519 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.027353 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4vqw6" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.062372 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-xbb4t" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.080426 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p8hj6" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.113310 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f8b9591c-6d8d-4942-aa6c-f16b6b5f4992-trusted-ca\") pod \"ingress-operator-5b745b69d9-7vb4h\" (UID: \"f8b9591c-6d8d-4942-aa6c-f16b6b5f4992\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7vb4h" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.113342 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0ee94e27-af7c-49c7-a57f-8f1a18ba53e3-audit-policies\") pod \"apiserver-7bbb656c7d-mg7s9\" (UID: \"0ee94e27-af7c-49c7-a57f-8f1a18ba53e3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mg7s9" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.113362 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e04c241-5f40-47d5-8c67-e8092a483089-config\") pod \"controller-manager-879f6c89f-lwp58\" (UID: \"4e04c241-5f40-47d5-8c67-e8092a483089\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lwp58" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.113403 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/17cf8543-d064-4d3e-976f-2fbad57f2e56-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-jd5bf\" (UID: \"17cf8543-d064-4d3e-976f-2fbad57f2e56\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jd5bf" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.113422 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/e84d82f4-b6d9-4285-908b-55fd27e0e4c3-etcd-service-ca\") pod \"etcd-operator-b45778765-qnvwg\" (UID: \"e84d82f4-b6d9-4285-908b-55fd27e0e4c3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qnvwg" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.113437 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e04c241-5f40-47d5-8c67-e8092a483089-serving-cert\") pod 
\"controller-manager-879f6c89f-lwp58\" (UID: \"4e04c241-5f40-47d5-8c67-e8092a483089\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lwp58" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.113461 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qj42n\" (UniqueName: \"kubernetes.io/projected/e84d82f4-b6d9-4285-908b-55fd27e0e4c3-kube-api-access-qj42n\") pod \"etcd-operator-b45778765-qnvwg\" (UID: \"e84d82f4-b6d9-4285-908b-55fd27e0e4c3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qnvwg" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.113486 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5xfl\" (UID: \"2c0351ce-4eed-4d0e-b9a4-eb2746bf396a\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5xfl" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.113503 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/17cf8543-d064-4d3e-976f-2fbad57f2e56-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-jd5bf\" (UID: \"17cf8543-d064-4d3e-976f-2fbad57f2e56\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jd5bf" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.113528 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/e84d82f4-b6d9-4285-908b-55fd27e0e4c3-etcd-ca\") pod \"etcd-operator-b45778765-qnvwg\" (UID: \"e84d82f4-b6d9-4285-908b-55fd27e0e4c3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qnvwg" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.113543 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b0af5887-2244-4dfb-8e2a-a66ac6bf6762-service-ca\") pod \"console-f9d7485db-5cfzh\" (UID: \"b0af5887-2244-4dfb-8e2a-a66ac6bf6762\") " pod="openshift-console/console-f9d7485db-5cfzh" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.113559 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-972nv\" (UniqueName: \"kubernetes.io/projected/b0af5887-2244-4dfb-8e2a-a66ac6bf6762-kube-api-access-972nv\") pod \"console-f9d7485db-5cfzh\" (UID: \"b0af5887-2244-4dfb-8e2a-a66ac6bf6762\") " pod="openshift-console/console-f9d7485db-5cfzh" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.113594 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2df17f0-8c2c-4ca1-afe3-7b82dcb27712-serving-cert\") pod \"openshift-config-operator-7777fb866f-pc8vw\" (UID: \"b2df17f0-8c2c-4ca1-afe3-7b82dcb27712\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-pc8vw" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.113609 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/2c0351ce-4eed-4d0e-b9a4-eb2746bf396a-installation-pull-secrets\") pod \"image-registry-697d97f7c8-w5xfl\" (UID: 
\"2c0351ce-4eed-4d0e-b9a4-eb2746bf396a\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5xfl" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.113651 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b0af5887-2244-4dfb-8e2a-a66ac6bf6762-console-config\") pod \"console-f9d7485db-5cfzh\" (UID: \"b0af5887-2244-4dfb-8e2a-a66ac6bf6762\") " pod="openshift-console/console-f9d7485db-5cfzh" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.113668 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ee94e27-af7c-49c7-a57f-8f1a18ba53e3-serving-cert\") pod \"apiserver-7bbb656c7d-mg7s9\" (UID: \"0ee94e27-af7c-49c7-a57f-8f1a18ba53e3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mg7s9" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.113684 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08ee097f-0739-4c23-8ed9-696ded1864f2-config\") pod \"authentication-operator-69f744f599-8c7d6\" (UID: \"08ee097f-0739-4c23-8ed9-696ded1864f2\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8c7d6" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.113700 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2c0351ce-4eed-4d0e-b9a4-eb2746bf396a-bound-sa-token\") pod \"image-registry-697d97f7c8-w5xfl\" (UID: \"2c0351ce-4eed-4d0e-b9a4-eb2746bf396a\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5xfl" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.113745 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff6247fa-c7b7-4a78-9c52-48de3c488a6a-serving-cert\") pod \"console-operator-58897d9998-jrc4l\" (UID: \"ff6247fa-c7b7-4a78-9c52-48de3c488a6a\") " pod="openshift-console-operator/console-operator-58897d9998-jrc4l" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.113765 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0ee94e27-af7c-49c7-a57f-8f1a18ba53e3-encryption-config\") pod \"apiserver-7bbb656c7d-mg7s9\" (UID: \"0ee94e27-af7c-49c7-a57f-8f1a18ba53e3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mg7s9" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.113785 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnxnn\" (UniqueName: \"kubernetes.io/projected/2c0351ce-4eed-4d0e-b9a4-eb2746bf396a-kube-api-access-nnxnn\") pod \"image-registry-697d97f7c8-w5xfl\" (UID: \"2c0351ce-4eed-4d0e-b9a4-eb2746bf396a\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5xfl" Oct 13 17:26:36 crc kubenswrapper[4720]: E1013 17:26:36.114126 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 17:26:36.614115907 +0000 UTC m=+142.071366039 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5xfl" (UID: "2c0351ce-4eed-4d0e-b9a4-eb2746bf396a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.114756 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b0af5887-2244-4dfb-8e2a-a66ac6bf6762-console-serving-cert\") pod \"console-f9d7485db-5cfzh\" (UID: \"b0af5887-2244-4dfb-8e2a-a66ac6bf6762\") " pod="openshift-console/console-f9d7485db-5cfzh" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.114789 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b0af5887-2244-4dfb-8e2a-a66ac6bf6762-oauth-serving-cert\") pod \"console-f9d7485db-5cfzh\" (UID: \"b0af5887-2244-4dfb-8e2a-a66ac6bf6762\") " pod="openshift-console/console-f9d7485db-5cfzh" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.115059 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e84d82f4-b6d9-4285-908b-55fd27e0e4c3-serving-cert\") pod \"etcd-operator-b45778765-qnvwg\" (UID: \"e84d82f4-b6d9-4285-908b-55fd27e0e4c3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qnvwg" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.115077 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ee94e27-af7c-49c7-a57f-8f1a18ba53e3-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-mg7s9\" (UID: \"0ee94e27-af7c-49c7-a57f-8f1a18ba53e3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mg7s9" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.115092 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0ee94e27-af7c-49c7-a57f-8f1a18ba53e3-audit-dir\") pod \"apiserver-7bbb656c7d-mg7s9\" (UID: \"0ee94e27-af7c-49c7-a57f-8f1a18ba53e3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mg7s9" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.115238 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff6247fa-c7b7-4a78-9c52-48de3c488a6a-config\") pod \"console-operator-58897d9998-jrc4l\" (UID: \"ff6247fa-c7b7-4a78-9c52-48de3c488a6a\") " pod="openshift-console-operator/console-operator-58897d9998-jrc4l" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.115266 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25ljb\" (UniqueName: \"kubernetes.io/projected/7277d186-5645-445d-aea3-37215cf98836-kube-api-access-25ljb\") pod \"cluster-samples-operator-665b6dd947-fvlfw\" (UID: \"7277d186-5645-445d-aea3-37215cf98836\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fvlfw" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.115313 4720 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwhnv\" (UniqueName: \"kubernetes.io/projected/e33fd1d2-081b-4e68-ab37-623406daeaeb-kube-api-access-qwhnv\") pod \"downloads-7954f5f757-l8tzk\" (UID: \"e33fd1d2-081b-4e68-ab37-623406daeaeb\") " pod="openshift-console/downloads-7954f5f757-l8tzk" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.115337 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0ee94e27-af7c-49c7-a57f-8f1a18ba53e3-etcd-client\") pod \"apiserver-7bbb656c7d-mg7s9\" (UID: \"0ee94e27-af7c-49c7-a57f-8f1a18ba53e3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mg7s9" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.115358 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/08ee097f-0739-4c23-8ed9-696ded1864f2-serving-cert\") pod \"authentication-operator-69f744f599-8c7d6\" (UID: \"08ee097f-0739-4c23-8ed9-696ded1864f2\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8c7d6" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.115403 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ps7xq\" (UniqueName: \"kubernetes.io/projected/08ee097f-0739-4c23-8ed9-696ded1864f2-kube-api-access-ps7xq\") pod \"authentication-operator-69f744f599-8c7d6\" (UID: \"08ee097f-0739-4c23-8ed9-696ded1864f2\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8c7d6" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.115474 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mt28\" (UniqueName: \"kubernetes.io/projected/0ee94e27-af7c-49c7-a57f-8f1a18ba53e3-kube-api-access-8mt28\") pod \"apiserver-7bbb656c7d-mg7s9\" (UID: \"0ee94e27-af7c-49c7-a57f-8f1a18ba53e3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mg7s9" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.115516 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f8b9591c-6d8d-4942-aa6c-f16b6b5f4992-bound-sa-token\") pod \"ingress-operator-5b745b69d9-7vb4h\" (UID: \"f8b9591c-6d8d-4942-aa6c-f16b6b5f4992\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7vb4h" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.115533 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85w74\" (UniqueName: \"kubernetes.io/projected/f8b9591c-6d8d-4942-aa6c-f16b6b5f4992-kube-api-access-85w74\") pod \"ingress-operator-5b745b69d9-7vb4h\" (UID: \"f8b9591c-6d8d-4942-aa6c-f16b6b5f4992\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7vb4h" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.115712 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8d7b\" (UniqueName: \"kubernetes.io/projected/b2df17f0-8c2c-4ca1-afe3-7b82dcb27712-kube-api-access-x8d7b\") pod \"openshift-config-operator-7777fb866f-pc8vw\" (UID: \"b2df17f0-8c2c-4ca1-afe3-7b82dcb27712\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-pc8vw" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.115772 4720 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e84d82f4-b6d9-4285-908b-55fd27e0e4c3-config\") pod \"etcd-operator-b45778765-qnvwg\" (UID: \"e84d82f4-b6d9-4285-908b-55fd27e0e4c3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qnvwg" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.115812 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4e04c241-5f40-47d5-8c67-e8092a483089-client-ca\") pod \"controller-manager-879f6c89f-lwp58\" (UID: \"4e04c241-5f40-47d5-8c67-e8092a483089\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lwp58" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.115858 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/2c0351ce-4eed-4d0e-b9a4-eb2746bf396a-registry-certificates\") pod \"image-registry-697d97f7c8-w5xfl\" (UID: \"2c0351ce-4eed-4d0e-b9a4-eb2746bf396a\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5xfl" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.115896 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f8b9591c-6d8d-4942-aa6c-f16b6b5f4992-metrics-tls\") pod \"ingress-operator-5b745b69d9-7vb4h\" (UID: \"f8b9591c-6d8d-4942-aa6c-f16b6b5f4992\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7vb4h" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.115928 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b0af5887-2244-4dfb-8e2a-a66ac6bf6762-console-oauth-config\") pod \"console-f9d7485db-5cfzh\" (UID: \"b0af5887-2244-4dfb-8e2a-a66ac6bf6762\") " pod="openshift-console/console-f9d7485db-5cfzh" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.115943 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e84d82f4-b6d9-4285-908b-55fd27e0e4c3-etcd-client\") pod \"etcd-operator-b45778765-qnvwg\" (UID: \"e84d82f4-b6d9-4285-908b-55fd27e0e4c3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qnvwg" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.115974 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xpzx\" (UniqueName: \"kubernetes.io/projected/4e04c241-5f40-47d5-8c67-e8092a483089-kube-api-access-9xpzx\") pod \"controller-manager-879f6c89f-lwp58\" (UID: \"4e04c241-5f40-47d5-8c67-e8092a483089\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lwp58" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.116017 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-569v4\" (UniqueName: \"kubernetes.io/projected/ff6247fa-c7b7-4a78-9c52-48de3c488a6a-kube-api-access-569v4\") pod \"console-operator-58897d9998-jrc4l\" (UID: \"ff6247fa-c7b7-4a78-9c52-48de3c488a6a\") " pod="openshift-console-operator/console-operator-58897d9998-jrc4l" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.116295 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/ff6247fa-c7b7-4a78-9c52-48de3c488a6a-trusted-ca\") pod \"console-operator-58897d9998-jrc4l\" (UID: \"ff6247fa-c7b7-4a78-9c52-48de3c488a6a\") " pod="openshift-console-operator/console-operator-58897d9998-jrc4l" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.116328 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8qhp\" (UniqueName: \"kubernetes.io/projected/17cf8543-d064-4d3e-976f-2fbad57f2e56-kube-api-access-w8qhp\") pod \"cluster-image-registry-operator-dc59b4c8b-jd5bf\" (UID: \"17cf8543-d064-4d3e-976f-2fbad57f2e56\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jd5bf" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.116522 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0ee94e27-af7c-49c7-a57f-8f1a18ba53e3-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-mg7s9\" (UID: \"0ee94e27-af7c-49c7-a57f-8f1a18ba53e3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mg7s9" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.116762 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b0af5887-2244-4dfb-8e2a-a66ac6bf6762-trusted-ca-bundle\") pod \"console-f9d7485db-5cfzh\" (UID: \"b0af5887-2244-4dfb-8e2a-a66ac6bf6762\") " pod="openshift-console/console-f9d7485db-5cfzh" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.116816 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/17cf8543-d064-4d3e-976f-2fbad57f2e56-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-jd5bf\" (UID: \"17cf8543-d064-4d3e-976f-2fbad57f2e56\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jd5bf" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.116836 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/b2df17f0-8c2c-4ca1-afe3-7b82dcb27712-available-featuregates\") pod \"openshift-config-operator-7777fb866f-pc8vw\" (UID: \"b2df17f0-8c2c-4ca1-afe3-7b82dcb27712\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-pc8vw" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.121949 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/7277d186-5645-445d-aea3-37215cf98836-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-fvlfw\" (UID: \"7277d186-5645-445d-aea3-37215cf98836\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fvlfw" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.130519 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2c0351ce-4eed-4d0e-b9a4-eb2746bf396a-trusted-ca\") pod \"image-registry-697d97f7c8-w5xfl\" (UID: \"2c0351ce-4eed-4d0e-b9a4-eb2746bf396a\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5xfl" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.130936 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/2c0351ce-4eed-4d0e-b9a4-eb2746bf396a-ca-trust-extracted\") pod \"image-registry-697d97f7c8-w5xfl\" (UID: \"2c0351ce-4eed-4d0e-b9a4-eb2746bf396a\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5xfl" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.131462 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4e04c241-5f40-47d5-8c67-e8092a483089-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-lwp58\" (UID: \"4e04c241-5f40-47d5-8c67-e8092a483089\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lwp58" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.131866 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/08ee097f-0739-4c23-8ed9-696ded1864f2-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-8c7d6\" (UID: \"08ee097f-0739-4c23-8ed9-696ded1864f2\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8c7d6" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.132149 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/2c0351ce-4eed-4d0e-b9a4-eb2746bf396a-registry-tls\") pod \"image-registry-697d97f7c8-w5xfl\" (UID: \"2c0351ce-4eed-4d0e-b9a4-eb2746bf396a\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5xfl" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.132263 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/08ee097f-0739-4c23-8ed9-696ded1864f2-service-ca-bundle\") pod \"authentication-operator-69f744f599-8c7d6\" (UID: \"08ee097f-0739-4c23-8ed9-696ded1864f2\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8c7d6" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.220247 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-mm8n9"] Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.233926 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.234115 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f94b07c7-b111-450a-bd2a-0977944282a9-images\") pod \"machine-config-operator-74547568cd-b5ljk\" (UID: \"f94b07c7-b111-450a-bd2a-0977944282a9\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b5ljk" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.234159 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/7277d186-5645-445d-aea3-37215cf98836-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-fvlfw\" (UID: \"7277d186-5645-445d-aea3-37215cf98836\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fvlfw" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.234182 4720 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztjds\" (UniqueName: \"kubernetes.io/projected/f94b07c7-b111-450a-bd2a-0977944282a9-kube-api-access-ztjds\") pod \"machine-config-operator-74547568cd-b5ljk\" (UID: \"f94b07c7-b111-450a-bd2a-0977944282a9\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b5ljk" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.234231 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9bec609a-6eb7-479a-ae81-dd2d7acc1742-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-4x4s2\" (UID: \"9bec609a-6eb7-479a-ae81-dd2d7acc1742\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4x4s2" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.234249 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0a6d69da-3074-4b30-898b-4bb2eea1fb75-config-volume\") pod \"collect-profiles-29339595-f78vr\" (UID: \"0a6d69da-3074-4b30-898b-4bb2eea1fb75\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339595-f78vr" Oct 13 17:26:36 crc kubenswrapper[4720]: E1013 17:26:36.234282 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 17:26:36.734253643 +0000 UTC m=+142.191503825 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.234324 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/08ee097f-0739-4c23-8ed9-696ded1864f2-service-ca-bundle\") pod \"authentication-operator-69f744f599-8c7d6\" (UID: \"08ee097f-0739-4c23-8ed9-696ded1864f2\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8c7d6" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.234372 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a13b5004-f884-4558-b38b-b3c8028a73d5-metrics-certs\") pod \"router-default-5444994796-vttl9\" (UID: \"a13b5004-f884-4558-b38b-b3c8028a73d5\") " pod="openshift-ingress/router-default-5444994796-vttl9" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.234397 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82e3c60a-715e-4dc6-8137-c79aa854de0a-config\") pod \"service-ca-operator-777779d784-9snlb\" (UID: \"82e3c60a-715e-4dc6-8137-c79aa854de0a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-9snlb" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.234437 4720 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f8b9591c-6d8d-4942-aa6c-f16b6b5f4992-trusted-ca\") pod \"ingress-operator-5b745b69d9-7vb4h\" (UID: \"f8b9591c-6d8d-4942-aa6c-f16b6b5f4992\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7vb4h" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.234464 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0ee94e27-af7c-49c7-a57f-8f1a18ba53e3-audit-policies\") pod \"apiserver-7bbb656c7d-mg7s9\" (UID: \"0ee94e27-af7c-49c7-a57f-8f1a18ba53e3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mg7s9" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.234482 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e04c241-5f40-47d5-8c67-e8092a483089-config\") pod \"controller-manager-879f6c89f-lwp58\" (UID: \"4e04c241-5f40-47d5-8c67-e8092a483089\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lwp58" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.234502 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e2f98220-0477-4a6d-8cf3-fa055d01bc3a-registration-dir\") pod \"csi-hostpathplugin-9tdzd\" (UID: \"e2f98220-0477-4a6d-8cf3-fa055d01bc3a\") " pod="hostpath-provisioner/csi-hostpathplugin-9tdzd" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.234520 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/913f12c1-e62e-474e-b44f-868c1de3309b-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-5kh7f\" (UID: \"913f12c1-e62e-474e-b44f-868c1de3309b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5kh7f" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.234548 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/17cf8543-d064-4d3e-976f-2fbad57f2e56-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-jd5bf\" (UID: \"17cf8543-d064-4d3e-976f-2fbad57f2e56\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jd5bf" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.234567 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/e84d82f4-b6d9-4285-908b-55fd27e0e4c3-etcd-service-ca\") pod \"etcd-operator-b45778765-qnvwg\" (UID: \"e84d82f4-b6d9-4285-908b-55fd27e0e4c3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qnvwg" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.234602 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5xfl\" (UID: \"2c0351ce-4eed-4d0e-b9a4-eb2746bf396a\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5xfl" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.234625 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/b0af5887-2244-4dfb-8e2a-a66ac6bf6762-service-ca\") pod \"console-f9d7485db-5cfzh\" (UID: \"b0af5887-2244-4dfb-8e2a-a66ac6bf6762\") " pod="openshift-console/console-f9d7485db-5cfzh" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.234644 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82e3c60a-715e-4dc6-8137-c79aa854de0a-serving-cert\") pod \"service-ca-operator-777779d784-9snlb\" (UID: \"82e3c60a-715e-4dc6-8137-c79aa854de0a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-9snlb" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.234666 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5b9cfa7f-e80a-42b8-b6f0-239165447812-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-xpdm2\" (UID: \"5b9cfa7f-e80a-42b8-b6f0-239165447812\") " pod="openshift-marketplace/marketplace-operator-79b997595-xpdm2" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.234687 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/815faec7-4eeb-473a-b9ca-3ae41842aa02-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dl79l\" (UID: \"815faec7-4eeb-473a-b9ca-3ae41842aa02\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dl79l" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.234706 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/3f98514a-d643-4723-8d83-cda6c55d4874-profile-collector-cert\") pod \"catalog-operator-68c6474976-7xzcq\" (UID: \"3f98514a-d643-4723-8d83-cda6c55d4874\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7xzcq" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.234728 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2df17f0-8c2c-4ca1-afe3-7b82dcb27712-serving-cert\") pod \"openshift-config-operator-7777fb866f-pc8vw\" (UID: \"b2df17f0-8c2c-4ca1-afe3-7b82dcb27712\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-pc8vw" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.234750 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3f98514a-d643-4723-8d83-cda6c55d4874-srv-cert\") pod \"catalog-operator-68c6474976-7xzcq\" (UID: \"3f98514a-d643-4723-8d83-cda6c55d4874\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7xzcq" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.234770 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/e2f98220-0477-4a6d-8cf3-fa055d01bc3a-csi-data-dir\") pod \"csi-hostpathplugin-9tdzd\" (UID: \"e2f98220-0477-4a6d-8cf3-fa055d01bc3a\") " pod="hostpath-provisioner/csi-hostpathplugin-9tdzd" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.234788 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/b641d3d3-3e84-4b58-ba79-e1f941260618-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-hjhns\" (UID: \"b641d3d3-3e84-4b58-ba79-e1f941260618\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-hjhns" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.234816 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ee94e27-af7c-49c7-a57f-8f1a18ba53e3-serving-cert\") pod \"apiserver-7bbb656c7d-mg7s9\" (UID: \"0ee94e27-af7c-49c7-a57f-8f1a18ba53e3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mg7s9" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.234838 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a01c34e-55fd-4492-8fbe-c15dbb539271-config\") pod \"kube-apiserver-operator-766d6c64bb-mfblh\" (UID: \"3a01c34e-55fd-4492-8fbe-c15dbb539271\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mfblh" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.234857 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9bec609a-6eb7-479a-ae81-dd2d7acc1742-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-4x4s2\" (UID: \"9bec609a-6eb7-479a-ae81-dd2d7acc1742\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4x4s2" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.234880 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/93c081dc-50a9-43b9-874c-0b7e46ebbbc9-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-vhgw6\" (UID: \"93c081dc-50a9-43b9-874c-0b7e46ebbbc9\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vhgw6" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.234904 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scskp\" (UniqueName: \"kubernetes.io/projected/61e69b4f-7bcc-4ce4-9bd2-3200c0d0f883-kube-api-access-scskp\") pod \"service-ca-9c57cc56f-5tn27\" (UID: \"61e69b4f-7bcc-4ce4-9bd2-3200c0d0f883\") " pod="openshift-service-ca/service-ca-9c57cc56f-5tn27" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.234926 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrsq8\" (UniqueName: \"kubernetes.io/projected/913f12c1-e62e-474e-b44f-868c1de3309b-kube-api-access-lrsq8\") pod \"openshift-controller-manager-operator-756b6f6bc6-5kh7f\" (UID: \"913f12c1-e62e-474e-b44f-868c1de3309b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5kh7f" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.234948 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0ee94e27-af7c-49c7-a57f-8f1a18ba53e3-encryption-config\") pod \"apiserver-7bbb656c7d-mg7s9\" (UID: \"0ee94e27-af7c-49c7-a57f-8f1a18ba53e3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mg7s9" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.234964 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/e2f98220-0477-4a6d-8cf3-fa055d01bc3a-plugins-dir\") pod \"csi-hostpathplugin-9tdzd\" (UID: \"e2f98220-0477-4a6d-8cf3-fa055d01bc3a\") " pod="hostpath-provisioner/csi-hostpathplugin-9tdzd" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.234987 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnxnn\" (UniqueName: \"kubernetes.io/projected/2c0351ce-4eed-4d0e-b9a4-eb2746bf396a-kube-api-access-nnxnn\") pod \"image-registry-697d97f7c8-w5xfl\" (UID: \"2c0351ce-4eed-4d0e-b9a4-eb2746bf396a\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5xfl" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.235004 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0ee94e27-af7c-49c7-a57f-8f1a18ba53e3-audit-dir\") pod \"apiserver-7bbb656c7d-mg7s9\" (UID: \"0ee94e27-af7c-49c7-a57f-8f1a18ba53e3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mg7s9" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.235028 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b0af5887-2244-4dfb-8e2a-a66ac6bf6762-console-serving-cert\") pod \"console-f9d7485db-5cfzh\" (UID: \"b0af5887-2244-4dfb-8e2a-a66ac6bf6762\") " pod="openshift-console/console-f9d7485db-5cfzh" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.235046 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ee94e27-af7c-49c7-a57f-8f1a18ba53e3-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-mg7s9\" (UID: \"0ee94e27-af7c-49c7-a57f-8f1a18ba53e3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mg7s9" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.235065 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e84d82f4-b6d9-4285-908b-55fd27e0e4c3-serving-cert\") pod \"etcd-operator-b45778765-qnvwg\" (UID: \"e84d82f4-b6d9-4285-908b-55fd27e0e4c3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qnvwg" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.235079 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/08ee097f-0739-4c23-8ed9-696ded1864f2-service-ca-bundle\") pod \"authentication-operator-69f744f599-8c7d6\" (UID: \"08ee097f-0739-4c23-8ed9-696ded1864f2\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8c7d6" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.235087 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clc22\" (UniqueName: \"kubernetes.io/projected/3f98514a-d643-4723-8d83-cda6c55d4874-kube-api-access-clc22\") pod \"catalog-operator-68c6474976-7xzcq\" (UID: \"3f98514a-d643-4723-8d83-cda6c55d4874\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7xzcq" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.235313 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0ee94e27-af7c-49c7-a57f-8f1a18ba53e3-audit-dir\") pod \"apiserver-7bbb656c7d-mg7s9\" (UID: \"0ee94e27-af7c-49c7-a57f-8f1a18ba53e3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mg7s9" Oct 13 17:26:36 crc 
kubenswrapper[4720]: E1013 17:26:36.235427 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 17:26:36.735418423 +0000 UTC m=+142.192668665 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5xfl" (UID: "2c0351ce-4eed-4d0e-b9a4-eb2746bf396a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.235835 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b0af5887-2244-4dfb-8e2a-a66ac6bf6762-service-ca\") pod \"console-f9d7485db-5cfzh\" (UID: \"b0af5887-2244-4dfb-8e2a-a66ac6bf6762\") " pod="openshift-console/console-f9d7485db-5cfzh" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.235906 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f8b9591c-6d8d-4942-aa6c-f16b6b5f4992-trusted-ca\") pod \"ingress-operator-5b745b69d9-7vb4h\" (UID: \"f8b9591c-6d8d-4942-aa6c-f16b6b5f4992\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7vb4h" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.237298 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0ee94e27-af7c-49c7-a57f-8f1a18ba53e3-audit-policies\") pod \"apiserver-7bbb656c7d-mg7s9\" (UID: \"0ee94e27-af7c-49c7-a57f-8f1a18ba53e3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mg7s9" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.237679 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwhnv\" (UniqueName: \"kubernetes.io/projected/e33fd1d2-081b-4e68-ab37-623406daeaeb-kube-api-access-qwhnv\") pod \"downloads-7954f5f757-l8tzk\" (UID: \"e33fd1d2-081b-4e68-ab37-623406daeaeb\") " pod="openshift-console/downloads-7954f5f757-l8tzk" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.237707 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/08ee097f-0739-4c23-8ed9-696ded1864f2-serving-cert\") pod \"authentication-operator-69f744f599-8c7d6\" (UID: \"08ee097f-0739-4c23-8ed9-696ded1864f2\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8c7d6" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.237728 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/b1bdb74a-39cf-48fb-a76c-a2b362136fad-certs\") pod \"machine-config-server-scfnf\" (UID: \"b1bdb74a-39cf-48fb-a76c-a2b362136fad\") " pod="openshift-machine-config-operator/machine-config-server-scfnf" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.237777 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d0e58b14-51d4-4b94-ba16-1058cde1dda1-metrics-tls\") pod \"dns-default-pmrm5\" (UID: \"d0e58b14-51d4-4b94-ba16-1058cde1dda1\") " 
pod="openshift-dns/dns-default-pmrm5" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.237809 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mt28\" (UniqueName: \"kubernetes.io/projected/0ee94e27-af7c-49c7-a57f-8f1a18ba53e3-kube-api-access-8mt28\") pod \"apiserver-7bbb656c7d-mg7s9\" (UID: \"0ee94e27-af7c-49c7-a57f-8f1a18ba53e3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mg7s9" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.237827 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ps7xq\" (UniqueName: \"kubernetes.io/projected/08ee097f-0739-4c23-8ed9-696ded1864f2-kube-api-access-ps7xq\") pod \"authentication-operator-69f744f599-8c7d6\" (UID: \"08ee097f-0739-4c23-8ed9-696ded1864f2\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8c7d6" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.237847 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85w74\" (UniqueName: \"kubernetes.io/projected/f8b9591c-6d8d-4942-aa6c-f16b6b5f4992-kube-api-access-85w74\") pod \"ingress-operator-5b745b69d9-7vb4h\" (UID: \"f8b9591c-6d8d-4942-aa6c-f16b6b5f4992\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7vb4h" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.237866 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e84d82f4-b6d9-4285-908b-55fd27e0e4c3-config\") pod \"etcd-operator-b45778765-qnvwg\" (UID: \"e84d82f4-b6d9-4285-908b-55fd27e0e4c3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qnvwg" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.237918 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4e04c241-5f40-47d5-8c67-e8092a483089-client-ca\") pod \"controller-manager-879f6c89f-lwp58\" (UID: \"4e04c241-5f40-47d5-8c67-e8092a483089\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lwp58" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.237937 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vxnb\" (UniqueName: \"kubernetes.io/projected/93c081dc-50a9-43b9-874c-0b7e46ebbbc9-kube-api-access-9vxnb\") pod \"package-server-manager-789f6589d5-vhgw6\" (UID: \"93c081dc-50a9-43b9-874c-0b7e46ebbbc9\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vhgw6" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.237980 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f8b9591c-6d8d-4942-aa6c-f16b6b5f4992-metrics-tls\") pod \"ingress-operator-5b745b69d9-7vb4h\" (UID: \"f8b9591c-6d8d-4942-aa6c-f16b6b5f4992\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7vb4h" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.237997 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b0af5887-2244-4dfb-8e2a-a66ac6bf6762-console-oauth-config\") pod \"console-f9d7485db-5cfzh\" (UID: \"b0af5887-2244-4dfb-8e2a-a66ac6bf6762\") " pod="openshift-console/console-f9d7485db-5cfzh" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.238012 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-9xpzx\" (UniqueName: \"kubernetes.io/projected/4e04c241-5f40-47d5-8c67-e8092a483089-kube-api-access-9xpzx\") pod \"controller-manager-879f6c89f-lwp58\" (UID: \"4e04c241-5f40-47d5-8c67-e8092a483089\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lwp58" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.238030 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/61e69b4f-7bcc-4ce4-9bd2-3200c0d0f883-signing-key\") pod \"service-ca-9c57cc56f-5tn27\" (UID: \"61e69b4f-7bcc-4ce4-9bd2-3200c0d0f883\") " pod="openshift-service-ca/service-ca-9c57cc56f-5tn27" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.238047 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-569v4\" (UniqueName: \"kubernetes.io/projected/ff6247fa-c7b7-4a78-9c52-48de3c488a6a-kube-api-access-569v4\") pod \"console-operator-58897d9998-jrc4l\" (UID: \"ff6247fa-c7b7-4a78-9c52-48de3c488a6a\") " pod="openshift-console-operator/console-operator-58897d9998-jrc4l" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.238393 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ee94e27-af7c-49c7-a57f-8f1a18ba53e3-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-mg7s9\" (UID: \"0ee94e27-af7c-49c7-a57f-8f1a18ba53e3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mg7s9" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.238670 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e04c241-5f40-47d5-8c67-e8092a483089-config\") pod \"controller-manager-879f6c89f-lwp58\" (UID: \"4e04c241-5f40-47d5-8c67-e8092a483089\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lwp58" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.238660 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e84d82f4-b6d9-4285-908b-55fd27e0e4c3-config\") pod \"etcd-operator-b45778765-qnvwg\" (UID: \"e84d82f4-b6d9-4285-908b-55fd27e0e4c3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qnvwg" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.239239 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/e84d82f4-b6d9-4285-908b-55fd27e0e4c3-etcd-service-ca\") pod \"etcd-operator-b45778765-qnvwg\" (UID: \"e84d82f4-b6d9-4285-908b-55fd27e0e4c3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qnvwg" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.239632 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5acdfefb-0861-4196-a2f9-59d1784025a8-srv-cert\") pod \"olm-operator-6b444d44fb-56p82\" (UID: \"5acdfefb-0861-4196-a2f9-59d1784025a8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-56p82" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.239799 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/0fb88719-7082-419e-8eb0-7eb7d3bf9719-tmpfs\") pod \"packageserver-d55dfcdfc-9klwx\" (UID: \"0fb88719-7082-419e-8eb0-7eb7d3bf9719\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9klwx" Oct 13 
17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.239933 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b0af5887-2244-4dfb-8e2a-a66ac6bf6762-trusted-ca-bundle\") pod \"console-f9d7485db-5cfzh\" (UID: \"b0af5887-2244-4dfb-8e2a-a66ac6bf6762\") " pod="openshift-console/console-f9d7485db-5cfzh" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.241108 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/17cf8543-d064-4d3e-976f-2fbad57f2e56-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-jd5bf\" (UID: \"17cf8543-d064-4d3e-976f-2fbad57f2e56\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jd5bf" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.241314 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/b2df17f0-8c2c-4ca1-afe3-7b82dcb27712-available-featuregates\") pod \"openshift-config-operator-7777fb866f-pc8vw\" (UID: \"b2df17f0-8c2c-4ca1-afe3-7b82dcb27712\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-pc8vw" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.241407 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ee94e27-af7c-49c7-a57f-8f1a18ba53e3-serving-cert\") pod \"apiserver-7bbb656c7d-mg7s9\" (UID: \"0ee94e27-af7c-49c7-a57f-8f1a18ba53e3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mg7s9" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.241417 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0fb88719-7082-419e-8eb0-7eb7d3bf9719-webhook-cert\") pod \"packageserver-d55dfcdfc-9klwx\" (UID: \"0fb88719-7082-419e-8eb0-7eb7d3bf9719\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9klwx" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.240455 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/7277d186-5645-445d-aea3-37215cf98836-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-fvlfw\" (UID: \"7277d186-5645-445d-aea3-37215cf98836\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fvlfw" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.240078 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b0af5887-2244-4dfb-8e2a-a66ac6bf6762-console-serving-cert\") pod \"console-f9d7485db-5cfzh\" (UID: \"b0af5887-2244-4dfb-8e2a-a66ac6bf6762\") " pod="openshift-console/console-f9d7485db-5cfzh" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.241271 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b0af5887-2244-4dfb-8e2a-a66ac6bf6762-trusted-ca-bundle\") pod \"console-f9d7485db-5cfzh\" (UID: \"b0af5887-2244-4dfb-8e2a-a66ac6bf6762\") " pod="openshift-console/console-f9d7485db-5cfzh" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.241698 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/3a01c34e-55fd-4492-8fbe-c15dbb539271-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-mfblh\" (UID: \"3a01c34e-55fd-4492-8fbe-c15dbb539271\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mfblh" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.241734 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/815faec7-4eeb-473a-b9ca-3ae41842aa02-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dl79l\" (UID: \"815faec7-4eeb-473a-b9ca-3ae41842aa02\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dl79l" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.241782 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psph8\" (UniqueName: \"kubernetes.io/projected/0fb88719-7082-419e-8eb0-7eb7d3bf9719-kube-api-access-psph8\") pod \"packageserver-d55dfcdfc-9klwx\" (UID: \"0fb88719-7082-419e-8eb0-7eb7d3bf9719\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9klwx" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.241804 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/e2f98220-0477-4a6d-8cf3-fa055d01bc3a-mountpoint-dir\") pod \"csi-hostpathplugin-9tdzd\" (UID: \"e2f98220-0477-4a6d-8cf3-fa055d01bc3a\") " pod="hostpath-provisioner/csi-hostpathplugin-9tdzd" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.241836 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkjqg\" (UniqueName: \"kubernetes.io/projected/90197382-1e9f-4823-a4e7-92fafeb46d66-kube-api-access-lkjqg\") pod \"dns-operator-744455d44c-jvfrr\" (UID: \"90197382-1e9f-4823-a4e7-92fafeb46d66\") " pod="openshift-dns-operator/dns-operator-744455d44c-jvfrr" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.241866 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2c0351ce-4eed-4d0e-b9a4-eb2746bf396a-trusted-ca\") pod \"image-registry-697d97f7c8-w5xfl\" (UID: \"2c0351ce-4eed-4d0e-b9a4-eb2746bf396a\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5xfl" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.241909 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/a13b5004-f884-4558-b38b-b3c8028a73d5-stats-auth\") pod \"router-default-5444994796-vttl9\" (UID: \"a13b5004-f884-4558-b38b-b3c8028a73d5\") " pod="openshift-ingress/router-default-5444994796-vttl9" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.241934 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/2c0351ce-4eed-4d0e-b9a4-eb2746bf396a-ca-trust-extracted\") pod \"image-registry-697d97f7c8-w5xfl\" (UID: \"2c0351ce-4eed-4d0e-b9a4-eb2746bf396a\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5xfl" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.242016 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/b2df17f0-8c2c-4ca1-afe3-7b82dcb27712-available-featuregates\") pod 
\"openshift-config-operator-7777fb866f-pc8vw\" (UID: \"b2df17f0-8c2c-4ca1-afe3-7b82dcb27712\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-pc8vw" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.242231 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4e04c241-5f40-47d5-8c67-e8092a483089-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-lwp58\" (UID: \"4e04c241-5f40-47d5-8c67-e8092a483089\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lwp58" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.242266 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/08ee097f-0739-4c23-8ed9-696ded1864f2-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-8c7d6\" (UID: \"08ee097f-0739-4c23-8ed9-696ded1864f2\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8c7d6" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.242299 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgtp8\" (UniqueName: \"kubernetes.io/projected/44e9378e-be31-4703-b23c-f7ffdbc89a2b-kube-api-access-zgtp8\") pod \"machine-config-controller-84d6567774-jw72g\" (UID: \"44e9378e-be31-4703-b23c-f7ffdbc89a2b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jw72g" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.242670 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/2c0351ce-4eed-4d0e-b9a4-eb2746bf396a-ca-trust-extracted\") pod \"image-registry-697d97f7c8-w5xfl\" (UID: \"2c0351ce-4eed-4d0e-b9a4-eb2746bf396a\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5xfl" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.243627 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/2c0351ce-4eed-4d0e-b9a4-eb2746bf396a-registry-tls\") pod \"image-registry-697d97f7c8-w5xfl\" (UID: \"2c0351ce-4eed-4d0e-b9a4-eb2746bf396a\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5xfl" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.243706 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2c0351ce-4eed-4d0e-b9a4-eb2746bf396a-trusted-ca\") pod \"image-registry-697d97f7c8-w5xfl\" (UID: \"2c0351ce-4eed-4d0e-b9a4-eb2746bf396a\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5xfl" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.243727 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/94c6e5cb-a6e1-453b-b6c2-0a4f397ced92-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-sxcr7\" (UID: \"94c6e5cb-a6e1-453b-b6c2-0a4f397ced92\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sxcr7" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.244430 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsgtw\" (UniqueName: \"kubernetes.io/projected/5b9cfa7f-e80a-42b8-b6f0-239165447812-kube-api-access-wsgtw\") pod \"marketplace-operator-79b997595-xpdm2\" 
(UID: \"5b9cfa7f-e80a-42b8-b6f0-239165447812\") " pod="openshift-marketplace/marketplace-operator-79b997595-xpdm2" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.244520 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d24nv\" (UniqueName: \"kubernetes.io/projected/d0e58b14-51d4-4b94-ba16-1058cde1dda1-kube-api-access-d24nv\") pod \"dns-default-pmrm5\" (UID: \"d0e58b14-51d4-4b94-ba16-1058cde1dda1\") " pod="openshift-dns/dns-default-pmrm5" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.244578 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/44e9378e-be31-4703-b23c-f7ffdbc89a2b-proxy-tls\") pod \"machine-config-controller-84d6567774-jw72g\" (UID: \"44e9378e-be31-4703-b23c-f7ffdbc89a2b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jw72g" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.244633 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e04c241-5f40-47d5-8c67-e8092a483089-serving-cert\") pod \"controller-manager-879f6c89f-lwp58\" (UID: \"4e04c241-5f40-47d5-8c67-e8092a483089\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lwp58" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.244682 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgqfm\" (UniqueName: \"kubernetes.io/projected/b1bdb74a-39cf-48fb-a76c-a2b362136fad-kube-api-access-vgqfm\") pod \"machine-config-server-scfnf\" (UID: \"b1bdb74a-39cf-48fb-a76c-a2b362136fad\") " pod="openshift-machine-config-operator/machine-config-server-scfnf" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.244811 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qj42n\" (UniqueName: \"kubernetes.io/projected/e84d82f4-b6d9-4285-908b-55fd27e0e4c3-kube-api-access-qj42n\") pod \"etcd-operator-b45778765-qnvwg\" (UID: \"e84d82f4-b6d9-4285-908b-55fd27e0e4c3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qnvwg" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.244855 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0a6d69da-3074-4b30-898b-4bb2eea1fb75-secret-volume\") pod \"collect-profiles-29339595-f78vr\" (UID: \"0a6d69da-3074-4b30-898b-4bb2eea1fb75\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339595-f78vr" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.244908 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/17cf8543-d064-4d3e-976f-2fbad57f2e56-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-jd5bf\" (UID: \"17cf8543-d064-4d3e-976f-2fbad57f2e56\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jd5bf" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.244940 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f94b07c7-b111-450a-bd2a-0977944282a9-proxy-tls\") pod \"machine-config-operator-74547568cd-b5ljk\" (UID: \"f94b07c7-b111-450a-bd2a-0977944282a9\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b5ljk" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.244964 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/e84d82f4-b6d9-4285-908b-55fd27e0e4c3-etcd-ca\") pod \"etcd-operator-b45778765-qnvwg\" (UID: \"e84d82f4-b6d9-4285-908b-55fd27e0e4c3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qnvwg" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.244977 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/08ee097f-0739-4c23-8ed9-696ded1864f2-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-8c7d6\" (UID: \"08ee097f-0739-4c23-8ed9-696ded1864f2\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8c7d6" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.245021 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdgs5\" (UniqueName: \"kubernetes.io/projected/82e3c60a-715e-4dc6-8137-c79aa854de0a-kube-api-access-kdgs5\") pod \"service-ca-operator-777779d784-9snlb\" (UID: \"82e3c60a-715e-4dc6-8137-c79aa854de0a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-9snlb" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.245050 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/815faec7-4eeb-473a-b9ca-3ae41842aa02-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dl79l\" (UID: \"815faec7-4eeb-473a-b9ca-3ae41842aa02\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dl79l" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.245072 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vb7jn\" (UniqueName: \"kubernetes.io/projected/b641d3d3-3e84-4b58-ba79-e1f941260618-kube-api-access-vb7jn\") pod \"multus-admission-controller-857f4d67dd-hjhns\" (UID: \"b641d3d3-3e84-4b58-ba79-e1f941260618\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-hjhns" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.245120 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-972nv\" (UniqueName: \"kubernetes.io/projected/b0af5887-2244-4dfb-8e2a-a66ac6bf6762-kube-api-access-972nv\") pod \"console-f9d7485db-5cfzh\" (UID: \"b0af5887-2244-4dfb-8e2a-a66ac6bf6762\") " pod="openshift-console/console-f9d7485db-5cfzh" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.245202 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/b1bdb74a-39cf-48fb-a76c-a2b362136fad-node-bootstrap-token\") pod \"machine-config-server-scfnf\" (UID: \"b1bdb74a-39cf-48fb-a76c-a2b362136fad\") " pod="openshift-machine-config-operator/machine-config-server-scfnf" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.245413 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/17cf8543-d064-4d3e-976f-2fbad57f2e56-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-jd5bf\" (UID: \"17cf8543-d064-4d3e-976f-2fbad57f2e56\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jd5bf" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.245426 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f8b9591c-6d8d-4942-aa6c-f16b6b5f4992-metrics-tls\") pod \"ingress-operator-5b745b69d9-7vb4h\" (UID: \"f8b9591c-6d8d-4942-aa6c-f16b6b5f4992\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7vb4h" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.245499 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/e84d82f4-b6d9-4285-908b-55fd27e0e4c3-etcd-ca\") pod \"etcd-operator-b45778765-qnvwg\" (UID: \"e84d82f4-b6d9-4285-908b-55fd27e0e4c3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qnvwg" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.245828 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a13b5004-f884-4558-b38b-b3c8028a73d5-service-ca-bundle\") pod \"router-default-5444994796-vttl9\" (UID: \"a13b5004-f884-4558-b38b-b3c8028a73d5\") " pod="openshift-ingress/router-default-5444994796-vttl9" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.246165 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/2c0351ce-4eed-4d0e-b9a4-eb2746bf396a-installation-pull-secrets\") pod \"image-registry-697d97f7c8-w5xfl\" (UID: \"2c0351ce-4eed-4d0e-b9a4-eb2746bf396a\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5xfl" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.246303 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/cabe6b29-bccd-4995-ab54-b6cabc86f7bf-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-mjzm6\" (UID: \"cabe6b29-bccd-4995-ab54-b6cabc86f7bf\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mjzm6" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.246412 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/2c0351ce-4eed-4d0e-b9a4-eb2746bf396a-registry-tls\") pod \"image-registry-697d97f7c8-w5xfl\" (UID: \"2c0351ce-4eed-4d0e-b9a4-eb2746bf396a\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5xfl" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.246413 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/90197382-1e9f-4823-a4e7-92fafeb46d66-metrics-tls\") pod \"dns-operator-744455d44c-jvfrr\" (UID: \"90197382-1e9f-4823-a4e7-92fafeb46d66\") " pod="openshift-dns-operator/dns-operator-744455d44c-jvfrr" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.246470 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b0af5887-2244-4dfb-8e2a-a66ac6bf6762-console-config\") pod \"console-f9d7485db-5cfzh\" (UID: \"b0af5887-2244-4dfb-8e2a-a66ac6bf6762\") " pod="openshift-console/console-f9d7485db-5cfzh" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.246490 4720 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bvc8\" (UniqueName: \"kubernetes.io/projected/94c6e5cb-a6e1-453b-b6c2-0a4f397ced92-kube-api-access-6bvc8\") pod \"kube-storage-version-migrator-operator-b67b599dd-sxcr7\" (UID: \"94c6e5cb-a6e1-453b-b6c2-0a4f397ced92\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sxcr7" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.246511 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08ee097f-0739-4c23-8ed9-696ded1864f2-config\") pod \"authentication-operator-69f744f599-8c7d6\" (UID: \"08ee097f-0739-4c23-8ed9-696ded1864f2\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8c7d6" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.246530 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f94b07c7-b111-450a-bd2a-0977944282a9-auth-proxy-config\") pod \"machine-config-operator-74547568cd-b5ljk\" (UID: \"f94b07c7-b111-450a-bd2a-0977944282a9\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b5ljk" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.246757 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0fb88719-7082-419e-8eb0-7eb7d3bf9719-apiservice-cert\") pod \"packageserver-d55dfcdfc-9klwx\" (UID: \"0fb88719-7082-419e-8eb0-7eb7d3bf9719\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9klwx" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.246786 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2c0351ce-4eed-4d0e-b9a4-eb2746bf396a-bound-sa-token\") pod \"image-registry-697d97f7c8-w5xfl\" (UID: \"2c0351ce-4eed-4d0e-b9a4-eb2746bf396a\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5xfl" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.246806 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94c6e5cb-a6e1-453b-b6c2-0a4f397ced92-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-sxcr7\" (UID: \"94c6e5cb-a6e1-453b-b6c2-0a4f397ced92\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sxcr7" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.246831 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff6247fa-c7b7-4a78-9c52-48de3c488a6a-serving-cert\") pod \"console-operator-58897d9998-jrc4l\" (UID: \"ff6247fa-c7b7-4a78-9c52-48de3c488a6a\") " pod="openshift-console-operator/console-operator-58897d9998-jrc4l" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.246851 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfg9v\" (UniqueName: \"kubernetes.io/projected/fcbfa827-1469-4afb-b411-c6015d3e3195-kube-api-access-zfg9v\") pod \"ingress-canary-9w7qp\" (UID: \"fcbfa827-1469-4afb-b411-c6015d3e3195\") " pod="openshift-ingress-canary/ingress-canary-9w7qp" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.246867 4720 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b0af5887-2244-4dfb-8e2a-a66ac6bf6762-oauth-serving-cert\") pod \"console-f9d7485db-5cfzh\" (UID: \"b0af5887-2244-4dfb-8e2a-a66ac6bf6762\") " pod="openshift-console/console-f9d7485db-5cfzh" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.246882 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/61e69b4f-7bcc-4ce4-9bd2-3200c0d0f883-signing-cabundle\") pod \"service-ca-9c57cc56f-5tn27\" (UID: \"61e69b4f-7bcc-4ce4-9bd2-3200c0d0f883\") " pod="openshift-service-ca/service-ca-9c57cc56f-5tn27" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.246899 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/44e9378e-be31-4703-b23c-f7ffdbc89a2b-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-jw72g\" (UID: \"44e9378e-be31-4703-b23c-f7ffdbc89a2b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jw72g" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.246916 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a01c34e-55fd-4492-8fbe-c15dbb539271-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-mfblh\" (UID: \"3a01c34e-55fd-4492-8fbe-c15dbb539271\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mfblh" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.246931 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5b9cfa7f-e80a-42b8-b6f0-239165447812-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-xpdm2\" (UID: \"5b9cfa7f-e80a-42b8-b6f0-239165447812\") " pod="openshift-marketplace/marketplace-operator-79b997595-xpdm2" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.246957 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff6247fa-c7b7-4a78-9c52-48de3c488a6a-config\") pod \"console-operator-58897d9998-jrc4l\" (UID: \"ff6247fa-c7b7-4a78-9c52-48de3c488a6a\") " pod="openshift-console-operator/console-operator-58897d9998-jrc4l" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.246974 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25ljb\" (UniqueName: \"kubernetes.io/projected/7277d186-5645-445d-aea3-37215cf98836-kube-api-access-25ljb\") pod \"cluster-samples-operator-665b6dd947-fvlfw\" (UID: \"7277d186-5645-445d-aea3-37215cf98836\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fvlfw" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.246992 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xq9vw\" (UniqueName: \"kubernetes.io/projected/e2f98220-0477-4a6d-8cf3-fa055d01bc3a-kube-api-access-xq9vw\") pod \"csi-hostpathplugin-9tdzd\" (UID: \"e2f98220-0477-4a6d-8cf3-fa055d01bc3a\") " pod="hostpath-provisioner/csi-hostpathplugin-9tdzd" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.247010 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/0ee94e27-af7c-49c7-a57f-8f1a18ba53e3-etcd-client\") pod \"apiserver-7bbb656c7d-mg7s9\" (UID: \"0ee94e27-af7c-49c7-a57f-8f1a18ba53e3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mg7s9" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.247027 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/913f12c1-e62e-474e-b44f-868c1de3309b-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-5kh7f\" (UID: \"913f12c1-e62e-474e-b44f-868c1de3309b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5kh7f" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.247047 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxkbr\" (UniqueName: \"kubernetes.io/projected/5acdfefb-0861-4196-a2f9-59d1784025a8-kube-api-access-zxkbr\") pod \"olm-operator-6b444d44fb-56p82\" (UID: \"5acdfefb-0861-4196-a2f9-59d1784025a8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-56p82" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.247063 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-977kx\" (UniqueName: \"kubernetes.io/projected/cabe6b29-bccd-4995-ab54-b6cabc86f7bf-kube-api-access-977kx\") pod \"control-plane-machine-set-operator-78cbb6b69f-mjzm6\" (UID: \"cabe6b29-bccd-4995-ab54-b6cabc86f7bf\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mjzm6" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.247079 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9bec609a-6eb7-479a-ae81-dd2d7acc1742-config\") pod \"kube-controller-manager-operator-78b949d7b-4x4s2\" (UID: \"9bec609a-6eb7-479a-ae81-dd2d7acc1742\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4x4s2" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.247101 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f8b9591c-6d8d-4942-aa6c-f16b6b5f4992-bound-sa-token\") pod \"ingress-operator-5b745b69d9-7vb4h\" (UID: \"f8b9591c-6d8d-4942-aa6c-f16b6b5f4992\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7vb4h" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.247118 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8d7b\" (UniqueName: \"kubernetes.io/projected/b2df17f0-8c2c-4ca1-afe3-7b82dcb27712-kube-api-access-x8d7b\") pod \"openshift-config-operator-7777fb866f-pc8vw\" (UID: \"b2df17f0-8c2c-4ca1-afe3-7b82dcb27712\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-pc8vw" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.247135 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fcbfa827-1469-4afb-b411-c6015d3e3195-cert\") pod \"ingress-canary-9w7qp\" (UID: \"fcbfa827-1469-4afb-b411-c6015d3e3195\") " pod="openshift-ingress-canary/ingress-canary-9w7qp" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.247150 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/d0e58b14-51d4-4b94-ba16-1058cde1dda1-config-volume\") pod \"dns-default-pmrm5\" (UID: \"d0e58b14-51d4-4b94-ba16-1058cde1dda1\") " pod="openshift-dns/dns-default-pmrm5" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.247156 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08ee097f-0739-4c23-8ed9-696ded1864f2-config\") pod \"authentication-operator-69f744f599-8c7d6\" (UID: \"08ee097f-0739-4c23-8ed9-696ded1864f2\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8c7d6" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.247179 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/2c0351ce-4eed-4d0e-b9a4-eb2746bf396a-registry-certificates\") pod \"image-registry-697d97f7c8-w5xfl\" (UID: \"2c0351ce-4eed-4d0e-b9a4-eb2746bf396a\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5xfl" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.247209 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e84d82f4-b6d9-4285-908b-55fd27e0e4c3-etcd-client\") pod \"etcd-operator-b45778765-qnvwg\" (UID: \"e84d82f4-b6d9-4285-908b-55fd27e0e4c3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qnvwg" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.247226 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/a13b5004-f884-4558-b38b-b3c8028a73d5-default-certificate\") pod \"router-default-5444994796-vttl9\" (UID: \"a13b5004-f884-4558-b38b-b3c8028a73d5\") " pod="openshift-ingress/router-default-5444994796-vttl9" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.247241 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/5acdfefb-0861-4196-a2f9-59d1784025a8-profile-collector-cert\") pod \"olm-operator-6b444d44fb-56p82\" (UID: \"5acdfefb-0861-4196-a2f9-59d1784025a8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-56p82" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.247258 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5s65\" (UniqueName: \"kubernetes.io/projected/0a6d69da-3074-4b30-898b-4bb2eea1fb75-kube-api-access-d5s65\") pod \"collect-profiles-29339595-f78vr\" (UID: \"0a6d69da-3074-4b30-898b-4bb2eea1fb75\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339595-f78vr" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.247279 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ff6247fa-c7b7-4a78-9c52-48de3c488a6a-trusted-ca\") pod \"console-operator-58897d9998-jrc4l\" (UID: \"ff6247fa-c7b7-4a78-9c52-48de3c488a6a\") " pod="openshift-console-operator/console-operator-58897d9998-jrc4l" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.247295 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8qhp\" (UniqueName: \"kubernetes.io/projected/17cf8543-d064-4d3e-976f-2fbad57f2e56-kube-api-access-w8qhp\") pod \"cluster-image-registry-operator-dc59b4c8b-jd5bf\" (UID: \"17cf8543-d064-4d3e-976f-2fbad57f2e56\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jd5bf" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.247313 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kszbh\" (UniqueName: \"kubernetes.io/projected/a13b5004-f884-4558-b38b-b3c8028a73d5-kube-api-access-kszbh\") pod \"router-default-5444994796-vttl9\" (UID: \"a13b5004-f884-4558-b38b-b3c8028a73d5\") " pod="openshift-ingress/router-default-5444994796-vttl9" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.247330 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98nln\" (UniqueName: \"kubernetes.io/projected/599353db-9adf-4393-9920-4de023b156c4-kube-api-access-98nln\") pod \"migrator-59844c95c7-vm4rv\" (UID: \"599353db-9adf-4393-9920-4de023b156c4\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vm4rv" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.247346 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0ee94e27-af7c-49c7-a57f-8f1a18ba53e3-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-mg7s9\" (UID: \"0ee94e27-af7c-49c7-a57f-8f1a18ba53e3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mg7s9" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.247362 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e2f98220-0477-4a6d-8cf3-fa055d01bc3a-socket-dir\") pod \"csi-hostpathplugin-9tdzd\" (UID: \"e2f98220-0477-4a6d-8cf3-fa055d01bc3a\") " pod="hostpath-provisioner/csi-hostpathplugin-9tdzd" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.247393 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e84d82f4-b6d9-4285-908b-55fd27e0e4c3-serving-cert\") pod \"etcd-operator-b45778765-qnvwg\" (UID: \"e84d82f4-b6d9-4285-908b-55fd27e0e4c3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qnvwg" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.247444 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b0af5887-2244-4dfb-8e2a-a66ac6bf6762-console-config\") pod \"console-f9d7485db-5cfzh\" (UID: \"b0af5887-2244-4dfb-8e2a-a66ac6bf6762\") " pod="openshift-console/console-f9d7485db-5cfzh" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.247475 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/08ee097f-0739-4c23-8ed9-696ded1864f2-serving-cert\") pod \"authentication-operator-69f744f599-8c7d6\" (UID: \"08ee097f-0739-4c23-8ed9-696ded1864f2\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8c7d6" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.248101 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0ee94e27-af7c-49c7-a57f-8f1a18ba53e3-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-mg7s9\" (UID: \"0ee94e27-af7c-49c7-a57f-8f1a18ba53e3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mg7s9" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.248177 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/b0af5887-2244-4dfb-8e2a-a66ac6bf6762-oauth-serving-cert\") pod \"console-f9d7485db-5cfzh\" (UID: \"b0af5887-2244-4dfb-8e2a-a66ac6bf6762\") " pod="openshift-console/console-f9d7485db-5cfzh" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.248277 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff6247fa-c7b7-4a78-9c52-48de3c488a6a-config\") pod \"console-operator-58897d9998-jrc4l\" (UID: \"ff6247fa-c7b7-4a78-9c52-48de3c488a6a\") " pod="openshift-console-operator/console-operator-58897d9998-jrc4l" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.248529 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ff6247fa-c7b7-4a78-9c52-48de3c488a6a-trusted-ca\") pod \"console-operator-58897d9998-jrc4l\" (UID: \"ff6247fa-c7b7-4a78-9c52-48de3c488a6a\") " pod="openshift-console-operator/console-operator-58897d9998-jrc4l" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.249162 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/2c0351ce-4eed-4d0e-b9a4-eb2746bf396a-registry-certificates\") pod \"image-registry-697d97f7c8-w5xfl\" (UID: \"2c0351ce-4eed-4d0e-b9a4-eb2746bf396a\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5xfl" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.249817 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0ee94e27-af7c-49c7-a57f-8f1a18ba53e3-encryption-config\") pod \"apiserver-7bbb656c7d-mg7s9\" (UID: \"0ee94e27-af7c-49c7-a57f-8f1a18ba53e3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mg7s9" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.250114 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/17cf8543-d064-4d3e-976f-2fbad57f2e56-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-jd5bf\" (UID: \"17cf8543-d064-4d3e-976f-2fbad57f2e56\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jd5bf" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.250146 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2df17f0-8c2c-4ca1-afe3-7b82dcb27712-serving-cert\") pod \"openshift-config-operator-7777fb866f-pc8vw\" (UID: \"b2df17f0-8c2c-4ca1-afe3-7b82dcb27712\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-pc8vw" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.250304 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e04c241-5f40-47d5-8c67-e8092a483089-serving-cert\") pod \"controller-manager-879f6c89f-lwp58\" (UID: \"4e04c241-5f40-47d5-8c67-e8092a483089\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lwp58" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.250452 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4e04c241-5f40-47d5-8c67-e8092a483089-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-lwp58\" (UID: \"4e04c241-5f40-47d5-8c67-e8092a483089\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lwp58" Oct 13 17:26:36 crc 
kubenswrapper[4720]: I1013 17:26:36.251328 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff6247fa-c7b7-4a78-9c52-48de3c488a6a-serving-cert\") pod \"console-operator-58897d9998-jrc4l\" (UID: \"ff6247fa-c7b7-4a78-9c52-48de3c488a6a\") " pod="openshift-console-operator/console-operator-58897d9998-jrc4l" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.251534 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4e04c241-5f40-47d5-8c67-e8092a483089-client-ca\") pod \"controller-manager-879f6c89f-lwp58\" (UID: \"4e04c241-5f40-47d5-8c67-e8092a483089\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lwp58" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.251989 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e84d82f4-b6d9-4285-908b-55fd27e0e4c3-etcd-client\") pod \"etcd-operator-b45778765-qnvwg\" (UID: \"e84d82f4-b6d9-4285-908b-55fd27e0e4c3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qnvwg" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.252008 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b0af5887-2244-4dfb-8e2a-a66ac6bf6762-console-oauth-config\") pod \"console-f9d7485db-5cfzh\" (UID: \"b0af5887-2244-4dfb-8e2a-a66ac6bf6762\") " pod="openshift-console/console-f9d7485db-5cfzh" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.252377 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/2c0351ce-4eed-4d0e-b9a4-eb2746bf396a-installation-pull-secrets\") pod \"image-registry-697d97f7c8-w5xfl\" (UID: \"2c0351ce-4eed-4d0e-b9a4-eb2746bf396a\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5xfl" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.252920 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0ee94e27-af7c-49c7-a57f-8f1a18ba53e3-etcd-client\") pod \"apiserver-7bbb656c7d-mg7s9\" (UID: \"0ee94e27-af7c-49c7-a57f-8f1a18ba53e3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mg7s9" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.302686 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnxnn\" (UniqueName: \"kubernetes.io/projected/2c0351ce-4eed-4d0e-b9a4-eb2746bf396a-kube-api-access-nnxnn\") pod \"image-registry-697d97f7c8-w5xfl\" (UID: \"2c0351ce-4eed-4d0e-b9a4-eb2746bf396a\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5xfl" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.312763 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mt28\" (UniqueName: \"kubernetes.io/projected/0ee94e27-af7c-49c7-a57f-8f1a18ba53e3-kube-api-access-8mt28\") pod \"apiserver-7bbb656c7d-mg7s9\" (UID: \"0ee94e27-af7c-49c7-a57f-8f1a18ba53e3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mg7s9" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.331787 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwhnv\" (UniqueName: \"kubernetes.io/projected/e33fd1d2-081b-4e68-ab37-623406daeaeb-kube-api-access-qwhnv\") pod \"downloads-7954f5f757-l8tzk\" (UID: \"e33fd1d2-081b-4e68-ab37-623406daeaeb\") " 
pod="openshift-console/downloads-7954f5f757-l8tzk" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.338140 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4vqw6"] Oct 13 17:26:36 crc kubenswrapper[4720]: W1013 17:26:36.343182 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a4c2111_a5d9_4567_91c6_59733a5d711f.slice/crio-1c7f4d732793265c6ca916638ae8598830910e0d217d02d82cbea55a89d2c693 WatchSource:0}: Error finding container 1c7f4d732793265c6ca916638ae8598830910e0d217d02d82cbea55a89d2c693: Status 404 returned error can't find the container with id 1c7f4d732793265c6ca916638ae8598830910e0d217d02d82cbea55a89d2c693 Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.348030 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.348369 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f94b07c7-b111-450a-bd2a-0977944282a9-proxy-tls\") pod \"machine-config-operator-74547568cd-b5ljk\" (UID: \"f94b07c7-b111-450a-bd2a-0977944282a9\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b5ljk" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.348396 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdgs5\" (UniqueName: \"kubernetes.io/projected/82e3c60a-715e-4dc6-8137-c79aa854de0a-kube-api-access-kdgs5\") pod \"service-ca-operator-777779d784-9snlb\" (UID: \"82e3c60a-715e-4dc6-8137-c79aa854de0a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-9snlb" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.348415 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/815faec7-4eeb-473a-b9ca-3ae41842aa02-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dl79l\" (UID: \"815faec7-4eeb-473a-b9ca-3ae41842aa02\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dl79l" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.348437 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vb7jn\" (UniqueName: \"kubernetes.io/projected/b641d3d3-3e84-4b58-ba79-e1f941260618-kube-api-access-vb7jn\") pod \"multus-admission-controller-857f4d67dd-hjhns\" (UID: \"b641d3d3-3e84-4b58-ba79-e1f941260618\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-hjhns" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.348460 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/b1bdb74a-39cf-48fb-a76c-a2b362136fad-node-bootstrap-token\") pod \"machine-config-server-scfnf\" (UID: \"b1bdb74a-39cf-48fb-a76c-a2b362136fad\") " pod="openshift-machine-config-operator/machine-config-server-scfnf" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.348483 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/a13b5004-f884-4558-b38b-b3c8028a73d5-service-ca-bundle\") pod \"router-default-5444994796-vttl9\" (UID: \"a13b5004-f884-4558-b38b-b3c8028a73d5\") " pod="openshift-ingress/router-default-5444994796-vttl9" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.348501 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/cabe6b29-bccd-4995-ab54-b6cabc86f7bf-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-mjzm6\" (UID: \"cabe6b29-bccd-4995-ab54-b6cabc86f7bf\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mjzm6" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.348539 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/90197382-1e9f-4823-a4e7-92fafeb46d66-metrics-tls\") pod \"dns-operator-744455d44c-jvfrr\" (UID: \"90197382-1e9f-4823-a4e7-92fafeb46d66\") " pod="openshift-dns-operator/dns-operator-744455d44c-jvfrr" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.348557 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bvc8\" (UniqueName: \"kubernetes.io/projected/94c6e5cb-a6e1-453b-b6c2-0a4f397ced92-kube-api-access-6bvc8\") pod \"kube-storage-version-migrator-operator-b67b599dd-sxcr7\" (UID: \"94c6e5cb-a6e1-453b-b6c2-0a4f397ced92\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sxcr7" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.348581 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f94b07c7-b111-450a-bd2a-0977944282a9-auth-proxy-config\") pod \"machine-config-operator-74547568cd-b5ljk\" (UID: \"f94b07c7-b111-450a-bd2a-0977944282a9\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b5ljk" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.348606 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94c6e5cb-a6e1-453b-b6c2-0a4f397ced92-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-sxcr7\" (UID: \"94c6e5cb-a6e1-453b-b6c2-0a4f397ced92\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sxcr7" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.348624 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0fb88719-7082-419e-8eb0-7eb7d3bf9719-apiservice-cert\") pod \"packageserver-d55dfcdfc-9klwx\" (UID: \"0fb88719-7082-419e-8eb0-7eb7d3bf9719\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9klwx" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.348652 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfg9v\" (UniqueName: \"kubernetes.io/projected/fcbfa827-1469-4afb-b411-c6015d3e3195-kube-api-access-zfg9v\") pod \"ingress-canary-9w7qp\" (UID: \"fcbfa827-1469-4afb-b411-c6015d3e3195\") " pod="openshift-ingress-canary/ingress-canary-9w7qp" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.348675 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/5b9cfa7f-e80a-42b8-b6f0-239165447812-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-xpdm2\" (UID: \"5b9cfa7f-e80a-42b8-b6f0-239165447812\") " pod="openshift-marketplace/marketplace-operator-79b997595-xpdm2" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.348696 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/61e69b4f-7bcc-4ce4-9bd2-3200c0d0f883-signing-cabundle\") pod \"service-ca-9c57cc56f-5tn27\" (UID: \"61e69b4f-7bcc-4ce4-9bd2-3200c0d0f883\") " pod="openshift-service-ca/service-ca-9c57cc56f-5tn27" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.348714 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/44e9378e-be31-4703-b23c-f7ffdbc89a2b-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-jw72g\" (UID: \"44e9378e-be31-4703-b23c-f7ffdbc89a2b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jw72g" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.348732 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a01c34e-55fd-4492-8fbe-c15dbb539271-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-mfblh\" (UID: \"3a01c34e-55fd-4492-8fbe-c15dbb539271\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mfblh" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.348758 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xq9vw\" (UniqueName: \"kubernetes.io/projected/e2f98220-0477-4a6d-8cf3-fa055d01bc3a-kube-api-access-xq9vw\") pod \"csi-hostpathplugin-9tdzd\" (UID: \"e2f98220-0477-4a6d-8cf3-fa055d01bc3a\") " pod="hostpath-provisioner/csi-hostpathplugin-9tdzd" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.348779 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/913f12c1-e62e-474e-b44f-868c1de3309b-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-5kh7f\" (UID: \"913f12c1-e62e-474e-b44f-868c1de3309b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5kh7f" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.348797 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxkbr\" (UniqueName: \"kubernetes.io/projected/5acdfefb-0861-4196-a2f9-59d1784025a8-kube-api-access-zxkbr\") pod \"olm-operator-6b444d44fb-56p82\" (UID: \"5acdfefb-0861-4196-a2f9-59d1784025a8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-56p82" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.348825 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-977kx\" (UniqueName: \"kubernetes.io/projected/cabe6b29-bccd-4995-ab54-b6cabc86f7bf-kube-api-access-977kx\") pod \"control-plane-machine-set-operator-78cbb6b69f-mjzm6\" (UID: \"cabe6b29-bccd-4995-ab54-b6cabc86f7bf\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mjzm6" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.348842 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fcbfa827-1469-4afb-b411-c6015d3e3195-cert\") pod 
\"ingress-canary-9w7qp\" (UID: \"fcbfa827-1469-4afb-b411-c6015d3e3195\") " pod="openshift-ingress-canary/ingress-canary-9w7qp" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.348859 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d0e58b14-51d4-4b94-ba16-1058cde1dda1-config-volume\") pod \"dns-default-pmrm5\" (UID: \"d0e58b14-51d4-4b94-ba16-1058cde1dda1\") " pod="openshift-dns/dns-default-pmrm5" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.348876 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9bec609a-6eb7-479a-ae81-dd2d7acc1742-config\") pod \"kube-controller-manager-operator-78b949d7b-4x4s2\" (UID: \"9bec609a-6eb7-479a-ae81-dd2d7acc1742\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4x4s2" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.348915 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/a13b5004-f884-4558-b38b-b3c8028a73d5-default-certificate\") pod \"router-default-5444994796-vttl9\" (UID: \"a13b5004-f884-4558-b38b-b3c8028a73d5\") " pod="openshift-ingress/router-default-5444994796-vttl9" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.348932 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/5acdfefb-0861-4196-a2f9-59d1784025a8-profile-collector-cert\") pod \"olm-operator-6b444d44fb-56p82\" (UID: \"5acdfefb-0861-4196-a2f9-59d1784025a8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-56p82" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.348951 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5s65\" (UniqueName: \"kubernetes.io/projected/0a6d69da-3074-4b30-898b-4bb2eea1fb75-kube-api-access-d5s65\") pod \"collect-profiles-29339595-f78vr\" (UID: \"0a6d69da-3074-4b30-898b-4bb2eea1fb75\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339595-f78vr" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.348970 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kszbh\" (UniqueName: \"kubernetes.io/projected/a13b5004-f884-4558-b38b-b3c8028a73d5-kube-api-access-kszbh\") pod \"router-default-5444994796-vttl9\" (UID: \"a13b5004-f884-4558-b38b-b3c8028a73d5\") " pod="openshift-ingress/router-default-5444994796-vttl9" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.348988 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98nln\" (UniqueName: \"kubernetes.io/projected/599353db-9adf-4393-9920-4de023b156c4-kube-api-access-98nln\") pod \"migrator-59844c95c7-vm4rv\" (UID: \"599353db-9adf-4393-9920-4de023b156c4\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vm4rv" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.349013 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e2f98220-0477-4a6d-8cf3-fa055d01bc3a-socket-dir\") pod \"csi-hostpathplugin-9tdzd\" (UID: \"e2f98220-0477-4a6d-8cf3-fa055d01bc3a\") " pod="hostpath-provisioner/csi-hostpathplugin-9tdzd" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.349029 4720 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f94b07c7-b111-450a-bd2a-0977944282a9-images\") pod \"machine-config-operator-74547568cd-b5ljk\" (UID: \"f94b07c7-b111-450a-bd2a-0977944282a9\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b5ljk" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.349046 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztjds\" (UniqueName: \"kubernetes.io/projected/f94b07c7-b111-450a-bd2a-0977944282a9-kube-api-access-ztjds\") pod \"machine-config-operator-74547568cd-b5ljk\" (UID: \"f94b07c7-b111-450a-bd2a-0977944282a9\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b5ljk" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.349077 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9bec609a-6eb7-479a-ae81-dd2d7acc1742-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-4x4s2\" (UID: \"9bec609a-6eb7-479a-ae81-dd2d7acc1742\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4x4s2" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.349101 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0a6d69da-3074-4b30-898b-4bb2eea1fb75-config-volume\") pod \"collect-profiles-29339595-f78vr\" (UID: \"0a6d69da-3074-4b30-898b-4bb2eea1fb75\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339595-f78vr" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.349119 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a13b5004-f884-4558-b38b-b3c8028a73d5-metrics-certs\") pod \"router-default-5444994796-vttl9\" (UID: \"a13b5004-f884-4558-b38b-b3c8028a73d5\") " pod="openshift-ingress/router-default-5444994796-vttl9" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.349136 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82e3c60a-715e-4dc6-8137-c79aa854de0a-config\") pod \"service-ca-operator-777779d784-9snlb\" (UID: \"82e3c60a-715e-4dc6-8137-c79aa854de0a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-9snlb" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.349153 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e2f98220-0477-4a6d-8cf3-fa055d01bc3a-registration-dir\") pod \"csi-hostpathplugin-9tdzd\" (UID: \"e2f98220-0477-4a6d-8cf3-fa055d01bc3a\") " pod="hostpath-provisioner/csi-hostpathplugin-9tdzd" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.349172 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/913f12c1-e62e-474e-b44f-868c1de3309b-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-5kh7f\" (UID: \"913f12c1-e62e-474e-b44f-868c1de3309b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5kh7f" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.349291 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/82e3c60a-715e-4dc6-8137-c79aa854de0a-serving-cert\") pod \"service-ca-operator-777779d784-9snlb\" (UID: \"82e3c60a-715e-4dc6-8137-c79aa854de0a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-9snlb" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.349339 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5b9cfa7f-e80a-42b8-b6f0-239165447812-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-xpdm2\" (UID: \"5b9cfa7f-e80a-42b8-b6f0-239165447812\") " pod="openshift-marketplace/marketplace-operator-79b997595-xpdm2" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.349358 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/815faec7-4eeb-473a-b9ca-3ae41842aa02-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dl79l\" (UID: \"815faec7-4eeb-473a-b9ca-3ae41842aa02\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dl79l" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.349376 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/3f98514a-d643-4723-8d83-cda6c55d4874-profile-collector-cert\") pod \"catalog-operator-68c6474976-7xzcq\" (UID: \"3f98514a-d643-4723-8d83-cda6c55d4874\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7xzcq" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.349394 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3f98514a-d643-4723-8d83-cda6c55d4874-srv-cert\") pod \"catalog-operator-68c6474976-7xzcq\" (UID: \"3f98514a-d643-4723-8d83-cda6c55d4874\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7xzcq" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.349412 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/e2f98220-0477-4a6d-8cf3-fa055d01bc3a-csi-data-dir\") pod \"csi-hostpathplugin-9tdzd\" (UID: \"e2f98220-0477-4a6d-8cf3-fa055d01bc3a\") " pod="hostpath-provisioner/csi-hostpathplugin-9tdzd" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.349428 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a01c34e-55fd-4492-8fbe-c15dbb539271-config\") pod \"kube-apiserver-operator-766d6c64bb-mfblh\" (UID: \"3a01c34e-55fd-4492-8fbe-c15dbb539271\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mfblh" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.349445 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9bec609a-6eb7-479a-ae81-dd2d7acc1742-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-4x4s2\" (UID: \"9bec609a-6eb7-479a-ae81-dd2d7acc1742\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4x4s2" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.349463 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b641d3d3-3e84-4b58-ba79-e1f941260618-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-hjhns\" (UID: 
\"b641d3d3-3e84-4b58-ba79-e1f941260618\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-hjhns" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.349483 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/93c081dc-50a9-43b9-874c-0b7e46ebbbc9-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-vhgw6\" (UID: \"93c081dc-50a9-43b9-874c-0b7e46ebbbc9\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vhgw6" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.349501 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scskp\" (UniqueName: \"kubernetes.io/projected/61e69b4f-7bcc-4ce4-9bd2-3200c0d0f883-kube-api-access-scskp\") pod \"service-ca-9c57cc56f-5tn27\" (UID: \"61e69b4f-7bcc-4ce4-9bd2-3200c0d0f883\") " pod="openshift-service-ca/service-ca-9c57cc56f-5tn27" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.349520 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrsq8\" (UniqueName: \"kubernetes.io/projected/913f12c1-e62e-474e-b44f-868c1de3309b-kube-api-access-lrsq8\") pod \"openshift-controller-manager-operator-756b6f6bc6-5kh7f\" (UID: \"913f12c1-e62e-474e-b44f-868c1de3309b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5kh7f" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.349538 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/e2f98220-0477-4a6d-8cf3-fa055d01bc3a-plugins-dir\") pod \"csi-hostpathplugin-9tdzd\" (UID: \"e2f98220-0477-4a6d-8cf3-fa055d01bc3a\") " pod="hostpath-provisioner/csi-hostpathplugin-9tdzd" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.349558 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clc22\" (UniqueName: \"kubernetes.io/projected/3f98514a-d643-4723-8d83-cda6c55d4874-kube-api-access-clc22\") pod \"catalog-operator-68c6474976-7xzcq\" (UID: \"3f98514a-d643-4723-8d83-cda6c55d4874\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7xzcq" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.349578 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/b1bdb74a-39cf-48fb-a76c-a2b362136fad-certs\") pod \"machine-config-server-scfnf\" (UID: \"b1bdb74a-39cf-48fb-a76c-a2b362136fad\") " pod="openshift-machine-config-operator/machine-config-server-scfnf" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.349596 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d0e58b14-51d4-4b94-ba16-1058cde1dda1-metrics-tls\") pod \"dns-default-pmrm5\" (UID: \"d0e58b14-51d4-4b94-ba16-1058cde1dda1\") " pod="openshift-dns/dns-default-pmrm5" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.349624 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vxnb\" (UniqueName: \"kubernetes.io/projected/93c081dc-50a9-43b9-874c-0b7e46ebbbc9-kube-api-access-9vxnb\") pod \"package-server-manager-789f6589d5-vhgw6\" (UID: \"93c081dc-50a9-43b9-874c-0b7e46ebbbc9\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vhgw6" Oct 13 17:26:36 crc kubenswrapper[4720]: 
I1013 17:26:36.349654 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/61e69b4f-7bcc-4ce4-9bd2-3200c0d0f883-signing-key\") pod \"service-ca-9c57cc56f-5tn27\" (UID: \"61e69b4f-7bcc-4ce4-9bd2-3200c0d0f883\") " pod="openshift-service-ca/service-ca-9c57cc56f-5tn27" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.349681 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5acdfefb-0861-4196-a2f9-59d1784025a8-srv-cert\") pod \"olm-operator-6b444d44fb-56p82\" (UID: \"5acdfefb-0861-4196-a2f9-59d1784025a8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-56p82" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.349698 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/0fb88719-7082-419e-8eb0-7eb7d3bf9719-tmpfs\") pod \"packageserver-d55dfcdfc-9klwx\" (UID: \"0fb88719-7082-419e-8eb0-7eb7d3bf9719\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9klwx" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.349718 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0fb88719-7082-419e-8eb0-7eb7d3bf9719-webhook-cert\") pod \"packageserver-d55dfcdfc-9klwx\" (UID: \"0fb88719-7082-419e-8eb0-7eb7d3bf9719\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9klwx" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.349736 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3a01c34e-55fd-4492-8fbe-c15dbb539271-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-mfblh\" (UID: \"3a01c34e-55fd-4492-8fbe-c15dbb539271\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mfblh" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.349753 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/815faec7-4eeb-473a-b9ca-3ae41842aa02-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dl79l\" (UID: \"815faec7-4eeb-473a-b9ca-3ae41842aa02\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dl79l" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.349771 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psph8\" (UniqueName: \"kubernetes.io/projected/0fb88719-7082-419e-8eb0-7eb7d3bf9719-kube-api-access-psph8\") pod \"packageserver-d55dfcdfc-9klwx\" (UID: \"0fb88719-7082-419e-8eb0-7eb7d3bf9719\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9klwx" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.349799 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/e2f98220-0477-4a6d-8cf3-fa055d01bc3a-mountpoint-dir\") pod \"csi-hostpathplugin-9tdzd\" (UID: \"e2f98220-0477-4a6d-8cf3-fa055d01bc3a\") " pod="hostpath-provisioner/csi-hostpathplugin-9tdzd" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.349816 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkjqg\" (UniqueName: \"kubernetes.io/projected/90197382-1e9f-4823-a4e7-92fafeb46d66-kube-api-access-lkjqg\") pod 
\"dns-operator-744455d44c-jvfrr\" (UID: \"90197382-1e9f-4823-a4e7-92fafeb46d66\") " pod="openshift-dns-operator/dns-operator-744455d44c-jvfrr" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.349834 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/a13b5004-f884-4558-b38b-b3c8028a73d5-stats-auth\") pod \"router-default-5444994796-vttl9\" (UID: \"a13b5004-f884-4558-b38b-b3c8028a73d5\") " pod="openshift-ingress/router-default-5444994796-vttl9" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.349851 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgtp8\" (UniqueName: \"kubernetes.io/projected/44e9378e-be31-4703-b23c-f7ffdbc89a2b-kube-api-access-zgtp8\") pod \"machine-config-controller-84d6567774-jw72g\" (UID: \"44e9378e-be31-4703-b23c-f7ffdbc89a2b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jw72g" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.349870 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/94c6e5cb-a6e1-453b-b6c2-0a4f397ced92-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-sxcr7\" (UID: \"94c6e5cb-a6e1-453b-b6c2-0a4f397ced92\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sxcr7" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.349896 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsgtw\" (UniqueName: \"kubernetes.io/projected/5b9cfa7f-e80a-42b8-b6f0-239165447812-kube-api-access-wsgtw\") pod \"marketplace-operator-79b997595-xpdm2\" (UID: \"5b9cfa7f-e80a-42b8-b6f0-239165447812\") " pod="openshift-marketplace/marketplace-operator-79b997595-xpdm2" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.349912 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d24nv\" (UniqueName: \"kubernetes.io/projected/d0e58b14-51d4-4b94-ba16-1058cde1dda1-kube-api-access-d24nv\") pod \"dns-default-pmrm5\" (UID: \"d0e58b14-51d4-4b94-ba16-1058cde1dda1\") " pod="openshift-dns/dns-default-pmrm5" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.349941 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/44e9378e-be31-4703-b23c-f7ffdbc89a2b-proxy-tls\") pod \"machine-config-controller-84d6567774-jw72g\" (UID: \"44e9378e-be31-4703-b23c-f7ffdbc89a2b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jw72g" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.349960 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgqfm\" (UniqueName: \"kubernetes.io/projected/b1bdb74a-39cf-48fb-a76c-a2b362136fad-kube-api-access-vgqfm\") pod \"machine-config-server-scfnf\" (UID: \"b1bdb74a-39cf-48fb-a76c-a2b362136fad\") " pod="openshift-machine-config-operator/machine-config-server-scfnf" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.349983 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0a6d69da-3074-4b30-898b-4bb2eea1fb75-secret-volume\") pod \"collect-profiles-29339595-f78vr\" (UID: \"0a6d69da-3074-4b30-898b-4bb2eea1fb75\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339595-f78vr" Oct 
13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.350657 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/e2f98220-0477-4a6d-8cf3-fa055d01bc3a-csi-data-dir\") pod \"csi-hostpathplugin-9tdzd\" (UID: \"e2f98220-0477-4a6d-8cf3-fa055d01bc3a\") " pod="hostpath-provisioner/csi-hostpathplugin-9tdzd" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.351383 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82e3c60a-715e-4dc6-8137-c79aa854de0a-config\") pod \"service-ca-operator-777779d784-9snlb\" (UID: \"82e3c60a-715e-4dc6-8137-c79aa854de0a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-9snlb" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.351736 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e2f98220-0477-4a6d-8cf3-fa055d01bc3a-registration-dir\") pod \"csi-hostpathplugin-9tdzd\" (UID: \"e2f98220-0477-4a6d-8cf3-fa055d01bc3a\") " pod="hostpath-provisioner/csi-hostpathplugin-9tdzd" Oct 13 17:26:36 crc kubenswrapper[4720]: E1013 17:26:36.352624 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 17:26:36.852592774 +0000 UTC m=+142.309842906 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.352871 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/913f12c1-e62e-474e-b44f-868c1de3309b-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-5kh7f\" (UID: \"913f12c1-e62e-474e-b44f-868c1de3309b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5kh7f" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.353699 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/e2f98220-0477-4a6d-8cf3-fa055d01bc3a-mountpoint-dir\") pod \"csi-hostpathplugin-9tdzd\" (UID: \"e2f98220-0477-4a6d-8cf3-fa055d01bc3a\") " pod="hostpath-provisioner/csi-hostpathplugin-9tdzd" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.354003 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e2f98220-0477-4a6d-8cf3-fa055d01bc3a-socket-dir\") pod \"csi-hostpathplugin-9tdzd\" (UID: \"e2f98220-0477-4a6d-8cf3-fa055d01bc3a\") " pod="hostpath-provisioner/csi-hostpathplugin-9tdzd" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.354931 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9bec609a-6eb7-479a-ae81-dd2d7acc1742-config\") pod \"kube-controller-manager-operator-78b949d7b-4x4s2\" (UID: \"9bec609a-6eb7-479a-ae81-dd2d7acc1742\") 
" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4x4s2" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.355066 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d0e58b14-51d4-4b94-ba16-1058cde1dda1-config-volume\") pod \"dns-default-pmrm5\" (UID: \"d0e58b14-51d4-4b94-ba16-1058cde1dda1\") " pod="openshift-dns/dns-default-pmrm5" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.355854 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/0fb88719-7082-419e-8eb0-7eb7d3bf9719-tmpfs\") pod \"packageserver-d55dfcdfc-9klwx\" (UID: \"0fb88719-7082-419e-8eb0-7eb7d3bf9719\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9klwx" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.356085 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f94b07c7-b111-450a-bd2a-0977944282a9-images\") pod \"machine-config-operator-74547568cd-b5ljk\" (UID: \"f94b07c7-b111-450a-bd2a-0977944282a9\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b5ljk" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.356821 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0a6d69da-3074-4b30-898b-4bb2eea1fb75-secret-volume\") pod \"collect-profiles-29339595-f78vr\" (UID: \"0a6d69da-3074-4b30-898b-4bb2eea1fb75\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339595-f78vr" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.357015 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d0e58b14-51d4-4b94-ba16-1058cde1dda1-metrics-tls\") pod \"dns-default-pmrm5\" (UID: \"d0e58b14-51d4-4b94-ba16-1058cde1dda1\") " pod="openshift-dns/dns-default-pmrm5" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.357507 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a13b5004-f884-4558-b38b-b3c8028a73d5-service-ca-bundle\") pod \"router-default-5444994796-vttl9\" (UID: \"a13b5004-f884-4558-b38b-b3c8028a73d5\") " pod="openshift-ingress/router-default-5444994796-vttl9" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.357772 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/b1bdb74a-39cf-48fb-a76c-a2b362136fad-certs\") pod \"machine-config-server-scfnf\" (UID: \"b1bdb74a-39cf-48fb-a76c-a2b362136fad\") " pod="openshift-machine-config-operator/machine-config-server-scfnf" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.357826 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0a6d69da-3074-4b30-898b-4bb2eea1fb75-config-volume\") pod \"collect-profiles-29339595-f78vr\" (UID: \"0a6d69da-3074-4b30-898b-4bb2eea1fb75\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339595-f78vr" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.357966 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/815faec7-4eeb-473a-b9ca-3ae41842aa02-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dl79l\" (UID: 
\"815faec7-4eeb-473a-b9ca-3ae41842aa02\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dl79l" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.358158 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a01c34e-55fd-4492-8fbe-c15dbb539271-config\") pod \"kube-apiserver-operator-766d6c64bb-mfblh\" (UID: \"3a01c34e-55fd-4492-8fbe-c15dbb539271\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mfblh" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.359015 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xpzx\" (UniqueName: \"kubernetes.io/projected/4e04c241-5f40-47d5-8c67-e8092a483089-kube-api-access-9xpzx\") pod \"controller-manager-879f6c89f-lwp58\" (UID: \"4e04c241-5f40-47d5-8c67-e8092a483089\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lwp58" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.359315 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/44e9378e-be31-4703-b23c-f7ffdbc89a2b-proxy-tls\") pod \"machine-config-controller-84d6567774-jw72g\" (UID: \"44e9378e-be31-4703-b23c-f7ffdbc89a2b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jw72g" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.359415 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/a13b5004-f884-4558-b38b-b3c8028a73d5-default-certificate\") pod \"router-default-5444994796-vttl9\" (UID: \"a13b5004-f884-4558-b38b-b3c8028a73d5\") " pod="openshift-ingress/router-default-5444994796-vttl9" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.355924 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f94b07c7-b111-450a-bd2a-0977944282a9-auth-proxy-config\") pod \"machine-config-operator-74547568cd-b5ljk\" (UID: \"f94b07c7-b111-450a-bd2a-0977944282a9\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b5ljk" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.359595 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/b1bdb74a-39cf-48fb-a76c-a2b362136fad-node-bootstrap-token\") pod \"machine-config-server-scfnf\" (UID: \"b1bdb74a-39cf-48fb-a76c-a2b362136fad\") " pod="openshift-machine-config-operator/machine-config-server-scfnf" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.359616 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0fb88719-7082-419e-8eb0-7eb7d3bf9719-apiservice-cert\") pod \"packageserver-d55dfcdfc-9klwx\" (UID: \"0fb88719-7082-419e-8eb0-7eb7d3bf9719\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9klwx" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.359859 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9bec609a-6eb7-479a-ae81-dd2d7acc1742-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-4x4s2\" (UID: \"9bec609a-6eb7-479a-ae81-dd2d7acc1742\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4x4s2" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 
17:26:36.360044 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/90197382-1e9f-4823-a4e7-92fafeb46d66-metrics-tls\") pod \"dns-operator-744455d44c-jvfrr\" (UID: \"90197382-1e9f-4823-a4e7-92fafeb46d66\") " pod="openshift-dns-operator/dns-operator-744455d44c-jvfrr" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.360503 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5acdfefb-0861-4196-a2f9-59d1784025a8-srv-cert\") pod \"olm-operator-6b444d44fb-56p82\" (UID: \"5acdfefb-0861-4196-a2f9-59d1784025a8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-56p82" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.360552 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f94b07c7-b111-450a-bd2a-0977944282a9-proxy-tls\") pod \"machine-config-operator-74547568cd-b5ljk\" (UID: \"f94b07c7-b111-450a-bd2a-0977944282a9\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b5ljk" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.360818 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5b9cfa7f-e80a-42b8-b6f0-239165447812-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-xpdm2\" (UID: \"5b9cfa7f-e80a-42b8-b6f0-239165447812\") " pod="openshift-marketplace/marketplace-operator-79b997595-xpdm2" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.360899 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/e2f98220-0477-4a6d-8cf3-fa055d01bc3a-plugins-dir\") pod \"csi-hostpathplugin-9tdzd\" (UID: \"e2f98220-0477-4a6d-8cf3-fa055d01bc3a\") " pod="hostpath-provisioner/csi-hostpathplugin-9tdzd" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.360926 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/61e69b4f-7bcc-4ce4-9bd2-3200c0d0f883-signing-key\") pod \"service-ca-9c57cc56f-5tn27\" (UID: \"61e69b4f-7bcc-4ce4-9bd2-3200c0d0f883\") " pod="openshift-service-ca/service-ca-9c57cc56f-5tn27" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.361568 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/44e9378e-be31-4703-b23c-f7ffdbc89a2b-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-jw72g\" (UID: \"44e9378e-be31-4703-b23c-f7ffdbc89a2b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jw72g" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.361622 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/3f98514a-d643-4723-8d83-cda6c55d4874-profile-collector-cert\") pod \"catalog-operator-68c6474976-7xzcq\" (UID: \"3f98514a-d643-4723-8d83-cda6c55d4874\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7xzcq" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.362343 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a01c34e-55fd-4492-8fbe-c15dbb539271-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-mfblh\" (UID: \"3a01c34e-55fd-4492-8fbe-c15dbb539271\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mfblh" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.362392 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fcbfa827-1469-4afb-b411-c6015d3e3195-cert\") pod \"ingress-canary-9w7qp\" (UID: \"fcbfa827-1469-4afb-b411-c6015d3e3195\") " pod="openshift-ingress-canary/ingress-canary-9w7qp" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.363533 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a13b5004-f884-4558-b38b-b3c8028a73d5-metrics-certs\") pod \"router-default-5444994796-vttl9\" (UID: \"a13b5004-f884-4558-b38b-b3c8028a73d5\") " pod="openshift-ingress/router-default-5444994796-vttl9" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.364782 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/5acdfefb-0861-4196-a2f9-59d1784025a8-profile-collector-cert\") pod \"olm-operator-6b444d44fb-56p82\" (UID: \"5acdfefb-0861-4196-a2f9-59d1784025a8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-56p82" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.365007 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/913f12c1-e62e-474e-b44f-868c1de3309b-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-5kh7f\" (UID: \"913f12c1-e62e-474e-b44f-868c1de3309b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5kh7f" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.365353 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94c6e5cb-a6e1-453b-b6c2-0a4f397ced92-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-sxcr7\" (UID: \"94c6e5cb-a6e1-453b-b6c2-0a4f397ced92\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sxcr7" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.367032 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/61e69b4f-7bcc-4ce4-9bd2-3200c0d0f883-signing-cabundle\") pod \"service-ca-9c57cc56f-5tn27\" (UID: \"61e69b4f-7bcc-4ce4-9bd2-3200c0d0f883\") " pod="openshift-service-ca/service-ca-9c57cc56f-5tn27" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.367316 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3f98514a-d643-4723-8d83-cda6c55d4874-srv-cert\") pod \"catalog-operator-68c6474976-7xzcq\" (UID: \"3f98514a-d643-4723-8d83-cda6c55d4874\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7xzcq" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.367491 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b641d3d3-3e84-4b58-ba79-e1f941260618-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-hjhns\" (UID: \"b641d3d3-3e84-4b58-ba79-e1f941260618\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-hjhns" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.367622 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/82e3c60a-715e-4dc6-8137-c79aa854de0a-serving-cert\") pod \"service-ca-operator-777779d784-9snlb\" (UID: \"82e3c60a-715e-4dc6-8137-c79aa854de0a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-9snlb" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.367773 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/a13b5004-f884-4558-b38b-b3c8028a73d5-stats-auth\") pod \"router-default-5444994796-vttl9\" (UID: \"a13b5004-f884-4558-b38b-b3c8028a73d5\") " pod="openshift-ingress/router-default-5444994796-vttl9" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.367930 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/cabe6b29-bccd-4995-ab54-b6cabc86f7bf-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-mjzm6\" (UID: \"cabe6b29-bccd-4995-ab54-b6cabc86f7bf\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mjzm6" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.368258 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0fb88719-7082-419e-8eb0-7eb7d3bf9719-webhook-cert\") pod \"packageserver-d55dfcdfc-9klwx\" (UID: \"0fb88719-7082-419e-8eb0-7eb7d3bf9719\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9klwx" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.370163 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/815faec7-4eeb-473a-b9ca-3ae41842aa02-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dl79l\" (UID: \"815faec7-4eeb-473a-b9ca-3ae41842aa02\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dl79l" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.372131 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ps7xq\" (UniqueName: \"kubernetes.io/projected/08ee097f-0739-4c23-8ed9-696ded1864f2-kube-api-access-ps7xq\") pod \"authentication-operator-69f744f599-8c7d6\" (UID: \"08ee097f-0739-4c23-8ed9-696ded1864f2\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8c7d6" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.373816 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/94c6e5cb-a6e1-453b-b6c2-0a4f397ced92-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-sxcr7\" (UID: \"94c6e5cb-a6e1-453b-b6c2-0a4f397ced92\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sxcr7" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.374786 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/93c081dc-50a9-43b9-874c-0b7e46ebbbc9-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-vhgw6\" (UID: \"93c081dc-50a9-43b9-874c-0b7e46ebbbc9\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vhgw6" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.379673 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/5b9cfa7f-e80a-42b8-b6f0-239165447812-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-xpdm2\" (UID: \"5b9cfa7f-e80a-42b8-b6f0-239165447812\") " pod="openshift-marketplace/marketplace-operator-79b997595-xpdm2" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.382311 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-xbb4t"] Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.390546 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-lwp58" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.394386 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85w74\" (UniqueName: \"kubernetes.io/projected/f8b9591c-6d8d-4942-aa6c-f16b6b5f4992-kube-api-access-85w74\") pod \"ingress-operator-5b745b69d9-7vb4h\" (UID: \"f8b9591c-6d8d-4942-aa6c-f16b6b5f4992\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7vb4h" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.413437 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mg7s9" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.440996 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-8c7d6" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.441063 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-569v4\" (UniqueName: \"kubernetes.io/projected/ff6247fa-c7b7-4a78-9c52-48de3c488a6a-kube-api-access-569v4\") pod \"console-operator-58897d9998-jrc4l\" (UID: \"ff6247fa-c7b7-4a78-9c52-48de3c488a6a\") " pod="openshift-console-operator/console-operator-58897d9998-jrc4l" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.453428 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5xfl\" (UID: \"2c0351ce-4eed-4d0e-b9a4-eb2746bf396a\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5xfl" Oct 13 17:26:36 crc kubenswrapper[4720]: E1013 17:26:36.454036 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 17:26:36.954019373 +0000 UTC m=+142.411269505 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5xfl" (UID: "2c0351ce-4eed-4d0e-b9a4-eb2746bf396a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.454753 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-jrc4l" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.466058 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qj42n\" (UniqueName: \"kubernetes.io/projected/e84d82f4-b6d9-4285-908b-55fd27e0e4c3-kube-api-access-qj42n\") pod \"etcd-operator-b45778765-qnvwg\" (UID: \"e84d82f4-b6d9-4285-908b-55fd27e0e4c3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qnvwg" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.472075 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/17cf8543-d064-4d3e-976f-2fbad57f2e56-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-jd5bf\" (UID: \"17cf8543-d064-4d3e-976f-2fbad57f2e56\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jd5bf" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.485458 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-972nv\" (UniqueName: \"kubernetes.io/projected/b0af5887-2244-4dfb-8e2a-a66ac6bf6762-kube-api-access-972nv\") pod \"console-f9d7485db-5cfzh\" (UID: \"b0af5887-2244-4dfb-8e2a-a66ac6bf6762\") " pod="openshift-console/console-f9d7485db-5cfzh" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.492053 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-l8tzk" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.500872 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8d7b\" (UniqueName: \"kubernetes.io/projected/b2df17f0-8c2c-4ca1-afe3-7b82dcb27712-kube-api-access-x8d7b\") pod \"openshift-config-operator-7777fb866f-pc8vw\" (UID: \"b2df17f0-8c2c-4ca1-afe3-7b82dcb27712\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-pc8vw" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.507081 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-9mm85"] Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.514559 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-dvc6z"] Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.516246 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25ljb\" (UniqueName: \"kubernetes.io/projected/7277d186-5645-445d-aea3-37215cf98836-kube-api-access-25ljb\") pod \"cluster-samples-operator-665b6dd947-fvlfw\" (UID: \"7277d186-5645-445d-aea3-37215cf98836\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fvlfw" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.534921 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f8b9591c-6d8d-4942-aa6c-f16b6b5f4992-bound-sa-token\") pod \"ingress-operator-5b745b69d9-7vb4h\" (UID: \"f8b9591c-6d8d-4942-aa6c-f16b6b5f4992\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7vb4h" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.536383 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-qnvwg" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.542635 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7vb4h" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.554952 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 17:26:36 crc kubenswrapper[4720]: E1013 17:26:36.556468 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 17:26:37.056448128 +0000 UTC m=+142.513698260 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.556499 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8qhp\" (UniqueName: \"kubernetes.io/projected/17cf8543-d064-4d3e-976f-2fbad57f2e56-kube-api-access-w8qhp\") pod \"cluster-image-registry-operator-dc59b4c8b-jd5bf\" (UID: \"17cf8543-d064-4d3e-976f-2fbad57f2e56\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jd5bf" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.577050 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2c0351ce-4eed-4d0e-b9a4-eb2746bf396a-bound-sa-token\") pod \"image-registry-697d97f7c8-w5xfl\" (UID: \"2c0351ce-4eed-4d0e-b9a4-eb2746bf396a\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5xfl" Oct 13 17:26:36 crc kubenswrapper[4720]: W1013 17:26:36.586611 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod43c40b45_9695_4d29_b627_c4ab23d1d6d0.slice/crio-92c4daa84f99036bb3be34a9e2cf9888c3b0f7e2b5e276c21d031a1322e471c0 WatchSource:0}: Error finding container 92c4daa84f99036bb3be34a9e2cf9888c3b0f7e2b5e276c21d031a1322e471c0: Status 404 returned error can't find the container with id 92c4daa84f99036bb3be34a9e2cf9888c3b0f7e2b5e276c21d031a1322e471c0 Oct 13 17:26:36 crc kubenswrapper[4720]: W1013 17:26:36.589383 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb5179459_d832_4419_96ce_44dd4f055e98.slice/crio-57d90ad53a9b92a3250ce3dced9722cc50815065a429ee1a039079c7aff76c29 WatchSource:0}: Error finding container 57d90ad53a9b92a3250ce3dced9722cc50815065a429ee1a039079c7aff76c29: Status 404 returned error can't find the container with id 57d90ad53a9b92a3250ce3dced9722cc50815065a429ee1a039079c7aff76c29 Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.611957 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psph8\" (UniqueName: \"kubernetes.io/projected/0fb88719-7082-419e-8eb0-7eb7d3bf9719-kube-api-access-psph8\") pod 
\"packageserver-d55dfcdfc-9klwx\" (UID: \"0fb88719-7082-419e-8eb0-7eb7d3bf9719\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9klwx" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.633206 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/815faec7-4eeb-473a-b9ca-3ae41842aa02-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dl79l\" (UID: \"815faec7-4eeb-473a-b9ca-3ae41842aa02\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dl79l" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.645108 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-mg7s9"] Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.650732 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdgs5\" (UniqueName: \"kubernetes.io/projected/82e3c60a-715e-4dc6-8137-c79aa854de0a-kube-api-access-kdgs5\") pod \"service-ca-operator-777779d784-9snlb\" (UID: \"82e3c60a-715e-4dc6-8137-c79aa854de0a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-9snlb" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.658614 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5xfl\" (UID: \"2c0351ce-4eed-4d0e-b9a4-eb2746bf396a\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5xfl" Oct 13 17:26:36 crc kubenswrapper[4720]: E1013 17:26:36.659213 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 17:26:37.159198551 +0000 UTC m=+142.616448683 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5xfl" (UID: "2c0351ce-4eed-4d0e-b9a4-eb2746bf396a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.675726 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vxnb\" (UniqueName: \"kubernetes.io/projected/93c081dc-50a9-43b9-874c-0b7e46ebbbc9-kube-api-access-9vxnb\") pod \"package-server-manager-789f6589d5-vhgw6\" (UID: \"93c081dc-50a9-43b9-874c-0b7e46ebbbc9\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vhgw6" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.696059 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9klwx" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.704806 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-8c7d6"] Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.704856 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-9snlb" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.713232 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xq9vw\" (UniqueName: \"kubernetes.io/projected/e2f98220-0477-4a6d-8cf3-fa055d01bc3a-kube-api-access-xq9vw\") pod \"csi-hostpathplugin-9tdzd\" (UID: \"e2f98220-0477-4a6d-8cf3-fa055d01bc3a\") " pod="hostpath-provisioner/csi-hostpathplugin-9tdzd" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.726693 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-5cfzh" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.735084 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5s65\" (UniqueName: \"kubernetes.io/projected/0a6d69da-3074-4b30-898b-4bb2eea1fb75-kube-api-access-d5s65\") pod \"collect-profiles-29339595-f78vr\" (UID: \"0a6d69da-3074-4b30-898b-4bb2eea1fb75\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339595-f78vr" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.735846 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-9tdzd" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.739064 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsgtw\" (UniqueName: \"kubernetes.io/projected/5b9cfa7f-e80a-42b8-b6f0-239165447812-kube-api-access-wsgtw\") pod \"marketplace-operator-79b997595-xpdm2\" (UID: \"5b9cfa7f-e80a-42b8-b6f0-239165447812\") " pod="openshift-marketplace/marketplace-operator-79b997595-xpdm2" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.756112 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kszbh\" (UniqueName: \"kubernetes.io/projected/a13b5004-f884-4558-b38b-b3c8028a73d5-kube-api-access-kszbh\") pod \"router-default-5444994796-vttl9\" (UID: \"a13b5004-f884-4558-b38b-b3c8028a73d5\") " pod="openshift-ingress/router-default-5444994796-vttl9" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.756117 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fvlfw" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.760097 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 17:26:36 crc kubenswrapper[4720]: E1013 17:26:36.760926 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 17:26:37.260908387 +0000 UTC m=+142.718158519 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.778969 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pc8vw" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.779276 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jd5bf" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.780982 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vb7jn\" (UniqueName: \"kubernetes.io/projected/b641d3d3-3e84-4b58-ba79-e1f941260618-kube-api-access-vb7jn\") pod \"multus-admission-controller-857f4d67dd-hjhns\" (UID: \"b641d3d3-3e84-4b58-ba79-e1f941260618\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-hjhns" Oct 13 17:26:36 crc kubenswrapper[4720]: W1013 17:26:36.794741 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod08ee097f_0739_4c23_8ed9_696ded1864f2.slice/crio-845799da83e0827c9c406ca94a2b5cc8a3ecb95bb0189a4e1735371935c55019 WatchSource:0}: Error finding container 845799da83e0827c9c406ca94a2b5cc8a3ecb95bb0189a4e1735371935c55019: Status 404 returned error can't find the container with id 845799da83e0827c9c406ca94a2b5cc8a3ecb95bb0189a4e1735371935c55019 Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.811658 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d24nv\" (UniqueName: \"kubernetes.io/projected/d0e58b14-51d4-4b94-ba16-1058cde1dda1-kube-api-access-d24nv\") pod \"dns-default-pmrm5\" (UID: \"d0e58b14-51d4-4b94-ba16-1058cde1dda1\") " pod="openshift-dns/dns-default-pmrm5" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.814411 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98nln\" (UniqueName: \"kubernetes.io/projected/599353db-9adf-4393-9920-4de023b156c4-kube-api-access-98nln\") pod \"migrator-59844c95c7-vm4rv\" (UID: \"599353db-9adf-4393-9920-4de023b156c4\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vm4rv" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.842223 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkjqg\" (UniqueName: \"kubernetes.io/projected/90197382-1e9f-4823-a4e7-92fafeb46d66-kube-api-access-lkjqg\") pod \"dns-operator-744455d44c-jvfrr\" (UID: \"90197382-1e9f-4823-a4e7-92fafeb46d66\") " pod="openshift-dns-operator/dns-operator-744455d44c-jvfrr" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.859649 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-qnvwg"] Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.861623 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dl79l" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.863250 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5xfl\" (UID: \"2c0351ce-4eed-4d0e-b9a4-eb2746bf396a\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5xfl" Oct 13 17:26:36 crc kubenswrapper[4720]: E1013 17:26:36.863805 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 17:26:37.363791964 +0000 UTC m=+142.821042086 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5xfl" (UID: "2c0351ce-4eed-4d0e-b9a4-eb2746bf396a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.868095 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-jvfrr" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.868519 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxkbr\" (UniqueName: \"kubernetes.io/projected/5acdfefb-0861-4196-a2f9-59d1784025a8-kube-api-access-zxkbr\") pod \"olm-operator-6b444d44fb-56p82\" (UID: \"5acdfefb-0861-4196-a2f9-59d1784025a8\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-56p82" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.869529 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-l8tzk"] Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.873864 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgtp8\" (UniqueName: \"kubernetes.io/projected/44e9378e-be31-4703-b23c-f7ffdbc89a2b-kube-api-access-zgtp8\") pod \"machine-config-controller-84d6567774-jw72g\" (UID: \"44e9378e-be31-4703-b23c-f7ffdbc89a2b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jw72g" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.875045 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jw72g" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.897277 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-977kx\" (UniqueName: \"kubernetes.io/projected/cabe6b29-bccd-4995-ab54-b6cabc86f7bf-kube-api-access-977kx\") pod \"control-plane-machine-set-operator-78cbb6b69f-mjzm6\" (UID: \"cabe6b29-bccd-4995-ab54-b6cabc86f7bf\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mjzm6" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.911532 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-hjhns" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.912153 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-vttl9" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.919472 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-56p82" Oct 13 17:26:36 crc kubenswrapper[4720]: W1013 17:26:36.927425 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode84d82f4_b6d9_4285_908b_55fd27e0e4c3.slice/crio-4ae92ddf0754f6312913c3bf04e08de17927a907a9ccfec5959a83896bdb018e WatchSource:0}: Error finding container 4ae92ddf0754f6312913c3bf04e08de17927a907a9ccfec5959a83896bdb018e: Status 404 returned error can't find the container with id 4ae92ddf0754f6312913c3bf04e08de17927a907a9ccfec5959a83896bdb018e Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.929946 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p8hj6" event={"ID":"36b6918f-fd99-4666-982d-7636ea1d9a1c","Type":"ContainerStarted","Data":"4fd96a2febe965a89c61b7ebb2529870eaf66f017fc8de85136207adc085710d"} Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.930051 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p8hj6" event={"ID":"36b6918f-fd99-4666-982d-7636ea1d9a1c","Type":"ContainerStarted","Data":"e224a372fcd66ccd2005f86ccf4fafec828a18b7809b5a3b216a6693dab3cbd2"} Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.932523 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bvc8\" (UniqueName: \"kubernetes.io/projected/94c6e5cb-a6e1-453b-b6c2-0a4f397ced92-kube-api-access-6bvc8\") pod \"kube-storage-version-migrator-operator-b67b599dd-sxcr7\" (UID: \"94c6e5cb-a6e1-453b-b6c2-0a4f397ced92\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sxcr7" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.932610 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-xbb4t" event={"ID":"5abe378d-2b00-4d15-af94-5141934fca47","Type":"ContainerStarted","Data":"2acdbaa1aab485da3720f131327fb20dbda869b22199fb4c8f86d8c5b518897e"} Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.932653 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-xbb4t" event={"ID":"5abe378d-2b00-4d15-af94-5141934fca47","Type":"ContainerStarted","Data":"3abf26304a2065a4747b662d067d3f80e47e9be38d84e3f4244da8b9eb7b63d4"} Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.932979 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-xbb4t" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.937907 4720 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-xbb4t container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.17:6443/healthz\": dial tcp 10.217.0.17:6443: connect: connection refused" start-of-body= Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.937984 4720 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-authentication/oauth-openshift-558db77b4-xbb4t" podUID="5abe378d-2b00-4d15-af94-5141934fca47" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.17:6443/healthz\": dial tcp 10.217.0.17:6443: connect: connection refused" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.938973 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztjds\" (UniqueName: \"kubernetes.io/projected/f94b07c7-b111-450a-bd2a-0977944282a9-kube-api-access-ztjds\") pod \"machine-config-operator-74547568cd-b5ljk\" (UID: \"f94b07c7-b111-450a-bd2a-0977944282a9\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b5ljk" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.940884 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-8c7d6" event={"ID":"08ee097f-0739-4c23-8ed9-696ded1864f2","Type":"ContainerStarted","Data":"845799da83e0827c9c406ca94a2b5cc8a3ecb95bb0189a4e1735371935c55019"} Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.941471 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vm4rv" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.944074 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mg7s9" event={"ID":"0ee94e27-af7c-49c7-a57f-8f1a18ba53e3","Type":"ContainerStarted","Data":"9e03ecc25fd66e84b84cc863aafb8cc164a5b14536d9f15b357c8dcff7d4e008"} Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.950047 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9bec609a-6eb7-479a-ae81-dd2d7acc1742-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-4x4s2\" (UID: \"9bec609a-6eb7-479a-ae81-dd2d7acc1742\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4x4s2" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.951228 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4vqw6" event={"ID":"3a4c2111-a5d9-4567-91c6-59733a5d711f","Type":"ContainerStarted","Data":"05a9a057249364d1dd29ed79d89d25445d3161f1000234d91df3bb643b1ccabe"} Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.951270 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4vqw6" event={"ID":"3a4c2111-a5d9-4567-91c6-59733a5d711f","Type":"ContainerStarted","Data":"1c7f4d732793265c6ca916638ae8598830910e0d217d02d82cbea55a89d2c693"} Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.954482 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vhgw6" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.966489 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 17:26:36 crc kubenswrapper[4720]: E1013 17:26:36.966904 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 17:26:37.466877825 +0000 UTC m=+142.924127957 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.968128 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mjzm6" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.970268 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-dvc6z" event={"ID":"43c40b45-9695-4d29-b627-c4ab23d1d6d0","Type":"ContainerStarted","Data":"57360566d65634e3d45ebf9b810eeff9fec977bff0212783f6553a88dba8376a"} Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.970353 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9mm85" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.970369 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-dvc6z" event={"ID":"43c40b45-9695-4d29-b627-c4ab23d1d6d0","Type":"ContainerStarted","Data":"92c4daa84f99036bb3be34a9e2cf9888c3b0f7e2b5e276c21d031a1322e471c0"} Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.970380 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9mm85" event={"ID":"b5179459-d832-4419-96ce-44dd4f055e98","Type":"ContainerStarted","Data":"431b55886192faf024aea2c290a237484e8569559655af6299b96304d1b7e7bb"} Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.970391 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9mm85" event={"ID":"b5179459-d832-4419-96ce-44dd4f055e98","Type":"ContainerStarted","Data":"57d90ad53a9b92a3250ce3dced9722cc50815065a429ee1a039079c7aff76c29"} Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.979879 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29339595-f78vr" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.985595 4720 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-9mm85 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.985667 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9mm85" podUID="b5179459-d832-4419-96ce-44dd4f055e98" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.986048 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-xpdm2" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.986313 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3a01c34e-55fd-4492-8fbe-c15dbb539271-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-mfblh\" (UID: \"3a01c34e-55fd-4492-8fbe-c15dbb539271\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mfblh" Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.994762 4720 generic.go:334] "Generic (PLEG): container finished" podID="7e7db006-6ea5-44bb-89ad-0d6a6a4810ca" containerID="8d646e934bac6b73e83d0c76426cbfbc94097d6f693b928670f48b3d35642c86" exitCode=0 Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.994813 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-mm8n9" event={"ID":"7e7db006-6ea5-44bb-89ad-0d6a6a4810ca","Type":"ContainerDied","Data":"8d646e934bac6b73e83d0c76426cbfbc94097d6f693b928670f48b3d35642c86"} Oct 13 17:26:36 crc kubenswrapper[4720]: I1013 17:26:36.994842 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-mm8n9" event={"ID":"7e7db006-6ea5-44bb-89ad-0d6a6a4810ca","Type":"ContainerStarted","Data":"95b30fdf5686d9fd1ceb18d7afd556f1ad930197117aa2dcd06806b056e0d58b"} Oct 13 17:26:37 crc kubenswrapper[4720]: I1013 17:26:37.002377 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-7vb4h"] Oct 13 17:26:37 crc kubenswrapper[4720]: I1013 17:26:37.010723 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-jrc4l"] Oct 13 17:26:37 crc kubenswrapper[4720]: I1013 17:26:37.013581 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfg9v\" (UniqueName: \"kubernetes.io/projected/fcbfa827-1469-4afb-b411-c6015d3e3195-kube-api-access-zfg9v\") pod \"ingress-canary-9w7qp\" (UID: \"fcbfa827-1469-4afb-b411-c6015d3e3195\") " pod="openshift-ingress-canary/ingress-canary-9w7qp" Oct 13 17:26:37 crc kubenswrapper[4720]: I1013 17:26:37.015443 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-lwp58"] Oct 13 17:26:37 crc kubenswrapper[4720]: I1013 17:26:37.015851 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scskp\" 
(UniqueName: \"kubernetes.io/projected/61e69b4f-7bcc-4ce4-9bd2-3200c0d0f883-kube-api-access-scskp\") pod \"service-ca-9c57cc56f-5tn27\" (UID: \"61e69b4f-7bcc-4ce4-9bd2-3200c0d0f883\") " pod="openshift-service-ca/service-ca-9c57cc56f-5tn27" Oct 13 17:26:37 crc kubenswrapper[4720]: I1013 17:26:37.032949 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgqfm\" (UniqueName: \"kubernetes.io/projected/b1bdb74a-39cf-48fb-a76c-a2b362136fad-kube-api-access-vgqfm\") pod \"machine-config-server-scfnf\" (UID: \"b1bdb74a-39cf-48fb-a76c-a2b362136fad\") " pod="openshift-machine-config-operator/machine-config-server-scfnf" Oct 13 17:26:37 crc kubenswrapper[4720]: I1013 17:26:37.046626 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-scfnf" Oct 13 17:26:37 crc kubenswrapper[4720]: I1013 17:26:37.063068 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-9w7qp" Oct 13 17:26:37 crc kubenswrapper[4720]: I1013 17:26:37.064162 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrsq8\" (UniqueName: \"kubernetes.io/projected/913f12c1-e62e-474e-b44f-868c1de3309b-kube-api-access-lrsq8\") pod \"openshift-controller-manager-operator-756b6f6bc6-5kh7f\" (UID: \"913f12c1-e62e-474e-b44f-868c1de3309b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5kh7f" Oct 13 17:26:37 crc kubenswrapper[4720]: I1013 17:26:37.065118 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-pmrm5" Oct 13 17:26:37 crc kubenswrapper[4720]: I1013 17:26:37.068730 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5xfl\" (UID: \"2c0351ce-4eed-4d0e-b9a4-eb2746bf396a\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5xfl" Oct 13 17:26:37 crc kubenswrapper[4720]: E1013 17:26:37.070682 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 17:26:37.570667825 +0000 UTC m=+143.027917957 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5xfl" (UID: "2c0351ce-4eed-4d0e-b9a4-eb2746bf396a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 17:26:37 crc kubenswrapper[4720]: I1013 17:26:37.083621 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clc22\" (UniqueName: \"kubernetes.io/projected/3f98514a-d643-4723-8d83-cda6c55d4874-kube-api-access-clc22\") pod \"catalog-operator-68c6474976-7xzcq\" (UID: \"3f98514a-d643-4723-8d83-cda6c55d4874\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7xzcq" Oct 13 17:26:37 crc kubenswrapper[4720]: I1013 17:26:37.147925 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sxcr7" Oct 13 17:26:37 crc kubenswrapper[4720]: I1013 17:26:37.155836 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5kh7f" Oct 13 17:26:37 crc kubenswrapper[4720]: I1013 17:26:37.170671 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 17:26:37 crc kubenswrapper[4720]: E1013 17:26:37.171302 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 17:26:37.671279103 +0000 UTC m=+143.128529235 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 17:26:37 crc kubenswrapper[4720]: I1013 17:26:37.183529 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5xfl\" (UID: \"2c0351ce-4eed-4d0e-b9a4-eb2746bf396a\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5xfl" Oct 13 17:26:37 crc kubenswrapper[4720]: I1013 17:26:37.183786 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4x4s2" Oct 13 17:26:37 crc kubenswrapper[4720]: E1013 17:26:37.183984 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 17:26:37.683967327 +0000 UTC m=+143.141217459 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5xfl" (UID: "2c0351ce-4eed-4d0e-b9a4-eb2746bf396a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 17:26:37 crc kubenswrapper[4720]: I1013 17:26:37.187230 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b5ljk" Oct 13 17:26:37 crc kubenswrapper[4720]: I1013 17:26:37.193709 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mfblh" Oct 13 17:26:37 crc kubenswrapper[4720]: I1013 17:26:37.226174 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7xzcq" Oct 13 17:26:37 crc kubenswrapper[4720]: I1013 17:26:37.227297 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9klwx"] Oct 13 17:26:37 crc kubenswrapper[4720]: I1013 17:26:37.284988 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 17:26:37 crc kubenswrapper[4720]: E1013 17:26:37.285958 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 17:26:37.78593249 +0000 UTC m=+143.243182612 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 17:26:37 crc kubenswrapper[4720]: I1013 17:26:37.318015 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-5tn27" Oct 13 17:26:37 crc kubenswrapper[4720]: W1013 17:26:37.319443 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0fb88719_7082_419e_8eb0_7eb7d3bf9719.slice/crio-a6ab3d752d506111fa02ac4a9a3424ef58da22fee809797544b32d1f2003bb4c WatchSource:0}: Error finding container a6ab3d752d506111fa02ac4a9a3424ef58da22fee809797544b32d1f2003bb4c: Status 404 returned error can't find the container with id a6ab3d752d506111fa02ac4a9a3424ef58da22fee809797544b32d1f2003bb4c Oct 13 17:26:37 crc kubenswrapper[4720]: I1013 17:26:37.348168 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-9tdzd"] Oct 13 17:26:37 crc kubenswrapper[4720]: I1013 17:26:37.352360 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-9snlb"] Oct 13 17:26:37 crc kubenswrapper[4720]: I1013 17:26:37.388941 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5xfl\" (UID: \"2c0351ce-4eed-4d0e-b9a4-eb2746bf396a\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5xfl" Oct 13 17:26:37 crc kubenswrapper[4720]: E1013 17:26:37.389402 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-10-13 17:26:37.88938438 +0000 UTC m=+143.346634512 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5xfl" (UID: "2c0351ce-4eed-4d0e-b9a4-eb2746bf396a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 17:26:37 crc kubenswrapper[4720]: I1013 17:26:37.416885 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9mm85" podStartSLOduration=117.416870102 podStartE2EDuration="1m57.416870102s" podCreationTimestamp="2025-10-13 17:24:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:26:37.415783334 +0000 UTC m=+142.873033466" watchObservedRunningTime="2025-10-13 17:26:37.416870102 +0000 UTC m=+142.874120234" Oct 13 17:26:37 crc kubenswrapper[4720]: I1013 17:26:37.418962 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-56p82"] Oct 13 17:26:37 crc kubenswrapper[4720]: I1013 17:26:37.492609 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 17:26:37 crc kubenswrapper[4720]: I1013 17:26:37.492831 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4vqw6" podStartSLOduration=118.492817131 podStartE2EDuration="1m58.492817131s" podCreationTimestamp="2025-10-13 17:24:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:26:37.448300105 +0000 UTC m=+142.905550237" watchObservedRunningTime="2025-10-13 17:26:37.492817131 +0000 UTC m=+142.950067263" Oct 13 17:26:37 crc kubenswrapper[4720]: E1013 17:26:37.492928 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 17:26:37.992915353 +0000 UTC m=+143.450165485 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 17:26:37 crc kubenswrapper[4720]: I1013 17:26:37.493965 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5xfl\" (UID: \"2c0351ce-4eed-4d0e-b9a4-eb2746bf396a\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5xfl" Oct 13 17:26:37 crc kubenswrapper[4720]: E1013 17:26:37.494811 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 17:26:37.994792461 +0000 UTC m=+143.452042593 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5xfl" (UID: "2c0351ce-4eed-4d0e-b9a4-eb2746bf396a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 17:26:37 crc kubenswrapper[4720]: W1013 17:26:37.495297 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5acdfefb_0861_4196_a2f9_59d1784025a8.slice/crio-3221264f8dc7167af3d5daedd02b97e215d2eeb725d97e4a71ce99598e89f704 WatchSource:0}: Error finding container 3221264f8dc7167af3d5daedd02b97e215d2eeb725d97e4a71ce99598e89f704: Status 404 returned error can't find the container with id 3221264f8dc7167af3d5daedd02b97e215d2eeb725d97e4a71ce99598e89f704 Oct 13 17:26:37 crc kubenswrapper[4720]: I1013 17:26:37.495336 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jd5bf"] Oct 13 17:26:37 crc kubenswrapper[4720]: I1013 17:26:37.567207 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-vm4rv"] Oct 13 17:26:37 crc kubenswrapper[4720]: I1013 17:26:37.608415 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 17:26:37 crc kubenswrapper[4720]: E1013 17:26:37.608832 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 17:26:38.108818572 +0000 UTC m=+143.566068704 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 17:26:37 crc kubenswrapper[4720]: I1013 17:26:37.622049 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fvlfw"] Oct 13 17:26:37 crc kubenswrapper[4720]: I1013 17:26:37.635900 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-pc8vw"] Oct 13 17:26:37 crc kubenswrapper[4720]: I1013 17:26:37.638357 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vhgw6"] Oct 13 17:26:37 crc kubenswrapper[4720]: I1013 17:26:37.659699 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-xbb4t" podStartSLOduration=118.6596807 podStartE2EDuration="1m58.6596807s" podCreationTimestamp="2025-10-13 17:24:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:26:37.657273749 +0000 UTC m=+143.114523901" watchObservedRunningTime="2025-10-13 17:26:37.6596807 +0000 UTC m=+143.116930832" Oct 13 17:26:37 crc kubenswrapper[4720]: I1013 17:26:37.671311 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-5cfzh"] Oct 13 17:26:37 crc kubenswrapper[4720]: I1013 17:26:37.695614 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-jvfrr"] Oct 13 17:26:37 crc kubenswrapper[4720]: I1013 17:26:37.712651 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5xfl\" (UID: \"2c0351ce-4eed-4d0e-b9a4-eb2746bf396a\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5xfl" Oct 13 17:26:37 crc kubenswrapper[4720]: E1013 17:26:37.713868 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 17:26:38.213851522 +0000 UTC m=+143.671101654 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5xfl" (UID: "2c0351ce-4eed-4d0e-b9a4-eb2746bf396a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 17:26:37 crc kubenswrapper[4720]: I1013 17:26:37.817884 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 17:26:37 crc kubenswrapper[4720]: E1013 17:26:37.818203 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 17:26:38.318171605 +0000 UTC m=+143.775421737 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 17:26:37 crc kubenswrapper[4720]: I1013 17:26:37.934133 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5xfl\" (UID: \"2c0351ce-4eed-4d0e-b9a4-eb2746bf396a\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5xfl" Oct 13 17:26:37 crc kubenswrapper[4720]: E1013 17:26:37.934957 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 17:26:38.434925596 +0000 UTC m=+143.892175728 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5xfl" (UID: "2c0351ce-4eed-4d0e-b9a4-eb2746bf396a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 17:26:37 crc kubenswrapper[4720]: I1013 17:26:37.965871 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-hjhns"] Oct 13 17:26:37 crc kubenswrapper[4720]: I1013 17:26:37.975979 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mjzm6"] Oct 13 17:26:37 crc kubenswrapper[4720]: I1013 17:26:37.979174 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29339595-f78vr"] Oct 13 17:26:38 crc kubenswrapper[4720]: I1013 17:26:38.015098 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-jw72g"] Oct 13 17:26:38 crc kubenswrapper[4720]: I1013 17:26:38.036824 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 17:26:38 crc kubenswrapper[4720]: E1013 17:26:38.037621 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 17:26:38.537589896 +0000 UTC m=+143.994840028 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 17:26:38 crc kubenswrapper[4720]: I1013 17:26:38.043157 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xpdm2"] Oct 13 17:26:38 crc kubenswrapper[4720]: I1013 17:26:38.049014 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dl79l"] Oct 13 17:26:38 crc kubenswrapper[4720]: I1013 17:26:38.051900 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vhgw6" event={"ID":"93c081dc-50a9-43b9-874c-0b7e46ebbbc9","Type":"ContainerStarted","Data":"d8fa6e532b18620974200dc52dee29f95f3b8e854832063b02871662e4611eb8"} Oct 13 17:26:38 crc kubenswrapper[4720]: I1013 17:26:38.053154 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5kh7f"] Oct 13 17:26:38 crc kubenswrapper[4720]: I1013 17:26:38.063677 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sxcr7"] Oct 13 17:26:38 crc kubenswrapper[4720]: I1013 17:26:38.072140 4720 generic.go:334] "Generic (PLEG): container finished" podID="0ee94e27-af7c-49c7-a57f-8f1a18ba53e3" containerID="a25843943566ef4f1aa7032839b91d30f3396767beefd951dc123cf834b7eda1" exitCode=0 Oct 13 17:26:38 crc kubenswrapper[4720]: I1013 17:26:38.072311 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mg7s9" event={"ID":"0ee94e27-af7c-49c7-a57f-8f1a18ba53e3","Type":"ContainerDied","Data":"a25843943566ef4f1aa7032839b91d30f3396767beefd951dc123cf834b7eda1"} Oct 13 17:26:38 crc kubenswrapper[4720]: I1013 17:26:38.074686 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fvlfw" event={"ID":"7277d186-5645-445d-aea3-37215cf98836","Type":"ContainerStarted","Data":"d2b5403f6f7398b30069e790609284d3afe90625d1644cdb75272a6f1779c408"} Oct 13 17:26:38 crc kubenswrapper[4720]: I1013 17:26:38.075991 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-lwp58" event={"ID":"4e04c241-5f40-47d5-8c67-e8092a483089","Type":"ContainerStarted","Data":"f1669c13e5dfd672bbc672f667f0ff730dc37e0d117b4379c35825185c02dcda"} Oct 13 17:26:38 crc kubenswrapper[4720]: I1013 17:26:38.078364 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-vttl9" event={"ID":"a13b5004-f884-4558-b38b-b3c8028a73d5","Type":"ContainerStarted","Data":"05dab573c6e2129c0205df2967cee9e7983a47804b856997fabec99076e84e9d"} Oct 13 17:26:38 crc kubenswrapper[4720]: I1013 17:26:38.080455 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-jvfrr" 
event={"ID":"90197382-1e9f-4823-a4e7-92fafeb46d66","Type":"ContainerStarted","Data":"d686b64c3b836343a39f4bbb48c61b77cf1183695c1125a9aa2b72eef1897a01"} Oct 13 17:26:38 crc kubenswrapper[4720]: I1013 17:26:38.092123 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-mm8n9" event={"ID":"7e7db006-6ea5-44bb-89ad-0d6a6a4810ca","Type":"ContainerStarted","Data":"1b903bd6b32abdee6c43ceac3b3a8d2ce135f56893f11f6fb341224244f63562"} Oct 13 17:26:38 crc kubenswrapper[4720]: I1013 17:26:38.107986 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-qnvwg" event={"ID":"e84d82f4-b6d9-4285-908b-55fd27e0e4c3","Type":"ContainerStarted","Data":"f63390090be182daa9dbab3e2005ee9a7161e1734cdba1d0306bf4b2da93a0cc"} Oct 13 17:26:38 crc kubenswrapper[4720]: I1013 17:26:38.108058 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-qnvwg" event={"ID":"e84d82f4-b6d9-4285-908b-55fd27e0e4c3","Type":"ContainerStarted","Data":"4ae92ddf0754f6312913c3bf04e08de17927a907a9ccfec5959a83896bdb018e"} Oct 13 17:26:38 crc kubenswrapper[4720]: W1013 17:26:38.108788 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a6d69da_3074_4b30_898b_4bb2eea1fb75.slice/crio-38f706bf13060c14021de43434fb358d4b0561b2d45d668d2ef483b998a3e303 WatchSource:0}: Error finding container 38f706bf13060c14021de43434fb358d4b0561b2d45d668d2ef483b998a3e303: Status 404 returned error can't find the container with id 38f706bf13060c14021de43434fb358d4b0561b2d45d668d2ef483b998a3e303 Oct 13 17:26:38 crc kubenswrapper[4720]: I1013 17:26:38.139757 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5xfl\" (UID: \"2c0351ce-4eed-4d0e-b9a4-eb2746bf396a\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5xfl" Oct 13 17:26:38 crc kubenswrapper[4720]: E1013 17:26:38.141694 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 17:26:38.641673263 +0000 UTC m=+144.098923625 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5xfl" (UID: "2c0351ce-4eed-4d0e-b9a4-eb2746bf396a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 17:26:38 crc kubenswrapper[4720]: I1013 17:26:38.147861 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-8c7d6" event={"ID":"08ee097f-0739-4c23-8ed9-696ded1864f2","Type":"ContainerStarted","Data":"fd1249e4191436ee40db4f0c356c37c8847689afa499bfafbf5de71548cf35ab"} Oct 13 17:26:38 crc kubenswrapper[4720]: I1013 17:26:38.185025 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p8hj6" event={"ID":"36b6918f-fd99-4666-982d-7636ea1d9a1c","Type":"ContainerStarted","Data":"ea8a0f680269832fcdffe81fb6de98d5b5c571e3700a9b7f4a85d4761ecd44df"} Oct 13 17:26:38 crc kubenswrapper[4720]: I1013 17:26:38.190176 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-56p82" event={"ID":"5acdfefb-0861-4196-a2f9-59d1784025a8","Type":"ContainerStarted","Data":"3221264f8dc7167af3d5daedd02b97e215d2eeb725d97e4a71ce99598e89f704"} Oct 13 17:26:38 crc kubenswrapper[4720]: I1013 17:26:38.193445 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9klwx" event={"ID":"0fb88719-7082-419e-8eb0-7eb7d3bf9719","Type":"ContainerStarted","Data":"a6ab3d752d506111fa02ac4a9a3424ef58da22fee809797544b32d1f2003bb4c"} Oct 13 17:26:38 crc kubenswrapper[4720]: I1013 17:26:38.232937 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-9tdzd" event={"ID":"e2f98220-0477-4a6d-8cf3-fa055d01bc3a","Type":"ContainerStarted","Data":"dd9c17817b37cc9a08655b4f2afa62baa8e0bff58f16a78f9f67ff4ad1c75463"} Oct 13 17:26:38 crc kubenswrapper[4720]: I1013 17:26:38.240763 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 17:26:38 crc kubenswrapper[4720]: E1013 17:26:38.247245 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 17:26:38.747178056 +0000 UTC m=+144.204428188 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 17:26:38 crc kubenswrapper[4720]: I1013 17:26:38.287697 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-dvc6z" event={"ID":"43c40b45-9695-4d29-b627-c4ab23d1d6d0","Type":"ContainerStarted","Data":"b3bc4f7b386ede5d5d81009b15a78da6cfe7cea8c2fa23bde08f368eea7ca375"} Oct 13 17:26:38 crc kubenswrapper[4720]: I1013 17:26:38.309119 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-5cfzh" event={"ID":"b0af5887-2244-4dfb-8e2a-a66ac6bf6762","Type":"ContainerStarted","Data":"109189c6bf8a97d8ba71c641e9b6b9561511f1a45bff64bac6b11db07ba971c9"} Oct 13 17:26:38 crc kubenswrapper[4720]: I1013 17:26:38.346217 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5xfl\" (UID: \"2c0351ce-4eed-4d0e-b9a4-eb2746bf396a\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5xfl" Oct 13 17:26:38 crc kubenswrapper[4720]: E1013 17:26:38.347156 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 17:26:38.847133048 +0000 UTC m=+144.304383180 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5xfl" (UID: "2c0351ce-4eed-4d0e-b9a4-eb2746bf396a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 17:26:38 crc kubenswrapper[4720]: I1013 17:26:38.351159 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-jrc4l" event={"ID":"ff6247fa-c7b7-4a78-9c52-48de3c488a6a","Type":"ContainerStarted","Data":"49f5ddcbd6ffa5ce1305c7206aabe731ba03bd7790cce82c38368465e86cf34e"} Oct 13 17:26:38 crc kubenswrapper[4720]: I1013 17:26:38.359355 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vm4rv" event={"ID":"599353db-9adf-4393-9920-4de023b156c4","Type":"ContainerStarted","Data":"d65e4f94ccf6b8b1a0f7da5354bec2381227b860226ec2a83a2775e0836e4dbb"} Oct 13 17:26:38 crc kubenswrapper[4720]: I1013 17:26:38.366828 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-9snlb" event={"ID":"82e3c60a-715e-4dc6-8137-c79aa854de0a","Type":"ContainerStarted","Data":"a95aaac8c39a1da2b363471ef40a18d9cddc998e73aacedcef2af39163ec7ca2"} Oct 13 17:26:38 crc kubenswrapper[4720]: I1013 17:26:38.372419 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-b5ljk"] Oct 13 17:26:38 crc kubenswrapper[4720]: I1013 17:26:38.390309 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-l8tzk" event={"ID":"e33fd1d2-081b-4e68-ab37-623406daeaeb","Type":"ContainerStarted","Data":"11e3181b6a24ed02ca797d6c6570eccf7c3beef51d0d2fea329a27b5c3f8abc2"} Oct 13 17:26:38 crc kubenswrapper[4720]: I1013 17:26:38.390365 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-l8tzk" event={"ID":"e33fd1d2-081b-4e68-ab37-623406daeaeb","Type":"ContainerStarted","Data":"44ccde61d3a4e8975b3a84c0f929e275b37d053cc0bb1984b5f3626063270e18"} Oct 13 17:26:38 crc kubenswrapper[4720]: I1013 17:26:38.394294 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-l8tzk" Oct 13 17:26:38 crc kubenswrapper[4720]: I1013 17:26:38.399271 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-pmrm5"] Oct 13 17:26:38 crc kubenswrapper[4720]: I1013 17:26:38.407311 4720 patch_prober.go:28] interesting pod/downloads-7954f5f757-l8tzk container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Oct 13 17:26:38 crc kubenswrapper[4720]: I1013 17:26:38.407363 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-l8tzk" podUID="e33fd1d2-081b-4e68-ab37-623406daeaeb" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Oct 13 17:26:38 crc kubenswrapper[4720]: I1013 17:26:38.422890 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-scfnf" 
event={"ID":"b1bdb74a-39cf-48fb-a76c-a2b362136fad","Type":"ContainerStarted","Data":"0f4736c8dc4bd69d986c4e479568267efa3428cfeae8c18cd1a2018c3544219b"} Oct 13 17:26:38 crc kubenswrapper[4720]: I1013 17:26:38.429787 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mfblh"] Oct 13 17:26:38 crc kubenswrapper[4720]: I1013 17:26:38.447647 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4x4s2"] Oct 13 17:26:38 crc kubenswrapper[4720]: I1013 17:26:38.448337 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 17:26:38 crc kubenswrapper[4720]: E1013 17:26:38.448566 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 17:26:38.948551327 +0000 UTC m=+144.405801459 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 17:26:38 crc kubenswrapper[4720]: I1013 17:26:38.448626 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5xfl\" (UID: \"2c0351ce-4eed-4d0e-b9a4-eb2746bf396a\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5xfl" Oct 13 17:26:38 crc kubenswrapper[4720]: E1013 17:26:38.448971 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 17:26:38.948959327 +0000 UTC m=+144.406209459 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5xfl" (UID: "2c0351ce-4eed-4d0e-b9a4-eb2746bf396a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 17:26:38 crc kubenswrapper[4720]: I1013 17:26:38.457031 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jd5bf" event={"ID":"17cf8543-d064-4d3e-976f-2fbad57f2e56","Type":"ContainerStarted","Data":"3b2e6c8de63a7e85d721794595a0c413826b1b1deb558cc19abc5b3c52de9b66"} Oct 13 17:26:38 crc kubenswrapper[4720]: I1013 17:26:38.460224 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7xzcq"] Oct 13 17:26:38 crc kubenswrapper[4720]: I1013 17:26:38.461708 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7vb4h" event={"ID":"f8b9591c-6d8d-4942-aa6c-f16b6b5f4992","Type":"ContainerStarted","Data":"0ee557641c0d0bb3c58de2032de8b99ac7a265913995a3fe7cca5563372fd74e"} Oct 13 17:26:38 crc kubenswrapper[4720]: W1013 17:26:38.466270 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a01c34e_55fd_4492_8fbe_c15dbb539271.slice/crio-94d8977a5e1d958716cd2de88ff5110b8048be91ac2fdfb177ce3754ee3d189a WatchSource:0}: Error finding container 94d8977a5e1d958716cd2de88ff5110b8048be91ac2fdfb177ce3754ee3d189a: Status 404 returned error can't find the container with id 94d8977a5e1d958716cd2de88ff5110b8048be91ac2fdfb177ce3754ee3d189a Oct 13 17:26:38 crc kubenswrapper[4720]: I1013 17:26:38.470751 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pc8vw" event={"ID":"b2df17f0-8c2c-4ca1-afe3-7b82dcb27712","Type":"ContainerStarted","Data":"2b0c1733566e3cb037a059155f7e7fde9d092d35c9d27ba67aa22b0d2a414528"} Oct 13 17:26:38 crc kubenswrapper[4720]: I1013 17:26:38.484216 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9mm85" Oct 13 17:26:38 crc kubenswrapper[4720]: I1013 17:26:38.496332 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-xbb4t" Oct 13 17:26:38 crc kubenswrapper[4720]: I1013 17:26:38.502450 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-9w7qp"] Oct 13 17:26:38 crc kubenswrapper[4720]: I1013 17:26:38.552747 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 17:26:38 crc kubenswrapper[4720]: E1013 17:26:38.553767 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-10-13 17:26:39.053743792 +0000 UTC m=+144.510993924 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 17:26:38 crc kubenswrapper[4720]: I1013 17:26:38.613977 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-8c7d6" podStartSLOduration=119.613950749 podStartE2EDuration="1m59.613950749s" podCreationTimestamp="2025-10-13 17:24:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:26:38.60967061 +0000 UTC m=+144.066920742" watchObservedRunningTime="2025-10-13 17:26:38.613950749 +0000 UTC m=+144.071200881" Oct 13 17:26:38 crc kubenswrapper[4720]: I1013 17:26:38.642543 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-5tn27"] Oct 13 17:26:38 crc kubenswrapper[4720]: I1013 17:26:38.652963 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-qnvwg" podStartSLOduration=119.652944235 podStartE2EDuration="1m59.652944235s" podCreationTimestamp="2025-10-13 17:24:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:26:38.651931519 +0000 UTC m=+144.109181651" watchObservedRunningTime="2025-10-13 17:26:38.652944235 +0000 UTC m=+144.110194367" Oct 13 17:26:38 crc kubenswrapper[4720]: I1013 17:26:38.657166 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5xfl\" (UID: \"2c0351ce-4eed-4d0e-b9a4-eb2746bf396a\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5xfl" Oct 13 17:26:38 crc kubenswrapper[4720]: E1013 17:26:38.657533 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 17:26:39.157521101 +0000 UTC m=+144.614771233 (durationBeforeRetry 500ms). 
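[Annotation] The pod_startup_latency_tracker entries can be cross-checked: podStartSLOduration is the watch-observed running time minus podCreationTimestamp. For authentication-operator above, 17:26:38.613950749 − 17:24:39 = 119.613950749s, exactly the logged value. A quick verification in Go using the timestamps copied from the log:

```go
// Sketch: recompute podStartSLOduration from the two timestamps logged above.
package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matches Go's default time.Time string form used in the log.
	layout := "2006-01-02 15:04:05.999999999 -0700 MST"
	created, _ := time.Parse(layout, "2025-10-13 17:24:39 +0000 UTC")
	observed, _ := time.Parse(layout, "2025-10-13 17:26:38.613950749 +0000 UTC")
	// Prints 1m59.613950749s, i.e. podStartSLOduration=119.613950749.
	fmt.Println(observed.Sub(created))
}
```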
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5xfl" (UID: "2c0351ce-4eed-4d0e-b9a4-eb2746bf396a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 17:26:38 crc kubenswrapper[4720]: I1013 17:26:38.758230 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 17:26:38 crc kubenswrapper[4720]: E1013 17:26:38.758370 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 17:26:39.258346735 +0000 UTC m=+144.715596867 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 17:26:38 crc kubenswrapper[4720]: I1013 17:26:38.758795 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5xfl\" (UID: \"2c0351ce-4eed-4d0e-b9a4-eb2746bf396a\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5xfl" Oct 13 17:26:38 crc kubenswrapper[4720]: E1013 17:26:38.759084 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 17:26:39.259077834 +0000 UTC m=+144.716327966 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5xfl" (UID: "2c0351ce-4eed-4d0e-b9a4-eb2746bf396a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 17:26:38 crc kubenswrapper[4720]: I1013 17:26:38.772788 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-dvc6z" podStartSLOduration=118.772748233 podStartE2EDuration="1m58.772748233s" podCreationTimestamp="2025-10-13 17:24:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:26:38.771403018 +0000 UTC m=+144.228653150" watchObservedRunningTime="2025-10-13 17:26:38.772748233 +0000 UTC m=+144.229998365" Oct 13 17:26:38 crc kubenswrapper[4720]: I1013 17:26:38.773286 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-p8hj6" podStartSLOduration=119.773282256 podStartE2EDuration="1m59.773282256s" podCreationTimestamp="2025-10-13 17:24:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:26:38.729385616 +0000 UTC m=+144.186635738" watchObservedRunningTime="2025-10-13 17:26:38.773282256 +0000 UTC m=+144.230532388" Oct 13 17:26:38 crc kubenswrapper[4720]: I1013 17:26:38.859745 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 17:26:38 crc kubenswrapper[4720]: E1013 17:26:38.860042 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 17:26:39.360029381 +0000 UTC m=+144.817279513 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 17:26:38 crc kubenswrapper[4720]: I1013 17:26:38.889639 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-l8tzk" podStartSLOduration=119.889621256 podStartE2EDuration="1m59.889621256s" podCreationTimestamp="2025-10-13 17:24:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:26:38.889264717 +0000 UTC m=+144.346514849" watchObservedRunningTime="2025-10-13 17:26:38.889621256 +0000 UTC m=+144.346871388" Oct 13 17:26:38 crc kubenswrapper[4720]: I1013 17:26:38.961691 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5xfl\" (UID: \"2c0351ce-4eed-4d0e-b9a4-eb2746bf396a\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5xfl" Oct 13 17:26:38 crc kubenswrapper[4720]: E1013 17:26:38.962068 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 17:26:39.462051165 +0000 UTC m=+144.919301347 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5xfl" (UID: "2c0351ce-4eed-4d0e-b9a4-eb2746bf396a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 17:26:39 crc kubenswrapper[4720]: I1013 17:26:39.072662 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 17:26:39 crc kubenswrapper[4720]: E1013 17:26:39.073004 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 17:26:39.572989727 +0000 UTC m=+145.030239859 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 17:26:39 crc kubenswrapper[4720]: I1013 17:26:39.174265 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5xfl\" (UID: \"2c0351ce-4eed-4d0e-b9a4-eb2746bf396a\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5xfl" Oct 13 17:26:39 crc kubenswrapper[4720]: E1013 17:26:39.174625 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 17:26:39.674604591 +0000 UTC m=+145.131854723 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5xfl" (UID: "2c0351ce-4eed-4d0e-b9a4-eb2746bf396a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 17:26:39 crc kubenswrapper[4720]: I1013 17:26:39.278181 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 17:26:39 crc kubenswrapper[4720]: E1013 17:26:39.278548 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 17:26:39.778534834 +0000 UTC m=+145.235784966 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 17:26:39 crc kubenswrapper[4720]: I1013 17:26:39.388969 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5xfl\" (UID: \"2c0351ce-4eed-4d0e-b9a4-eb2746bf396a\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5xfl" Oct 13 17:26:39 crc kubenswrapper[4720]: E1013 17:26:39.389315 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 17:26:39.889304092 +0000 UTC m=+145.346554214 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5xfl" (UID: "2c0351ce-4eed-4d0e-b9a4-eb2746bf396a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 17:26:39 crc kubenswrapper[4720]: I1013 17:26:39.489514 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 17:26:39 crc kubenswrapper[4720]: E1013 17:26:39.489890 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 17:26:39.989876199 +0000 UTC m=+145.447126331 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 17:26:39 crc kubenswrapper[4720]: I1013 17:26:39.490097 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5xfl\" (UID: \"2c0351ce-4eed-4d0e-b9a4-eb2746bf396a\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5xfl" Oct 13 17:26:39 crc kubenswrapper[4720]: E1013 17:26:39.497459 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 17:26:39.997446242 +0000 UTC m=+145.454696374 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5xfl" (UID: "2c0351ce-4eed-4d0e-b9a4-eb2746bf396a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 17:26:39 crc kubenswrapper[4720]: I1013 17:26:39.512335 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-vttl9" event={"ID":"a13b5004-f884-4558-b38b-b3c8028a73d5","Type":"ContainerStarted","Data":"0bd971620c3d20b83d1887112ec36afcac3edaaff75ec1134d18dad9bf53fc19"} Oct 13 17:26:39 crc kubenswrapper[4720]: I1013 17:26:39.568871 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-5cfzh" event={"ID":"b0af5887-2244-4dfb-8e2a-a66ac6bf6762","Type":"ContainerStarted","Data":"d09626cc28c6245e7b640ba479f51568cced22328b6223d2a977bf58fb9e334a"} Oct 13 17:26:39 crc kubenswrapper[4720]: I1013 17:26:39.575389 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vm4rv" event={"ID":"599353db-9adf-4393-9920-4de023b156c4","Type":"ContainerStarted","Data":"6c374df24ec062f6a732d8e5723effe0f71584adf6757ea3f9ea696f9acc03b1"} Oct 13 17:26:39 crc kubenswrapper[4720]: I1013 17:26:39.579842 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-jrc4l" event={"ID":"ff6247fa-c7b7-4a78-9c52-48de3c488a6a","Type":"ContainerStarted","Data":"48931f176d8b43b6ee9a21152f9c94cb64e4d4ad49c5289efe80fa6da4085e7d"} Oct 13 17:26:39 crc kubenswrapper[4720]: I1013 17:26:39.580692 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-jrc4l" Oct 13 17:26:39 crc kubenswrapper[4720]: I1013 17:26:39.581647 4720 patch_prober.go:28] interesting pod/console-operator-58897d9998-jrc4l container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/readyz\": dial tcp 10.217.0.16:8443: 
connect: connection refused" start-of-body= Oct 13 17:26:39 crc kubenswrapper[4720]: I1013 17:26:39.581677 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-jrc4l" podUID="ff6247fa-c7b7-4a78-9c52-48de3c488a6a" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.16:8443/readyz\": dial tcp 10.217.0.16:8443: connect: connection refused" Oct 13 17:26:39 crc kubenswrapper[4720]: I1013 17:26:39.597974 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 17:26:39 crc kubenswrapper[4720]: E1013 17:26:39.599036 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 17:26:40.099021935 +0000 UTC m=+145.556272067 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 17:26:39 crc kubenswrapper[4720]: I1013 17:26:39.602362 4720 generic.go:334] "Generic (PLEG): container finished" podID="b2df17f0-8c2c-4ca1-afe3-7b82dcb27712" containerID="af0225a1e9533f3935670edf2994a8406350ed614eb33e4854314fcd10f2c873" exitCode=0 Oct 13 17:26:39 crc kubenswrapper[4720]: I1013 17:26:39.602440 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pc8vw" event={"ID":"b2df17f0-8c2c-4ca1-afe3-7b82dcb27712","Type":"ContainerDied","Data":"af0225a1e9533f3935670edf2994a8406350ed614eb33e4854314fcd10f2c873"} Oct 13 17:26:39 crc kubenswrapper[4720]: I1013 17:26:39.615214 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-56p82" event={"ID":"5acdfefb-0861-4196-a2f9-59d1784025a8","Type":"ContainerStarted","Data":"32c2a9efe1def9451dc16b13cf82670dec6485e719b9c5182238fe34c41ca131"} Oct 13 17:26:39 crc kubenswrapper[4720]: I1013 17:26:39.615952 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-56p82" Oct 13 17:26:39 crc kubenswrapper[4720]: I1013 17:26:39.627883 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-56p82" Oct 13 17:26:39 crc kubenswrapper[4720]: I1013 17:26:39.681589 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5kh7f" event={"ID":"913f12c1-e62e-474e-b44f-868c1de3309b","Type":"ContainerStarted","Data":"b93f736678cfb888f7025c1c355749c484acf55c2f56e851a215f841ca0ecd7c"} Oct 13 17:26:39 crc kubenswrapper[4720]: I1013 17:26:39.681642 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5kh7f" event={"ID":"913f12c1-e62e-474e-b44f-868c1de3309b","Type":"ContainerStarted","Data":"c07eed4a50134e857fd0f37696e1d50fbf835e442416db7a66464ef769217610"} Oct 13 17:26:39 crc kubenswrapper[4720]: I1013 17:26:39.687436 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-pmrm5" event={"ID":"d0e58b14-51d4-4b94-ba16-1058cde1dda1","Type":"ContainerStarted","Data":"be45e24435156ce13fae047153a712cc6b82e6f3a309d2763e684828f04577a4"} Oct 13 17:26:39 crc kubenswrapper[4720]: I1013 17:26:39.693417 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-56p82" podStartSLOduration=119.693403494 podStartE2EDuration="1m59.693403494s" podCreationTimestamp="2025-10-13 17:24:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:26:39.66855646 +0000 UTC m=+145.125806592" watchObservedRunningTime="2025-10-13 17:26:39.693403494 +0000 UTC m=+145.150653626" Oct 13 17:26:39 crc kubenswrapper[4720]: I1013 17:26:39.694374 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-5cfzh" podStartSLOduration=120.694367779 podStartE2EDuration="2m0.694367779s" podCreationTimestamp="2025-10-13 17:24:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:26:39.690702736 +0000 UTC m=+145.147952868" watchObservedRunningTime="2025-10-13 17:26:39.694367779 +0000 UTC m=+145.151617911" Oct 13 17:26:39 crc kubenswrapper[4720]: I1013 17:26:39.699453 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5xfl\" (UID: \"2c0351ce-4eed-4d0e-b9a4-eb2746bf396a\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5xfl" Oct 13 17:26:39 crc kubenswrapper[4720]: E1013 17:26:39.701321 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 17:26:40.201305456 +0000 UTC m=+145.658555588 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5xfl" (UID: "2c0351ce-4eed-4d0e-b9a4-eb2746bf396a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 17:26:39 crc kubenswrapper[4720]: I1013 17:26:39.714262 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b5ljk" event={"ID":"f94b07c7-b111-450a-bd2a-0977944282a9","Type":"ContainerStarted","Data":"f883a6de27aca404cce85bacfcf13d4c275580b897f9b126d81076ba57685742"} Oct 13 17:26:39 crc kubenswrapper[4720]: I1013 17:26:39.714676 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b5ljk" event={"ID":"f94b07c7-b111-450a-bd2a-0977944282a9","Type":"ContainerStarted","Data":"4a28a791bec62615a48bce3bc609f6ae333cf85b7007f5cb4d705cb52032e986"} Oct 13 17:26:39 crc kubenswrapper[4720]: I1013 17:26:39.720965 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-jvfrr" event={"ID":"90197382-1e9f-4823-a4e7-92fafeb46d66","Type":"ContainerStarted","Data":"847f5a13d452146161e6b982241d29c6f193ff30fa8264afcc4dfc74aea15079"} Oct 13 17:26:39 crc kubenswrapper[4720]: I1013 17:26:39.741447 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9klwx" event={"ID":"0fb88719-7082-419e-8eb0-7eb7d3bf9719","Type":"ContainerStarted","Data":"51c57bacacf6e875262d93e97b6d01ae7e240136d9f003ea92d6916ddd78ca22"} Oct 13 17:26:39 crc kubenswrapper[4720]: I1013 17:26:39.742323 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9klwx" Oct 13 17:26:39 crc kubenswrapper[4720]: I1013 17:26:39.763385 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jw72g" event={"ID":"44e9378e-be31-4703-b23c-f7ffdbc89a2b","Type":"ContainerStarted","Data":"27cb7dd4975da6f4f890ace1f590e6b2313678405e93a54c5c036da288b7eb44"} Oct 13 17:26:39 crc kubenswrapper[4720]: I1013 17:26:39.777706 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-jrc4l" podStartSLOduration=120.777690176 podStartE2EDuration="2m0.777690176s" podCreationTimestamp="2025-10-13 17:24:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:26:39.742557039 +0000 UTC m=+145.199807171" watchObservedRunningTime="2025-10-13 17:26:39.777690176 +0000 UTC m=+145.234940308" Oct 13 17:26:39 crc kubenswrapper[4720]: I1013 17:26:39.786270 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-scfnf" event={"ID":"b1bdb74a-39cf-48fb-a76c-a2b362136fad","Type":"ContainerStarted","Data":"6bca0b7b488ea49c8d73f36a836fdcfdf9559c85a7695d154f6bf4d8d5e8d203"} Oct 13 17:26:39 crc kubenswrapper[4720]: I1013 17:26:39.803113 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 17:26:39 crc kubenswrapper[4720]: E1013 17:26:39.816349 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 17:26:40.316317412 +0000 UTC m=+145.773567544 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 17:26:39 crc kubenswrapper[4720]: I1013 17:26:39.818395 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-mm8n9" event={"ID":"7e7db006-6ea5-44bb-89ad-0d6a6a4810ca","Type":"ContainerStarted","Data":"6c4fcd5e35393c030b65f80d8f865c28949cad9967820cb6939ebb851ca959b3"} Oct 13 17:26:39 crc kubenswrapper[4720]: I1013 17:26:39.819009 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-vttl9" podStartSLOduration=119.818996131 podStartE2EDuration="1m59.818996131s" podCreationTimestamp="2025-10-13 17:24:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:26:39.778953448 +0000 UTC m=+145.236203570" watchObservedRunningTime="2025-10-13 17:26:39.818996131 +0000 UTC m=+145.276246263" Oct 13 17:26:39 crc kubenswrapper[4720]: I1013 17:26:39.821946 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mfblh" event={"ID":"3a01c34e-55fd-4492-8fbe-c15dbb539271","Type":"ContainerStarted","Data":"94d8977a5e1d958716cd2de88ff5110b8048be91ac2fdfb177ce3754ee3d189a"} Oct 13 17:26:39 crc kubenswrapper[4720]: I1013 17:26:39.824832 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-9snlb" event={"ID":"82e3c60a-715e-4dc6-8137-c79aa854de0a","Type":"ContainerStarted","Data":"8dd154db8018cff3b4542d6bc35374595e47b272fb24b4a4ecebe74647674231"} Oct 13 17:26:39 crc kubenswrapper[4720]: I1013 17:26:39.825647 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dl79l" event={"ID":"815faec7-4eeb-473a-b9ca-3ae41842aa02","Type":"ContainerStarted","Data":"1e33a27800eb9837906e6a7f42f8a3eeeb1bd8009308527f246a8a26adba3655"} Oct 13 17:26:39 crc kubenswrapper[4720]: I1013 17:26:39.826842 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mjzm6" event={"ID":"cabe6b29-bccd-4995-ab54-b6cabc86f7bf","Type":"ContainerStarted","Data":"2ee8daf2c5bb51d39cf74df70fd509f210c8b08270ba53364123faef47b6b998"} Oct 13 17:26:39 crc kubenswrapper[4720]: I1013 17:26:39.828444 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29339595-f78vr" 
event={"ID":"0a6d69da-3074-4b30-898b-4bb2eea1fb75","Type":"ContainerStarted","Data":"38f706bf13060c14021de43434fb358d4b0561b2d45d668d2ef483b998a3e303"} Oct 13 17:26:39 crc kubenswrapper[4720]: I1013 17:26:39.850459 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jd5bf" event={"ID":"17cf8543-d064-4d3e-976f-2fbad57f2e56","Type":"ContainerStarted","Data":"230c6da7a95815e50c963ab6e92ebef800d727dde6b2ffb48780086814a73433"} Oct 13 17:26:39 crc kubenswrapper[4720]: I1013 17:26:39.882498 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-hjhns" event={"ID":"b641d3d3-3e84-4b58-ba79-e1f941260618","Type":"ContainerStarted","Data":"4c1be14642ce22d2a8994a004bd6ff64207dd0e3372b96134c0b8d6e3575e6c9"} Oct 13 17:26:39 crc kubenswrapper[4720]: I1013 17:26:39.886735 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-9w7qp" event={"ID":"fcbfa827-1469-4afb-b411-c6015d3e3195","Type":"ContainerStarted","Data":"35a3b56512118eec69299208e534215433f00cf80298dbaec81b18037ca61e85"} Oct 13 17:26:39 crc kubenswrapper[4720]: I1013 17:26:39.889133 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sxcr7" event={"ID":"94c6e5cb-a6e1-453b-b6c2-0a4f397ced92","Type":"ContainerStarted","Data":"622beece4d54243d053cc1b625a13c7a4f2c8457ef0252e40c9e4aa2882bcc10"} Oct 13 17:26:39 crc kubenswrapper[4720]: I1013 17:26:39.889164 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sxcr7" event={"ID":"94c6e5cb-a6e1-453b-b6c2-0a4f397ced92","Type":"ContainerStarted","Data":"98a86df0cac6754f9b9b924d29ecf7acc2dcb1c41cf1b5d0b3bbceeb3942cfad"} Oct 13 17:26:39 crc kubenswrapper[4720]: I1013 17:26:39.890744 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7xzcq" event={"ID":"3f98514a-d643-4723-8d83-cda6c55d4874","Type":"ContainerStarted","Data":"7b428c9611177b57a3c86ccb8877c2b54717b8118ba967f0b890a331ba45f9de"} Oct 13 17:26:39 crc kubenswrapper[4720]: I1013 17:26:39.891393 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7xzcq" Oct 13 17:26:39 crc kubenswrapper[4720]: I1013 17:26:39.892708 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7vb4h" event={"ID":"f8b9591c-6d8d-4942-aa6c-f16b6b5f4992","Type":"ContainerStarted","Data":"cf5cf9a232b4a4d1efa2836271d2f6f92407986581d7e153f1e0e5a4eed5a1f9"} Oct 13 17:26:39 crc kubenswrapper[4720]: I1013 17:26:39.895168 4720 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-7xzcq container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body= Oct 13 17:26:39 crc kubenswrapper[4720]: I1013 17:26:39.895289 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7xzcq" podUID="3f98514a-d643-4723-8d83-cda6c55d4874" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: 
connect: connection refused" Oct 13 17:26:39 crc kubenswrapper[4720]: I1013 17:26:39.907997 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5xfl\" (UID: \"2c0351ce-4eed-4d0e-b9a4-eb2746bf396a\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5xfl" Oct 13 17:26:39 crc kubenswrapper[4720]: E1013 17:26:39.911226 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 17:26:40.411214405 +0000 UTC m=+145.868464537 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5xfl" (UID: "2c0351ce-4eed-4d0e-b9a4-eb2746bf396a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 17:26:39 crc kubenswrapper[4720]: I1013 17:26:39.911470 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4x4s2" event={"ID":"9bec609a-6eb7-479a-ae81-dd2d7acc1742","Type":"ContainerStarted","Data":"8373c6ddb6a4a575ac07cda3e21d2cad3e432d92cc8970973c2a2fd2516154ab"} Oct 13 17:26:39 crc kubenswrapper[4720]: I1013 17:26:39.913433 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-vttl9" Oct 13 17:26:39 crc kubenswrapper[4720]: I1013 17:26:39.917310 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-5tn27" event={"ID":"61e69b4f-7bcc-4ce4-9bd2-3200c0d0f883","Type":"ContainerStarted","Data":"35d62edb3bf1404996998dfbaeb12a3ef84b1753cfba4bfd7ac4c36052c8d9c8"} Oct 13 17:26:39 crc kubenswrapper[4720]: I1013 17:26:39.919845 4720 patch_prober.go:28] interesting pod/router-default-5444994796-vttl9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 13 17:26:39 crc kubenswrapper[4720]: [-]has-synced failed: reason withheld Oct 13 17:26:39 crc kubenswrapper[4720]: [+]process-running ok Oct 13 17:26:39 crc kubenswrapper[4720]: healthz check failed Oct 13 17:26:39 crc kubenswrapper[4720]: I1013 17:26:39.919895 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vttl9" podUID="a13b5004-f884-4558-b38b-b3c8028a73d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 13 17:26:39 crc kubenswrapper[4720]: I1013 17:26:39.939386 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-lwp58" event={"ID":"4e04c241-5f40-47d5-8c67-e8092a483089","Type":"ContainerStarted","Data":"b65588c90735197c422709ea288feba94a226ea0e2752a9af9203b106c328fd1"} Oct 13 17:26:39 crc kubenswrapper[4720]: I1013 17:26:39.940921 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-controller-manager/controller-manager-879f6c89f-lwp58" Oct 13 17:26:39 crc kubenswrapper[4720]: I1013 17:26:39.948872 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-lwp58" Oct 13 17:26:39 crc kubenswrapper[4720]: I1013 17:26:39.968099 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9klwx" podStartSLOduration=119.968080556 podStartE2EDuration="1m59.968080556s" podCreationTimestamp="2025-10-13 17:24:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:26:39.901870456 +0000 UTC m=+145.359120588" watchObservedRunningTime="2025-10-13 17:26:39.968080556 +0000 UTC m=+145.425330678" Oct 13 17:26:39 crc kubenswrapper[4720]: I1013 17:26:39.968210 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-scfnf" podStartSLOduration=6.968205339 podStartE2EDuration="6.968205339s" podCreationTimestamp="2025-10-13 17:26:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:26:39.967975753 +0000 UTC m=+145.425225885" watchObservedRunningTime="2025-10-13 17:26:39.968205339 +0000 UTC m=+145.425455471" Oct 13 17:26:39 crc kubenswrapper[4720]: I1013 17:26:39.979059 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vhgw6" event={"ID":"93c081dc-50a9-43b9-874c-0b7e46ebbbc9","Type":"ContainerStarted","Data":"ea8c7c25c7cdccd4589f1a0cf1aeabdc2e3488561513060d41b8893562ab2f6e"} Oct 13 17:26:39 crc kubenswrapper[4720]: I1013 17:26:39.979519 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vhgw6" Oct 13 17:26:39 crc kubenswrapper[4720]: I1013 17:26:39.982284 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fvlfw" event={"ID":"7277d186-5645-445d-aea3-37215cf98836","Type":"ContainerStarted","Data":"0bd813db0e02f9ade19bb210d65f04044e3332dbca9ed5b799710491ddb053ad"} Oct 13 17:26:40 crc kubenswrapper[4720]: I1013 17:26:40.015308 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 17:26:40 crc kubenswrapper[4720]: E1013 17:26:40.016733 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 17:26:40.516712298 +0000 UTC m=+145.973962430 (durationBeforeRetry 500ms). 
Oct 13 17:26:40 crc kubenswrapper[4720]: I1013 17:26:40.052479 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-5tn27" podStartSLOduration=120.0524637 podStartE2EDuration="2m0.0524637s" podCreationTimestamp="2025-10-13 17:24:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:26:40.051696101 +0000 UTC m=+145.508946233" watchObservedRunningTime="2025-10-13 17:26:40.0524637 +0000 UTC m=+145.509713832"
Oct 13 17:26:40 crc kubenswrapper[4720]: I1013 17:26:40.057582 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-xpdm2" event={"ID":"5b9cfa7f-e80a-42b8-b6f0-239165447812","Type":"ContainerStarted","Data":"d903ac75d4bb20e2c8d8bdbf3c6afe828dd45ea657c0f7c6ef565f941a5e1515"}
Oct 13 17:26:40 crc kubenswrapper[4720]: I1013 17:26:40.057617 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-xpdm2"
Oct 13 17:26:40 crc kubenswrapper[4720]: I1013 17:26:40.057628 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-xpdm2" event={"ID":"5b9cfa7f-e80a-42b8-b6f0-239165447812","Type":"ContainerStarted","Data":"e9464d85a814f7037d9ce53de7cdb05e479434ca1fb41dec8620b72efc11a98b"}
Oct 13 17:26:40 crc kubenswrapper[4720]: I1013 17:26:40.060923 4720 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-xpdm2 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" start-of-body=
Oct 13 17:26:40 crc kubenswrapper[4720]: I1013 17:26:40.060968 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-xpdm2" podUID="5b9cfa7f-e80a-42b8-b6f0-239165447812" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused"
Oct 13 17:26:40 crc kubenswrapper[4720]: I1013 17:26:40.061054 4720 patch_prober.go:28] interesting pod/downloads-7954f5f757-l8tzk container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body=
Oct 13 17:26:40 crc kubenswrapper[4720]: I1013 17:26:40.061071 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-l8tzk" podUID="e33fd1d2-081b-4e68-ab37-623406daeaeb" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused"
Oct 13 17:26:40 crc kubenswrapper[4720]: I1013 17:26:40.072549 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5kh7f" podStartSLOduration=121.072528332 podStartE2EDuration="2m1.072528332s" podCreationTimestamp="2025-10-13 17:24:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:26:40.014820139 +0000 UTC m=+145.472070271" watchObservedRunningTime="2025-10-13 17:26:40.072528332 +0000 UTC m=+145.529778464"
Oct 13 17:26:40 crc kubenswrapper[4720]: I1013 17:26:40.133995 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5xfl\" (UID: \"2c0351ce-4eed-4d0e-b9a4-eb2746bf396a\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5xfl"
Oct 13 17:26:40 crc kubenswrapper[4720]: E1013 17:26:40.137224 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 17:26:40.637209824 +0000 UTC m=+146.094459956 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5xfl" (UID: "2c0351ce-4eed-4d0e-b9a4-eb2746bf396a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 13 17:26:40 crc kubenswrapper[4720]: I1013 17:26:40.150570 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-sxcr7" podStartSLOduration=120.150551324 podStartE2EDuration="2m0.150551324s" podCreationTimestamp="2025-10-13 17:24:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:26:40.105640668 +0000 UTC m=+145.562890800" watchObservedRunningTime="2025-10-13 17:26:40.150551324 +0000 UTC m=+145.607801456"
Oct 13 17:26:40 crc kubenswrapper[4720]: I1013 17:26:40.154428 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-mm8n9" podStartSLOduration=121.154415853 podStartE2EDuration="2m1.154415853s" podCreationTimestamp="2025-10-13 17:24:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:26:40.145736941 +0000 UTC m=+145.602987073" watchObservedRunningTime="2025-10-13 17:26:40.154415853 +0000 UTC m=+145.611665985"
Oct 13 17:26:40 crc kubenswrapper[4720]: I1013 17:26:40.190585 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-jd5bf" podStartSLOduration=121.190549425 podStartE2EDuration="2m1.190549425s" podCreationTimestamp="2025-10-13 17:24:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:26:40.171914139 +0000 UTC m=+145.629164281" watchObservedRunningTime="2025-10-13 17:26:40.190549425 +0000 UTC m=+145.647799557"
Oct 13 17:26:40 crc kubenswrapper[4720]: I1013 17:26:40.234853 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 13 17:26:40 crc kubenswrapper[4720]: E1013 17:26:40.236125 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 17:26:40.736110078 +0000 UTC m=+146.193360210 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 13 17:26:40 crc kubenswrapper[4720]: I1013 17:26:40.281921 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fvlfw" podStartSLOduration=121.281893137 podStartE2EDuration="2m1.281893137s" podCreationTimestamp="2025-10-13 17:24:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:26:40.221507755 +0000 UTC m=+145.678757887" watchObservedRunningTime="2025-10-13 17:26:40.281893137 +0000 UTC m=+145.739143269"
Oct 13 17:26:40 crc kubenswrapper[4720]: I1013 17:26:40.284389 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-9snlb" podStartSLOduration=120.28438134 podStartE2EDuration="2m0.28438134s" podCreationTimestamp="2025-10-13 17:24:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:26:40.280576703 +0000 UTC m=+145.737826835" watchObservedRunningTime="2025-10-13 17:26:40.28438134 +0000 UTC m=+145.741631472"
Oct 13 17:26:40 crc kubenswrapper[4720]: I1013 17:26:40.323921 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-lwp58" podStartSLOduration=121.323902109 podStartE2EDuration="2m1.323902109s" podCreationTimestamp="2025-10-13 17:24:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:26:40.323048507 +0000 UTC m=+145.780298639" watchObservedRunningTime="2025-10-13 17:26:40.323902109 +0000 UTC m=+145.781152241"
Oct 13 17:26:40 crc kubenswrapper[4720]: I1013 17:26:40.340867 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5xfl\" (UID: \"2c0351ce-4eed-4d0e-b9a4-eb2746bf396a\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5xfl"
Oct 13 17:26:40 crc kubenswrapper[4720]: E1013 17:26:40.341550 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 17:26:40.841534319 +0000 UTC m=+146.298784451 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5xfl" (UID: "2c0351ce-4eed-4d0e-b9a4-eb2746bf396a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 13 17:26:40 crc kubenswrapper[4720]: I1013 17:26:40.342552 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9klwx"
Oct 13 17:26:40 crc kubenswrapper[4720]: I1013 17:26:40.360487 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vhgw6" podStartSLOduration=120.360460442 podStartE2EDuration="2m0.360460442s" podCreationTimestamp="2025-10-13 17:24:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:26:40.359892318 +0000 UTC m=+145.817142450" watchObservedRunningTime="2025-10-13 17:26:40.360460442 +0000 UTC m=+145.817710574"
Oct 13 17:26:40 crc kubenswrapper[4720]: I1013 17:26:40.436610 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-9w7qp" podStartSLOduration=7.436584986 podStartE2EDuration="7.436584986s" podCreationTimestamp="2025-10-13 17:26:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:26:40.395755273 +0000 UTC m=+145.853005395" watchObservedRunningTime="2025-10-13 17:26:40.436584986 +0000 UTC m=+145.893835118"
Oct 13 17:26:40 crc kubenswrapper[4720]: I1013 17:26:40.439526 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7vb4h" podStartSLOduration=121.43951502 podStartE2EDuration="2m1.43951502s" podCreationTimestamp="2025-10-13 17:24:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:26:40.435850207 +0000 UTC m=+145.893100339" watchObservedRunningTime="2025-10-13 17:26:40.43951502 +0000 UTC m=+145.896765152"
Oct 13 17:26:40 crc kubenswrapper[4720]: I1013 17:26:40.441686 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 13 17:26:40 crc kubenswrapper[4720]: E1013 17:26:40.442621 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 17:26:40.942598469 +0000 UTC m=+146.399848601 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 13 17:26:40 crc kubenswrapper[4720]: I1013 17:26:40.442837 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5xfl\" (UID: \"2c0351ce-4eed-4d0e-b9a4-eb2746bf396a\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5xfl"
Oct 13 17:26:40 crc kubenswrapper[4720]: E1013 17:26:40.443346 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 17:26:40.943337868 +0000 UTC m=+146.400588000 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5xfl" (UID: "2c0351ce-4eed-4d0e-b9a4-eb2746bf396a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 13 17:26:40 crc kubenswrapper[4720]: I1013 17:26:40.464302 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mjzm6" podStartSLOduration=120.464287503 podStartE2EDuration="2m0.464287503s" podCreationTimestamp="2025-10-13 17:24:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:26:40.463250666 +0000 UTC m=+145.920500798" watchObservedRunningTime="2025-10-13 17:26:40.464287503 +0000 UTC m=+145.921537635"
Oct 13 17:26:40 crc kubenswrapper[4720]: I1013 17:26:40.513215 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mg7s9" podStartSLOduration=120.513179471 podStartE2EDuration="2m0.513179471s" podCreationTimestamp="2025-10-13 17:24:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:26:40.512006591 +0000 UTC m=+145.969256723" watchObservedRunningTime="2025-10-13 17:26:40.513179471 +0000 UTC m=+145.970429603"
Oct 13 17:26:40 crc kubenswrapper[4720]: I1013 17:26:40.514002 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29339595-f78vr" podStartSLOduration=121.513995402 podStartE2EDuration="2m1.513995402s" podCreationTimestamp="2025-10-13 17:24:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:26:40.486958592 +0000 UTC m=+145.944208724" watchObservedRunningTime="2025-10-13 17:26:40.513995402 +0000 UTC m=+145.971245534"
Oct 13 17:26:40 crc kubenswrapper[4720]: I1013 17:26:40.544050 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 13 17:26:40 crc kubenswrapper[4720]: E1013 17:26:40.547000 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 17:26:41.046950673 +0000 UTC m=+146.504200805 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 13 17:26:40 crc kubenswrapper[4720]: I1013 17:26:40.646868 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5xfl\" (UID: \"2c0351ce-4eed-4d0e-b9a4-eb2746bf396a\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5xfl"
Oct 13 17:26:40 crc kubenswrapper[4720]: E1013 17:26:40.647446 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 17:26:41.147435278 +0000 UTC m=+146.604685410 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5xfl" (UID: "2c0351ce-4eed-4d0e-b9a4-eb2746bf396a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 13 17:26:40 crc kubenswrapper[4720]: I1013 17:26:40.649887 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7xzcq" podStartSLOduration=120.64986835 podStartE2EDuration="2m0.64986835s" podCreationTimestamp="2025-10-13 17:24:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:26:40.642582224 +0000 UTC m=+146.099832356" watchObservedRunningTime="2025-10-13 17:26:40.64986835 +0000 UTC m=+146.107118482"
Oct 13 17:26:40 crc kubenswrapper[4720]: I1013 17:26:40.714418 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-xpdm2" podStartSLOduration=120.714400738 podStartE2EDuration="2m0.714400738s" podCreationTimestamp="2025-10-13 17:24:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:26:40.70863764 +0000 UTC m=+146.165887772" watchObservedRunningTime="2025-10-13 17:26:40.714400738 +0000 UTC m=+146.171650870"
Oct 13 17:26:40 crc kubenswrapper[4720]: I1013 17:26:40.748524 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 13 17:26:40 crc kubenswrapper[4720]: E1013 17:26:40.748761 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 17:26:41.248747094 +0000 UTC m=+146.705997226 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 13 17:26:40 crc kubenswrapper[4720]: I1013 17:26:40.851879 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5xfl\" (UID: \"2c0351ce-4eed-4d0e-b9a4-eb2746bf396a\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5xfl"
Oct 13 17:26:40 crc kubenswrapper[4720]: E1013 17:26:40.852205 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 17:26:41.352178305 +0000 UTC m=+146.809428437 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5xfl" (UID: "2c0351ce-4eed-4d0e-b9a4-eb2746bf396a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 13 17:26:40 crc kubenswrapper[4720]: I1013 17:26:40.924642 4720 patch_prober.go:28] interesting pod/router-default-5444994796-vttl9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 13 17:26:40 crc kubenswrapper[4720]: [-]has-synced failed: reason withheld
Oct 13 17:26:40 crc kubenswrapper[4720]: [+]process-running ok
Oct 13 17:26:40 crc kubenswrapper[4720]: healthz check failed
Oct 13 17:26:40 crc kubenswrapper[4720]: I1013 17:26:40.924726 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vttl9" podUID="a13b5004-f884-4558-b38b-b3c8028a73d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 13 17:26:40 crc kubenswrapper[4720]: I1013 17:26:40.936748 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-mm8n9"
Oct 13 17:26:40 crc kubenswrapper[4720]: I1013 17:26:40.936852 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-mm8n9"
Oct 13 17:26:40 crc kubenswrapper[4720]: I1013 17:26:40.953317 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 13 17:26:40 crc kubenswrapper[4720]: E1013 17:26:40.953825 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 17:26:41.453797939 +0000 UTC m=+146.911048071 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 13 17:26:40 crc kubenswrapper[4720]: I1013 17:26:40.953993 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5xfl\" (UID: \"2c0351ce-4eed-4d0e-b9a4-eb2746bf396a\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5xfl"
Oct 13 17:26:40 crc kubenswrapper[4720]: E1013 17:26:40.954424 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 17:26:41.454404204 +0000 UTC m=+146.911654336 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5xfl" (UID: "2c0351ce-4eed-4d0e-b9a4-eb2746bf396a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 13 17:26:41 crc kubenswrapper[4720]: I1013 17:26:41.055897 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 13 17:26:41 crc kubenswrapper[4720]: E1013 17:26:41.056070 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 17:26:41.556039909 +0000 UTC m=+147.013290041 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 13 17:26:41 crc kubenswrapper[4720]: I1013 17:26:41.056602 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5xfl\" (UID: \"2c0351ce-4eed-4d0e-b9a4-eb2746bf396a\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5xfl"
Oct 13 17:26:41 crc kubenswrapper[4720]: E1013 17:26:41.057021 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 17:26:41.557004653 +0000 UTC m=+147.014254785 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5xfl" (UID: "2c0351ce-4eed-4d0e-b9a4-eb2746bf396a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 13 17:26:41 crc kubenswrapper[4720]: I1013 17:26:41.063157 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jw72g" event={"ID":"44e9378e-be31-4703-b23c-f7ffdbc89a2b","Type":"ContainerStarted","Data":"5a135bb7b85effd92690ec2de427d387d36a40f58256dca7631a1e40093136b8"}
Oct 13 17:26:41 crc kubenswrapper[4720]: I1013 17:26:41.063237 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jw72g" event={"ID":"44e9378e-be31-4703-b23c-f7ffdbc89a2b","Type":"ContainerStarted","Data":"d704ebb08b5cc35f30cdb766ceb90481e8547108f9542a71196e6a443859611e"}
Oct 13 17:26:41 crc kubenswrapper[4720]: I1013 17:26:41.064979 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dl79l" event={"ID":"815faec7-4eeb-473a-b9ca-3ae41842aa02","Type":"ContainerStarted","Data":"c1a28c1a3e2e97d6b596339933ad97ee81277f00b6ed5c0bcfb8000a03cab682"}
Oct 13 17:26:41 crc kubenswrapper[4720]: I1013 17:26:41.066498 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-pmrm5" event={"ID":"d0e58b14-51d4-4b94-ba16-1058cde1dda1","Type":"ContainerStarted","Data":"a0dc38959afc0564fefbc806cfa05e1d623aab08cd128c8bd0ca5441bf11a221"}
Oct 13 17:26:41 crc kubenswrapper[4720]: I1013 17:26:41.066527 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-pmrm5" event={"ID":"d0e58b14-51d4-4b94-ba16-1058cde1dda1","Type":"ContainerStarted","Data":"07872cac00895f9fb99499ae7b1d86c1e331d992b1fd981669205ce4f4ca0c94"}
Oct 13 17:26:41 crc kubenswrapper[4720]: I1013 17:26:41.068254 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-jvfrr" event={"ID":"90197382-1e9f-4823-a4e7-92fafeb46d66","Type":"ContainerStarted","Data":"c2fa335c5fc09429696b0e768842caac81a58e25a580dd5654adce79034489a3"}
Oct 13 17:26:41 crc kubenswrapper[4720]: I1013 17:26:41.069682 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-hjhns" event={"ID":"b641d3d3-3e84-4b58-ba79-e1f941260618","Type":"ContainerStarted","Data":"54bb16c2eaeb33fb4017f86c56593da06397bbf6a24cf45a04b7deb2d513e6e3"}
Oct 13 17:26:41 crc kubenswrapper[4720]: I1013 17:26:41.069727 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-hjhns" event={"ID":"b641d3d3-3e84-4b58-ba79-e1f941260618","Type":"ContainerStarted","Data":"7d1dc019cb73e863fff9b69881b810642e70e48f3f1e4bade2f810b101eb0e4d"}
Oct 13 17:26:41 crc kubenswrapper[4720]: I1013 17:26:41.071124 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vhgw6" event={"ID":"93c081dc-50a9-43b9-874c-0b7e46ebbbc9","Type":"ContainerStarted","Data":"e6590ba3e770bdcb23a8790c280fd5dc62718a5b98c7811a66f0635f08d60be5"}
Oct 13 17:26:41 crc kubenswrapper[4720]: I1013 17:26:41.072480 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mfblh" event={"ID":"3a01c34e-55fd-4492-8fbe-c15dbb539271","Type":"ContainerStarted","Data":"a01ad71d36a3de0e64fb6414085180ac5a676ef3841047cb107318bbdc1c709c"}
Oct 13 17:26:41 crc kubenswrapper[4720]: I1013 17:26:41.073592 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4x4s2" event={"ID":"9bec609a-6eb7-479a-ae81-dd2d7acc1742","Type":"ContainerStarted","Data":"97ced39af09a08b299efe7333c3c570205dee4e67dadfca66bcdece8b4aaca94"}
Oct 13 17:26:41 crc kubenswrapper[4720]: I1013 17:26:41.075440 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pc8vw" event={"ID":"b2df17f0-8c2c-4ca1-afe3-7b82dcb27712","Type":"ContainerStarted","Data":"8de69a0107f8512592a2c6df1e3e0ba6cbc6b640d3569a03ae10e57bfb3eda36"}
Oct 13 17:26:41 crc kubenswrapper[4720]: I1013 17:26:41.075873 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pc8vw"
Oct 13 17:26:41 crc kubenswrapper[4720]: I1013 17:26:41.077209 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-9tdzd" event={"ID":"e2f98220-0477-4a6d-8cf3-fa055d01bc3a","Type":"ContainerStarted","Data":"af45dbc3f638dc67de31ac629be915eb1fc5e8c7de06792db9502619bab01066"}
Oct 13 17:26:41 crc kubenswrapper[4720]: I1013 17:26:41.078716 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-9w7qp" event={"ID":"fcbfa827-1469-4afb-b411-c6015d3e3195","Type":"ContainerStarted","Data":"535dfea9df697ffe94d09a4c7062b1e3581dbb255239f22d32e83d7bc6fa5a92"}
Oct 13 17:26:41 crc kubenswrapper[4720]: I1013 17:26:41.080939 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-mjzm6" event={"ID":"cabe6b29-bccd-4995-ab54-b6cabc86f7bf","Type":"ContainerStarted","Data":"263ecb11ed333371859239e37762000fb4be7bbd80e1dff2c3b47c6309297bb6"}
Oct 13 17:26:41 crc kubenswrapper[4720]: I1013 17:26:41.086645 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29339595-f78vr" event={"ID":"0a6d69da-3074-4b30-898b-4bb2eea1fb75","Type":"ContainerStarted","Data":"f3e5ed6dd43e4d22a6c59b3c81a2671caa27241e6072846ed5c0838f28f48a31"}
Oct 13 17:26:41 crc kubenswrapper[4720]: I1013 17:26:41.088999 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-7vb4h" event={"ID":"f8b9591c-6d8d-4942-aa6c-f16b6b5f4992","Type":"ContainerStarted","Data":"a09fa9f908bed82bec5f0ce479313704f6c799ec241ac18f09178117a1a982be"}
Oct 13 17:26:41 crc kubenswrapper[4720]: I1013 17:26:41.090529 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b5ljk" event={"ID":"f94b07c7-b111-450a-bd2a-0977944282a9","Type":"ContainerStarted","Data":"769a577e479ca5d22c9c891fa44064d576f81dcaa801a3e5d218ab7b0812b3d8"}
Oct 13 17:26:41 crc kubenswrapper[4720]: I1013 17:26:41.092983 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vm4rv" event={"ID":"599353db-9adf-4393-9920-4de023b156c4","Type":"ContainerStarted","Data":"10798aa32841fd0a1a087b74457b3a9f78288a24d62d48489385458f06aa8ca4"}
Oct 13 17:26:41 crc kubenswrapper[4720]: I1013 17:26:41.094617 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-5tn27" event={"ID":"61e69b4f-7bcc-4ce4-9bd2-3200c0d0f883","Type":"ContainerStarted","Data":"ea972fba7d649934e2583a8f14eb6b1c41e77150bd08895cdade90c4f2918c64"}
Oct 13 17:26:41 crc kubenswrapper[4720]: I1013 17:26:41.095963 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7xzcq" event={"ID":"3f98514a-d643-4723-8d83-cda6c55d4874","Type":"ContainerStarted","Data":"76e71f0f3367d3cf3311d1169834f7e5c9aed916bf9ef3d4b41a622e9b4e3309"}
Oct 13 17:26:41 crc kubenswrapper[4720]: I1013 17:26:41.101597 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mg7s9" event={"ID":"0ee94e27-af7c-49c7-a57f-8f1a18ba53e3","Type":"ContainerStarted","Data":"9bfeb63aebe69dc0a67043e4a35cb70b377e1dcb126d609704bb131bf7f548fe"}
Oct 13 17:26:41 crc kubenswrapper[4720]: I1013 17:26:41.103721 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-fvlfw" event={"ID":"7277d186-5645-445d-aea3-37215cf98836","Type":"ContainerStarted","Data":"81f79604b0ede90a712af178ccae72b06838c00e329eedf56ebcc397d222d7cf"}
Oct 13 17:26:41 crc kubenswrapper[4720]: I1013 17:26:41.104453 4720 patch_prober.go:28] interesting pod/downloads-7954f5f757-l8tzk container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body=
Oct 13 17:26:41 crc kubenswrapper[4720]: I1013 17:26:41.104495 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-l8tzk" podUID="e33fd1d2-081b-4e68-ab37-623406daeaeb" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused"
Oct 13 17:26:41 crc kubenswrapper[4720]: I1013 17:26:41.104699 4720 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-xpdm2 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" start-of-body=
Oct 13 17:26:41 crc kubenswrapper[4720]: I1013 17:26:41.104829 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-xpdm2" podUID="5b9cfa7f-e80a-42b8-b6f0-239165447812" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused"
Oct 13 17:26:41 crc kubenswrapper[4720]: I1013 17:26:41.114165 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-jrc4l"
Oct 13 17:26:41 crc kubenswrapper[4720]: I1013 17:26:41.126232 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-jw72g" podStartSLOduration=121.12620933 podStartE2EDuration="2m1.12620933s" podCreationTimestamp="2025-10-13 17:24:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:26:41.101417507 +0000 UTC m=+146.558667639" watchObservedRunningTime="2025-10-13 17:26:41.12620933 +0000 UTC m=+146.583459462"
Oct 13 17:26:41 crc kubenswrapper[4720]: I1013 17:26:41.127263 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pc8vw" podStartSLOduration=122.127257617 podStartE2EDuration="2m2.127257617s" podCreationTimestamp="2025-10-13 17:24:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:26:41.125920632 +0000 UTC m=+146.583170764" watchObservedRunningTime="2025-10-13 17:26:41.127257617 +0000 UTC m=+146.584507749"
Oct 13 17:26:41 crc kubenswrapper[4720]: I1013 17:26:41.154480 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7xzcq"
Oct 13 17:26:41 crc kubenswrapper[4720]: I1013 17:26:41.158306 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 13 17:26:41 crc kubenswrapper[4720]: E1013 17:26:41.160760 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 17:26:41.660738071 +0000 UTC m=+147.117988203 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 13 17:26:41 crc kubenswrapper[4720]: I1013 17:26:41.168805 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5xfl\" (UID: \"2c0351ce-4eed-4d0e-b9a4-eb2746bf396a\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5xfl"
Oct 13 17:26:41 crc kubenswrapper[4720]: E1013 17:26:41.170062 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 17:26:41.670047149 +0000 UTC m=+147.127297281 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5xfl" (UID: "2c0351ce-4eed-4d0e-b9a4-eb2746bf396a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 13 17:26:41 crc kubenswrapper[4720]: I1013 17:26:41.239012 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dl79l" podStartSLOduration=121.238997519 podStartE2EDuration="2m1.238997519s" podCreationTimestamp="2025-10-13 17:24:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:26:41.228490471 +0000 UTC m=+146.685740593" watchObservedRunningTime="2025-10-13 17:26:41.238997519 +0000 UTC m=+146.696247651"
Oct 13 17:26:41 crc kubenswrapper[4720]: I1013 17:26:41.239546 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-jvfrr" podStartSLOduration=122.239542093 podStartE2EDuration="2m2.239542093s" podCreationTimestamp="2025-10-13 17:24:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:26:41.192773819 +0000 UTC m=+146.650023951" watchObservedRunningTime="2025-10-13 17:26:41.239542093 +0000 UTC m=+146.696792225"
Oct 13 17:26:41 crc kubenswrapper[4720]: I1013 17:26:41.272569 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 13 17:26:41 crc kubenswrapper[4720]: E1013 17:26:41.274083 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 17:26:41.774062644 +0000 UTC m=+147.231312776 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 13 17:26:41 crc kubenswrapper[4720]: I1013 17:26:41.368875 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-hjhns" podStartSLOduration=121.368860743 podStartE2EDuration="2m1.368860743s" podCreationTimestamp="2025-10-13 17:24:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:26:41.367595521 +0000 UTC m=+146.824845653" watchObservedRunningTime="2025-10-13 17:26:41.368860743 +0000 UTC m=+146.826110865"
Oct 13 17:26:41 crc kubenswrapper[4720]: I1013 17:26:41.369585 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-pmrm5" podStartSLOduration=7.369578261 podStartE2EDuration="7.369578261s" podCreationTimestamp="2025-10-13 17:26:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:26:41.321551855 +0000 UTC m=+146.778801987" watchObservedRunningTime="2025-10-13 17:26:41.369578261 +0000 UTC m=+146.826828393"
Oct 13 17:26:41 crc kubenswrapper[4720]: I1013 17:26:41.385421 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5xfl\" (UID: \"2c0351ce-4eed-4d0e-b9a4-eb2746bf396a\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5xfl"
Oct 13 17:26:41 crc kubenswrapper[4720]: E1013 17:26:41.385934 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 17:26:41.885912718 +0000 UTC m=+147.343162850 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5xfl" (UID: "2c0351ce-4eed-4d0e-b9a4-eb2746bf396a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 13 17:26:41 crc kubenswrapper[4720]: I1013 17:26:41.416978 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mg7s9"
Oct 13 17:26:41 crc kubenswrapper[4720]: I1013 17:26:41.417304 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mg7s9"
Oct 13 17:26:41 crc kubenswrapper[4720]: I1013 17:26:41.490831 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 13 17:26:41 crc kubenswrapper[4720]: E1013 17:26:41.491393 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 17:26:41.991377361 +0000 UTC m=+147.448627493 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 13 17:26:41 crc kubenswrapper[4720]: I1013 17:26:41.518955 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4x4s2" podStartSLOduration=121.518937494 podStartE2EDuration="2m1.518937494s" podCreationTimestamp="2025-10-13 17:24:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:26:41.455736131 +0000 UTC m=+146.912986263" watchObservedRunningTime="2025-10-13 17:26:41.518937494 +0000 UTC m=+146.976187626"
Oct 13 17:26:41 crc kubenswrapper[4720]: I1013 17:26:41.567207 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mfblh" podStartSLOduration=121.567179796 podStartE2EDuration="2m1.567179796s" podCreationTimestamp="2025-10-13 17:24:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:26:41.562511916 +0000 UTC m=+147.019762048" watchObservedRunningTime="2025-10-13 17:26:41.567179796 +0000 UTC m=+147.024429928"
Oct 13 17:26:41 crc kubenswrapper[4720]: I1013 17:26:41.567286 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-b5ljk" podStartSLOduration=121.567283328 podStartE2EDuration="2m1.567283328s" podCreationTimestamp="2025-10-13 17:24:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:26:41.520850643 +0000 UTC m=+146.978100775" watchObservedRunningTime="2025-10-13 17:26:41.567283328 +0000 UTC m=+147.024533450"
Oct 13 17:26:41 crc kubenswrapper[4720]: I1013 17:26:41.592342 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5xfl\" (UID: \"2c0351ce-4eed-4d0e-b9a4-eb2746bf396a\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5xfl"
Oct 13 17:26:41 crc kubenswrapper[4720]: E1013 17:26:41.592716 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 17:26:42.092704557 +0000 UTC m=+147.549954689 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5xfl" (UID: "2c0351ce-4eed-4d0e-b9a4-eb2746bf396a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 13 17:26:41 crc kubenswrapper[4720]: I1013 17:26:41.673763 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vm4rv" podStartSLOduration=121.673747826 podStartE2EDuration="2m1.673747826s" podCreationTimestamp="2025-10-13 17:24:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:26:41.611975719 +0000 UTC m=+147.069225851" watchObservedRunningTime="2025-10-13 17:26:41.673747826 +0000 UTC m=+147.130997958"
Oct 13 17:26:41 crc kubenswrapper[4720]: I1013 17:26:41.696621 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 13 17:26:41 crc kubenswrapper[4720]: E1013 17:26:41.696917 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 17:26:42.196902967 +0000 UTC m=+147.654153089 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 13 17:26:41 crc kubenswrapper[4720]: I1013 17:26:41.798624 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5xfl\" (UID: \"2c0351ce-4eed-4d0e-b9a4-eb2746bf396a\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5xfl"
Oct 13 17:26:41 crc kubenswrapper[4720]: E1013 17:26:41.800904 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 17:26:42.300884481 +0000 UTC m=+147.758134623 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5xfl" (UID: "2c0351ce-4eed-4d0e-b9a4-eb2746bf396a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 13 17:26:41 crc kubenswrapper[4720]: I1013 17:26:41.864259 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mg7s9"
Oct 13 17:26:41 crc kubenswrapper[4720]: I1013 17:26:41.899520 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 13 17:26:41 crc kubenswrapper[4720]: E1013 17:26:41.899732 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 17:26:42.399701314 +0000 UTC m=+147.856951446 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 17:26:41 crc kubenswrapper[4720]: I1013 17:26:41.900162 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5xfl\" (UID: \"2c0351ce-4eed-4d0e-b9a4-eb2746bf396a\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5xfl" Oct 13 17:26:41 crc kubenswrapper[4720]: E1013 17:26:41.900669 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 17:26:42.400662578 +0000 UTC m=+147.857912710 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5xfl" (UID: "2c0351ce-4eed-4d0e-b9a4-eb2746bf396a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 17:26:41 crc kubenswrapper[4720]: I1013 17:26:41.920943 4720 patch_prober.go:28] interesting pod/router-default-5444994796-vttl9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 13 17:26:41 crc kubenswrapper[4720]: [-]has-synced failed: reason withheld Oct 13 17:26:41 crc kubenswrapper[4720]: [+]process-running ok Oct 13 17:26:41 crc kubenswrapper[4720]: healthz check failed Oct 13 17:26:41 crc kubenswrapper[4720]: I1013 17:26:41.921387 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vttl9" podUID="a13b5004-f884-4558-b38b-b3c8028a73d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 13 17:26:42 crc kubenswrapper[4720]: I1013 17:26:42.001924 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 17:26:42 crc kubenswrapper[4720]: E1013 17:26:42.002115 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 17:26:42.502086367 +0000 UTC m=+147.959336489 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 17:26:42 crc kubenswrapper[4720]: I1013 17:26:42.002426 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5xfl\" (UID: \"2c0351ce-4eed-4d0e-b9a4-eb2746bf396a\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5xfl" Oct 13 17:26:42 crc kubenswrapper[4720]: E1013 17:26:42.002846 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 17:26:42.502826756 +0000 UTC m=+147.960076888 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5xfl" (UID: "2c0351ce-4eed-4d0e-b9a4-eb2746bf396a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 17:26:42 crc kubenswrapper[4720]: I1013 17:26:42.103649 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 17:26:42 crc kubenswrapper[4720]: E1013 17:26:42.103831 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 17:26:42.603803344 +0000 UTC m=+148.061053476 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 17:26:42 crc kubenswrapper[4720]: I1013 17:26:42.104271 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5xfl\" (UID: \"2c0351ce-4eed-4d0e-b9a4-eb2746bf396a\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5xfl" Oct 13 17:26:42 crc kubenswrapper[4720]: E1013 17:26:42.104816 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 17:26:42.604788939 +0000 UTC m=+148.062039071 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5xfl" (UID: "2c0351ce-4eed-4d0e-b9a4-eb2746bf396a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 17:26:42 crc kubenswrapper[4720]: I1013 17:26:42.129833 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-9tdzd" event={"ID":"e2f98220-0477-4a6d-8cf3-fa055d01bc3a","Type":"ContainerStarted","Data":"66b06524dd73fa9fbe8d0759b46a131fca927cd1fbb263e2239fad7ad5be4d82"} Oct 13 17:26:42 crc kubenswrapper[4720]: I1013 17:26:42.129950 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-9tdzd" event={"ID":"e2f98220-0477-4a6d-8cf3-fa055d01bc3a","Type":"ContainerStarted","Data":"6f4d93e496872b42c769113edda8f4ec99d4741a82ebb02b6ffaf114e4b13f2f"} Oct 13 17:26:42 crc kubenswrapper[4720]: I1013 17:26:42.132929 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-pmrm5" Oct 13 17:26:42 crc kubenswrapper[4720]: I1013 17:26:42.142635 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mg7s9" Oct 13 17:26:42 crc kubenswrapper[4720]: I1013 17:26:42.193456 4720 patch_prober.go:28] interesting pod/apiserver-76f77b778f-mm8n9 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 13 17:26:42 crc kubenswrapper[4720]: [+]log ok Oct 13 17:26:42 crc kubenswrapper[4720]: [+]etcd ok Oct 13 17:26:42 crc kubenswrapper[4720]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 13 17:26:42 crc kubenswrapper[4720]: [+]poststarthook/generic-apiserver-start-informers ok Oct 13 17:26:42 crc kubenswrapper[4720]: [+]poststarthook/max-in-flight-filter ok Oct 13 17:26:42 crc kubenswrapper[4720]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 13 17:26:42 crc kubenswrapper[4720]: 
[+]poststarthook/image.openshift.io-apiserver-caches ok Oct 13 17:26:42 crc kubenswrapper[4720]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Oct 13 17:26:42 crc kubenswrapper[4720]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Oct 13 17:26:42 crc kubenswrapper[4720]: [+]poststarthook/project.openshift.io-projectcache ok Oct 13 17:26:42 crc kubenswrapper[4720]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 13 17:26:42 crc kubenswrapper[4720]: [+]poststarthook/openshift.io-startinformers ok Oct 13 17:26:42 crc kubenswrapper[4720]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 13 17:26:42 crc kubenswrapper[4720]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 13 17:26:42 crc kubenswrapper[4720]: livez check failed Oct 13 17:26:42 crc kubenswrapper[4720]: I1013 17:26:42.193517 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-mm8n9" podUID="7e7db006-6ea5-44bb-89ad-0d6a6a4810ca" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 13 17:26:42 crc kubenswrapper[4720]: I1013 17:26:42.205542 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 17:26:42 crc kubenswrapper[4720]: E1013 17:26:42.205754 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 17:26:42.705724676 +0000 UTC m=+148.162974808 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 17:26:42 crc kubenswrapper[4720]: I1013 17:26:42.206472 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5xfl\" (UID: \"2c0351ce-4eed-4d0e-b9a4-eb2746bf396a\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5xfl" Oct 13 17:26:42 crc kubenswrapper[4720]: E1013 17:26:42.209418 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 17:26:42.70940551 +0000 UTC m=+148.166655632 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5xfl" (UID: "2c0351ce-4eed-4d0e-b9a4-eb2746bf396a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 17:26:42 crc kubenswrapper[4720]: I1013 17:26:42.313881 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 17:26:42 crc kubenswrapper[4720]: E1013 17:26:42.314954 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 17:26:42.814930953 +0000 UTC m=+148.272181085 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 17:26:42 crc kubenswrapper[4720]: I1013 17:26:42.416133 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5xfl\" (UID: \"2c0351ce-4eed-4d0e-b9a4-eb2746bf396a\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5xfl" Oct 13 17:26:42 crc kubenswrapper[4720]: E1013 17:26:42.416471 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 17:26:42.916459175 +0000 UTC m=+148.373709307 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5xfl" (UID: "2c0351ce-4eed-4d0e-b9a4-eb2746bf396a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 17:26:42 crc kubenswrapper[4720]: I1013 17:26:42.516677 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 17:26:42 crc kubenswrapper[4720]: E1013 17:26:42.516897 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 17:26:43.016856758 +0000 UTC m=+148.474106890 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 17:26:42 crc kubenswrapper[4720]: I1013 17:26:42.517110 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5xfl\" (UID: \"2c0351ce-4eed-4d0e-b9a4-eb2746bf396a\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5xfl" Oct 13 17:26:42 crc kubenswrapper[4720]: E1013 17:26:42.517434 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 17:26:43.017421072 +0000 UTC m=+148.474671204 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5xfl" (UID: "2c0351ce-4eed-4d0e-b9a4-eb2746bf396a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 17:26:42 crc kubenswrapper[4720]: I1013 17:26:42.618636 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 17:26:42 crc kubenswrapper[4720]: E1013 17:26:42.618910 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 17:26:43.118864392 +0000 UTC m=+148.576114514 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 17:26:42 crc kubenswrapper[4720]: I1013 17:26:42.619272 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5xfl\" (UID: \"2c0351ce-4eed-4d0e-b9a4-eb2746bf396a\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5xfl" Oct 13 17:26:42 crc kubenswrapper[4720]: E1013 17:26:42.619637 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 17:26:43.119622521 +0000 UTC m=+148.576872653 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5xfl" (UID: "2c0351ce-4eed-4d0e-b9a4-eb2746bf396a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 17:26:42 crc kubenswrapper[4720]: I1013 17:26:42.720635 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 17:26:42 crc kubenswrapper[4720]: E1013 17:26:42.720809 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 17:26:43.220777214 +0000 UTC m=+148.678027336 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 17:26:42 crc kubenswrapper[4720]: I1013 17:26:42.720852 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5xfl\" (UID: \"2c0351ce-4eed-4d0e-b9a4-eb2746bf396a\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5xfl" Oct 13 17:26:42 crc kubenswrapper[4720]: E1013 17:26:42.721146 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 17:26:43.221133683 +0000 UTC m=+148.678383815 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5xfl" (UID: "2c0351ce-4eed-4d0e-b9a4-eb2746bf396a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 17:26:42 crc kubenswrapper[4720]: I1013 17:26:42.822509 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 17:26:42 crc kubenswrapper[4720]: E1013 17:26:42.822645 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 17:26:43.322625573 +0000 UTC m=+148.779875705 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 17:26:42 crc kubenswrapper[4720]: I1013 17:26:42.822788 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5xfl\" (UID: \"2c0351ce-4eed-4d0e-b9a4-eb2746bf396a\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5xfl" Oct 13 17:26:42 crc kubenswrapper[4720]: E1013 17:26:42.823036 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 17:26:43.323028554 +0000 UTC m=+148.780278676 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5xfl" (UID: "2c0351ce-4eed-4d0e-b9a4-eb2746bf396a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 17:26:42 crc kubenswrapper[4720]: I1013 17:26:42.915739 4720 patch_prober.go:28] interesting pod/router-default-5444994796-vttl9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 13 17:26:42 crc kubenswrapper[4720]: [-]has-synced failed: reason withheld Oct 13 17:26:42 crc kubenswrapper[4720]: [+]process-running ok Oct 13 17:26:42 crc kubenswrapper[4720]: healthz check failed Oct 13 17:26:42 crc kubenswrapper[4720]: I1013 17:26:42.915813 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vttl9" podUID="a13b5004-f884-4558-b38b-b3c8028a73d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 13 17:26:42 crc kubenswrapper[4720]: I1013 17:26:42.924374 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 17:26:42 crc kubenswrapper[4720]: E1013 17:26:42.924563 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 17:26:43.424529555 +0000 UTC m=+148.881779697 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 17:26:42 crc kubenswrapper[4720]: I1013 17:26:42.924706 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5xfl\" (UID: \"2c0351ce-4eed-4d0e-b9a4-eb2746bf396a\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5xfl" Oct 13 17:26:42 crc kubenswrapper[4720]: E1013 17:26:42.925113 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 17:26:43.425101649 +0000 UTC m=+148.882351781 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5xfl" (UID: "2c0351ce-4eed-4d0e-b9a4-eb2746bf396a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 17:26:42 crc kubenswrapper[4720]: I1013 17:26:42.982010 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6n8tc"] Oct 13 17:26:42 crc kubenswrapper[4720]: I1013 17:26:42.982975 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6n8tc" Oct 13 17:26:43 crc kubenswrapper[4720]: I1013 17:26:43.010680 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Oct 13 17:26:43 crc kubenswrapper[4720]: I1013 17:26:43.019923 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6n8tc"] Oct 13 17:26:43 crc kubenswrapper[4720]: I1013 17:26:43.025700 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 17:26:43 crc kubenswrapper[4720]: E1013 17:26:43.025910 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 17:26:43.525877652 +0000 UTC m=+148.983127774 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 17:26:43 crc kubenswrapper[4720]: I1013 17:26:43.026246 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5xfl\" (UID: \"2c0351ce-4eed-4d0e-b9a4-eb2746bf396a\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5xfl" Oct 13 17:26:43 crc kubenswrapper[4720]: I1013 17:26:43.026340 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45a4b9f0-8d20-4a69-bd03-2c54f1a66867-utilities\") pod \"certified-operators-6n8tc\" (UID: \"45a4b9f0-8d20-4a69-bd03-2c54f1a66867\") " pod="openshift-marketplace/certified-operators-6n8tc" Oct 13 17:26:43 crc kubenswrapper[4720]: I1013 17:26:43.026387 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45a4b9f0-8d20-4a69-bd03-2c54f1a66867-catalog-content\") pod \"certified-operators-6n8tc\" (UID: \"45a4b9f0-8d20-4a69-bd03-2c54f1a66867\") " pod="openshift-marketplace/certified-operators-6n8tc" Oct 13 17:26:43 crc kubenswrapper[4720]: I1013 17:26:43.026471 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62dvw\" (UniqueName: \"kubernetes.io/projected/45a4b9f0-8d20-4a69-bd03-2c54f1a66867-kube-api-access-62dvw\") pod \"certified-operators-6n8tc\" (UID: \"45a4b9f0-8d20-4a69-bd03-2c54f1a66867\") " pod="openshift-marketplace/certified-operators-6n8tc" Oct 13 17:26:43 crc kubenswrapper[4720]: E1013 17:26:43.026644 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 17:26:43.526634541 +0000 UTC m=+148.983884673 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5xfl" (UID: "2c0351ce-4eed-4d0e-b9a4-eb2746bf396a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 17:26:43 crc kubenswrapper[4720]: I1013 17:26:43.083537 4720 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Oct 13 17:26:43 crc kubenswrapper[4720]: I1013 17:26:43.127182 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 17:26:43 crc kubenswrapper[4720]: E1013 17:26:43.127358 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 17:26:43.627337432 +0000 UTC m=+149.084587564 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 17:26:43 crc kubenswrapper[4720]: I1013 17:26:43.127392 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5xfl\" (UID: \"2c0351ce-4eed-4d0e-b9a4-eb2746bf396a\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5xfl" Oct 13 17:26:43 crc kubenswrapper[4720]: I1013 17:26:43.127450 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45a4b9f0-8d20-4a69-bd03-2c54f1a66867-utilities\") pod \"certified-operators-6n8tc\" (UID: \"45a4b9f0-8d20-4a69-bd03-2c54f1a66867\") " pod="openshift-marketplace/certified-operators-6n8tc" Oct 13 17:26:43 crc kubenswrapper[4720]: I1013 17:26:43.127471 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 17:26:43 crc kubenswrapper[4720]: I1013 17:26:43.127493 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45a4b9f0-8d20-4a69-bd03-2c54f1a66867-catalog-content\") pod \"certified-operators-6n8tc\" (UID: \"45a4b9f0-8d20-4a69-bd03-2c54f1a66867\") " 
pod="openshift-marketplace/certified-operators-6n8tc" Oct 13 17:26:43 crc kubenswrapper[4720]: I1013 17:26:43.127509 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 17:26:43 crc kubenswrapper[4720]: I1013 17:26:43.127535 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62dvw\" (UniqueName: \"kubernetes.io/projected/45a4b9f0-8d20-4a69-bd03-2c54f1a66867-kube-api-access-62dvw\") pod \"certified-operators-6n8tc\" (UID: \"45a4b9f0-8d20-4a69-bd03-2c54f1a66867\") " pod="openshift-marketplace/certified-operators-6n8tc" Oct 13 17:26:43 crc kubenswrapper[4720]: I1013 17:26:43.127553 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 17:26:43 crc kubenswrapper[4720]: I1013 17:26:43.127576 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 17:26:43 crc kubenswrapper[4720]: E1013 17:26:43.127731 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 17:26:43.627721032 +0000 UTC m=+149.084971164 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5xfl" (UID: "2c0351ce-4eed-4d0e-b9a4-eb2746bf396a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 17:26:43 crc kubenswrapper[4720]: I1013 17:26:43.127946 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45a4b9f0-8d20-4a69-bd03-2c54f1a66867-utilities\") pod \"certified-operators-6n8tc\" (UID: \"45a4b9f0-8d20-4a69-bd03-2c54f1a66867\") " pod="openshift-marketplace/certified-operators-6n8tc" Oct 13 17:26:43 crc kubenswrapper[4720]: I1013 17:26:43.128357 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45a4b9f0-8d20-4a69-bd03-2c54f1a66867-catalog-content\") pod \"certified-operators-6n8tc\" (UID: \"45a4b9f0-8d20-4a69-bd03-2c54f1a66867\") " pod="openshift-marketplace/certified-operators-6n8tc" Oct 13 17:26:43 crc kubenswrapper[4720]: I1013 17:26:43.132542 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 17:26:43 crc kubenswrapper[4720]: I1013 17:26:43.147032 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 17:26:43 crc kubenswrapper[4720]: I1013 17:26:43.147057 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 17:26:43 crc kubenswrapper[4720]: I1013 17:26:43.148030 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 17:26:43 crc kubenswrapper[4720]: I1013 17:26:43.151549 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-9tdzd" event={"ID":"e2f98220-0477-4a6d-8cf3-fa055d01bc3a","Type":"ContainerStarted","Data":"3095d176d7d6a8440298d4564b6e2a166c9c091bc53b44f197a8aff1f6bf6f00"} Oct 13 17:26:43 crc kubenswrapper[4720]: I1013 17:26:43.160706 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62dvw\" (UniqueName: \"kubernetes.io/projected/45a4b9f0-8d20-4a69-bd03-2c54f1a66867-kube-api-access-62dvw\") pod 
\"certified-operators-6n8tc\" (UID: \"45a4b9f0-8d20-4a69-bd03-2c54f1a66867\") " pod="openshift-marketplace/certified-operators-6n8tc" Oct 13 17:26:43 crc kubenswrapper[4720]: I1013 17:26:43.191860 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-9tdzd" podStartSLOduration=10.191841078 podStartE2EDuration="10.191841078s" podCreationTimestamp="2025-10-13 17:26:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:26:43.189023387 +0000 UTC m=+148.646273519" watchObservedRunningTime="2025-10-13 17:26:43.191841078 +0000 UTC m=+148.649091210" Oct 13 17:26:43 crc kubenswrapper[4720]: I1013 17:26:43.192229 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 13 17:26:43 crc kubenswrapper[4720]: I1013 17:26:43.214813 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 13 17:26:43 crc kubenswrapper[4720]: I1013 17:26:43.219141 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-85tbj"] Oct 13 17:26:43 crc kubenswrapper[4720]: I1013 17:26:43.222915 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-85tbj" Oct 13 17:26:43 crc kubenswrapper[4720]: I1013 17:26:43.225989 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Oct 13 17:26:43 crc kubenswrapper[4720]: I1013 17:26:43.232024 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 17:26:43 crc kubenswrapper[4720]: E1013 17:26:43.232284 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 17:26:43.73224417 +0000 UTC m=+149.189494302 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 17:26:43 crc kubenswrapper[4720]: I1013 17:26:43.233350 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5xfl\" (UID: \"2c0351ce-4eed-4d0e-b9a4-eb2746bf396a\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5xfl" Oct 13 17:26:43 crc kubenswrapper[4720]: E1013 17:26:43.235681 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-13 17:26:43.735660187 +0000 UTC m=+149.192910319 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5xfl" (UID: "2c0351ce-4eed-4d0e-b9a4-eb2746bf396a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 13 17:26:43 crc kubenswrapper[4720]: I1013 17:26:43.248154 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-85tbj"] Oct 13 17:26:43 crc kubenswrapper[4720]: I1013 17:26:43.276647 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pc8vw" Oct 13 17:26:43 crc kubenswrapper[4720]: I1013 17:26:43.288496 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 17:26:43 crc kubenswrapper[4720]: I1013 17:26:43.305824 4720 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-10-13T17:26:43.083573405Z","Handler":null,"Name":""} Oct 13 17:26:43 crc kubenswrapper[4720]: I1013 17:26:43.321384 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6n8tc"
Oct 13 17:26:43 crc kubenswrapper[4720]: I1013 17:26:43.337814 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 13 17:26:43 crc kubenswrapper[4720]: I1013 17:26:43.338290 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h74rp\" (UniqueName: \"kubernetes.io/projected/4ec46228-40b7-4dd0-b773-59e2b088ef17-kube-api-access-h74rp\") pod \"community-operators-85tbj\" (UID: \"4ec46228-40b7-4dd0-b773-59e2b088ef17\") " pod="openshift-marketplace/community-operators-85tbj"
Oct 13 17:26:43 crc kubenswrapper[4720]: I1013 17:26:43.338325 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ec46228-40b7-4dd0-b773-59e2b088ef17-catalog-content\") pod \"community-operators-85tbj\" (UID: \"4ec46228-40b7-4dd0-b773-59e2b088ef17\") " pod="openshift-marketplace/community-operators-85tbj"
Oct 13 17:26:43 crc kubenswrapper[4720]: I1013 17:26:43.338348 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ec46228-40b7-4dd0-b773-59e2b088ef17-utilities\") pod \"community-operators-85tbj\" (UID: \"4ec46228-40b7-4dd0-b773-59e2b088ef17\") " pod="openshift-marketplace/community-operators-85tbj"
Oct 13 17:26:43 crc kubenswrapper[4720]: E1013 17:26:43.338505 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-13 17:26:43.838480922 +0000 UTC m=+149.295731054 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 13 17:26:43 crc kubenswrapper[4720]: I1013 17:26:43.383328 4720 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Oct 13 17:26:43 crc kubenswrapper[4720]: I1013 17:26:43.383382 4720 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Oct 13 17:26:43 crc kubenswrapper[4720]: I1013 17:26:43.402021 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-pq4c6"]
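Editor's note, not part of the journal: the csi_plugin.go:100/113 records above are the turning point of this excerpt. The kubelet's plugin watcher saw kubevirt.io.hostpath-provisioner-reg.sock appear under /var/lib/kubelet/plugins_registry/ (plugin_watcher.go:194 at 17:26:43.083537), RegisterPlugin ran (reconciler.go:161 at 17:26:43.305824), and the driver name was validated and registered. On the plugin side this handshake is the kubelet plugin-registration gRPC API, normally served by the node-driver-registrar sidecar. A hedged sketch of a minimal registration server answering the kubelet, with the socket path, driver name, endpoint, and version copied from the log and error handling trimmed:

```go
package main

import (
	"context"
	"log"
	"net"
	"os"

	"google.golang.org/grpc"
	registerapi "k8s.io/kubelet/pkg/apis/pluginregistration/v1"
)

type registrationServer struct{}

// GetInfo answers the kubelet's first call on the registration socket;
// the returned name/endpoint/versions are what csi_plugin.go:100 validates.
func (registrationServer) GetInfo(ctx context.Context, req *registerapi.InfoRequest) (*registerapi.PluginInfo, error) {
	return &registerapi.PluginInfo{
		Type:              registerapi.CSIPlugin,
		Name:              "kubevirt.io.hostpath-provisioner",
		Endpoint:          "/var/lib/kubelet/plugins/csi-hostpath/csi.sock",
		SupportedVersions: []string{"1.0.0"},
	}, nil
}

// NotifyRegistrationStatus receives the kubelet's accept/reject verdict.
func (registrationServer) NotifyRegistrationStatus(ctx context.Context, status *registerapi.RegistrationStatus) (*registerapi.RegistrationStatusResponse, error) {
	if !status.PluginRegistered {
		log.Printf("registration rejected: %s", status.Error)
	}
	return &registerapi.RegistrationStatusResponse{}, nil
}

func main() {
	const sock = "/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
	_ = os.Remove(sock) // clear a stale socket from a previous run
	lis, err := net.Listen("unix", sock)
	if err != nil {
		log.Fatal(err)
	}
	srv := grpc.NewServer()
	registerapi.RegisterRegistrationServer(srv, registrationServer{})
	log.Fatal(srv.Serve(lis)) // the kubelet's plugin watcher connects here
}
```

Once GetInfo returns and validation passes, the driver name finally resolves in the kubelet's registry, which is why the long-failing MountDevice for pvc-657094db-... succeeds a few records below.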
Oct 13 17:26:43 crc kubenswrapper[4720]: I1013 17:26:43.403880 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pq4c6"
Oct 13 17:26:43 crc kubenswrapper[4720]: I1013 17:26:43.406945 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pq4c6"]
Oct 13 17:26:43 crc kubenswrapper[4720]: I1013 17:26:43.440624 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwsqz\" (UniqueName: \"kubernetes.io/projected/9a30ffbf-3240-409c-8c06-d9e12a160aab-kube-api-access-hwsqz\") pod \"certified-operators-pq4c6\" (UID: \"9a30ffbf-3240-409c-8c06-d9e12a160aab\") " pod="openshift-marketplace/certified-operators-pq4c6"
Oct 13 17:26:43 crc kubenswrapper[4720]: I1013 17:26:43.440665 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h74rp\" (UniqueName: \"kubernetes.io/projected/4ec46228-40b7-4dd0-b773-59e2b088ef17-kube-api-access-h74rp\") pod \"community-operators-85tbj\" (UID: \"4ec46228-40b7-4dd0-b773-59e2b088ef17\") " pod="openshift-marketplace/community-operators-85tbj"
Oct 13 17:26:43 crc kubenswrapper[4720]: I1013 17:26:43.440683 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ec46228-40b7-4dd0-b773-59e2b088ef17-catalog-content\") pod \"community-operators-85tbj\" (UID: \"4ec46228-40b7-4dd0-b773-59e2b088ef17\") " pod="openshift-marketplace/community-operators-85tbj"
Oct 13 17:26:43 crc kubenswrapper[4720]: I1013 17:26:43.440705 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ec46228-40b7-4dd0-b773-59e2b088ef17-utilities\") pod \"community-operators-85tbj\" (UID: \"4ec46228-40b7-4dd0-b773-59e2b088ef17\") " pod="openshift-marketplace/community-operators-85tbj"
Oct 13 17:26:43 crc kubenswrapper[4720]: I1013 17:26:43.440758 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a30ffbf-3240-409c-8c06-d9e12a160aab-catalog-content\") pod \"certified-operators-pq4c6\" (UID: \"9a30ffbf-3240-409c-8c06-d9e12a160aab\") " pod="openshift-marketplace/certified-operators-pq4c6"
Oct 13 17:26:43 crc kubenswrapper[4720]: I1013 17:26:43.440788 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a30ffbf-3240-409c-8c06-d9e12a160aab-utilities\") pod \"certified-operators-pq4c6\" (UID: \"9a30ffbf-3240-409c-8c06-d9e12a160aab\") " pod="openshift-marketplace/certified-operators-pq4c6"
Oct 13 17:26:43 crc kubenswrapper[4720]: I1013 17:26:43.440814 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5xfl\" (UID: \"2c0351ce-4eed-4d0e-b9a4-eb2746bf396a\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5xfl"
Oct 13 17:26:43 crc kubenswrapper[4720]: I1013 17:26:43.442039 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ec46228-40b7-4dd0-b773-59e2b088ef17-utilities\") pod \"community-operators-85tbj\" (UID: \"4ec46228-40b7-4dd0-b773-59e2b088ef17\") " pod="openshift-marketplace/community-operators-85tbj"
kubenswrapper[4720]: I1013 17:26:43.442308 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ec46228-40b7-4dd0-b773-59e2b088ef17-catalog-content\") pod \"community-operators-85tbj\" (UID: \"4ec46228-40b7-4dd0-b773-59e2b088ef17\") " pod="openshift-marketplace/community-operators-85tbj" Oct 13 17:26:43 crc kubenswrapper[4720]: I1013 17:26:43.452881 4720 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Oct 13 17:26:43 crc kubenswrapper[4720]: I1013 17:26:43.452917 4720 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5xfl\" (UID: \"2c0351ce-4eed-4d0e-b9a4-eb2746bf396a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-w5xfl" Oct 13 17:26:43 crc kubenswrapper[4720]: I1013 17:26:43.465554 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h74rp\" (UniqueName: \"kubernetes.io/projected/4ec46228-40b7-4dd0-b773-59e2b088ef17-kube-api-access-h74rp\") pod \"community-operators-85tbj\" (UID: \"4ec46228-40b7-4dd0-b773-59e2b088ef17\") " pod="openshift-marketplace/community-operators-85tbj" Oct 13 17:26:43 crc kubenswrapper[4720]: I1013 17:26:43.497685 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5xfl\" (UID: \"2c0351ce-4eed-4d0e-b9a4-eb2746bf396a\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5xfl" Oct 13 17:26:43 crc kubenswrapper[4720]: I1013 17:26:43.544111 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 13 17:26:43 crc kubenswrapper[4720]: I1013 17:26:43.544533 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a30ffbf-3240-409c-8c06-d9e12a160aab-catalog-content\") pod \"certified-operators-pq4c6\" (UID: \"9a30ffbf-3240-409c-8c06-d9e12a160aab\") " pod="openshift-marketplace/certified-operators-pq4c6" Oct 13 17:26:43 crc kubenswrapper[4720]: I1013 17:26:43.544597 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a30ffbf-3240-409c-8c06-d9e12a160aab-utilities\") pod \"certified-operators-pq4c6\" (UID: \"9a30ffbf-3240-409c-8c06-d9e12a160aab\") " pod="openshift-marketplace/certified-operators-pq4c6" Oct 13 17:26:43 crc kubenswrapper[4720]: I1013 17:26:43.544626 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwsqz\" (UniqueName: \"kubernetes.io/projected/9a30ffbf-3240-409c-8c06-d9e12a160aab-kube-api-access-hwsqz\") pod \"certified-operators-pq4c6\" (UID: 
\"9a30ffbf-3240-409c-8c06-d9e12a160aab\") " pod="openshift-marketplace/certified-operators-pq4c6" Oct 13 17:26:43 crc kubenswrapper[4720]: I1013 17:26:43.545643 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a30ffbf-3240-409c-8c06-d9e12a160aab-utilities\") pod \"certified-operators-pq4c6\" (UID: \"9a30ffbf-3240-409c-8c06-d9e12a160aab\") " pod="openshift-marketplace/certified-operators-pq4c6" Oct 13 17:26:43 crc kubenswrapper[4720]: I1013 17:26:43.545795 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a30ffbf-3240-409c-8c06-d9e12a160aab-catalog-content\") pod \"certified-operators-pq4c6\" (UID: \"9a30ffbf-3240-409c-8c06-d9e12a160aab\") " pod="openshift-marketplace/certified-operators-pq4c6" Oct 13 17:26:43 crc kubenswrapper[4720]: I1013 17:26:43.555869 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Oct 13 17:26:43 crc kubenswrapper[4720]: I1013 17:26:43.578393 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-85tbj" Oct 13 17:26:43 crc kubenswrapper[4720]: I1013 17:26:43.589985 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-crcm9"] Oct 13 17:26:43 crc kubenswrapper[4720]: I1013 17:26:43.590165 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwsqz\" (UniqueName: \"kubernetes.io/projected/9a30ffbf-3240-409c-8c06-d9e12a160aab-kube-api-access-hwsqz\") pod \"certified-operators-pq4c6\" (UID: \"9a30ffbf-3240-409c-8c06-d9e12a160aab\") " pod="openshift-marketplace/certified-operators-pq4c6" Oct 13 17:26:43 crc kubenswrapper[4720]: I1013 17:26:43.591109 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-crcm9" Oct 13 17:26:43 crc kubenswrapper[4720]: I1013 17:26:43.651390 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24757001-c6f3-4ecc-9b71-8fa697a28390-utilities\") pod \"community-operators-crcm9\" (UID: \"24757001-c6f3-4ecc-9b71-8fa697a28390\") " pod="openshift-marketplace/community-operators-crcm9" Oct 13 17:26:43 crc kubenswrapper[4720]: I1013 17:26:43.651475 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24757001-c6f3-4ecc-9b71-8fa697a28390-catalog-content\") pod \"community-operators-crcm9\" (UID: \"24757001-c6f3-4ecc-9b71-8fa697a28390\") " pod="openshift-marketplace/community-operators-crcm9" Oct 13 17:26:43 crc kubenswrapper[4720]: I1013 17:26:43.651529 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b299d\" (UniqueName: \"kubernetes.io/projected/24757001-c6f3-4ecc-9b71-8fa697a28390-kube-api-access-b299d\") pod \"community-operators-crcm9\" (UID: \"24757001-c6f3-4ecc-9b71-8fa697a28390\") " pod="openshift-marketplace/community-operators-crcm9" Oct 13 17:26:43 crc kubenswrapper[4720]: I1013 17:26:43.667494 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-w5xfl" Oct 13 17:26:43 crc kubenswrapper[4720]: I1013 17:26:43.683213 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-crcm9"] Oct 13 17:26:43 crc kubenswrapper[4720]: I1013 17:26:43.753819 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24757001-c6f3-4ecc-9b71-8fa697a28390-catalog-content\") pod \"community-operators-crcm9\" (UID: \"24757001-c6f3-4ecc-9b71-8fa697a28390\") " pod="openshift-marketplace/community-operators-crcm9" Oct 13 17:26:43 crc kubenswrapper[4720]: I1013 17:26:43.753875 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b299d\" (UniqueName: \"kubernetes.io/projected/24757001-c6f3-4ecc-9b71-8fa697a28390-kube-api-access-b299d\") pod \"community-operators-crcm9\" (UID: \"24757001-c6f3-4ecc-9b71-8fa697a28390\") " pod="openshift-marketplace/community-operators-crcm9" Oct 13 17:26:43 crc kubenswrapper[4720]: I1013 17:26:43.753943 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24757001-c6f3-4ecc-9b71-8fa697a28390-utilities\") pod \"community-operators-crcm9\" (UID: \"24757001-c6f3-4ecc-9b71-8fa697a28390\") " pod="openshift-marketplace/community-operators-crcm9" Oct 13 17:26:43 crc kubenswrapper[4720]: I1013 17:26:43.754440 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24757001-c6f3-4ecc-9b71-8fa697a28390-utilities\") pod \"community-operators-crcm9\" (UID: \"24757001-c6f3-4ecc-9b71-8fa697a28390\") " pod="openshift-marketplace/community-operators-crcm9" Oct 13 17:26:43 crc kubenswrapper[4720]: I1013 17:26:43.754646 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24757001-c6f3-4ecc-9b71-8fa697a28390-catalog-content\") pod \"community-operators-crcm9\" 
(UID: \"24757001-c6f3-4ecc-9b71-8fa697a28390\") " pod="openshift-marketplace/community-operators-crcm9" Oct 13 17:26:43 crc kubenswrapper[4720]: I1013 17:26:43.779671 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pq4c6" Oct 13 17:26:43 crc kubenswrapper[4720]: I1013 17:26:43.802903 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b299d\" (UniqueName: \"kubernetes.io/projected/24757001-c6f3-4ecc-9b71-8fa697a28390-kube-api-access-b299d\") pod \"community-operators-crcm9\" (UID: \"24757001-c6f3-4ecc-9b71-8fa697a28390\") " pod="openshift-marketplace/community-operators-crcm9" Oct 13 17:26:43 crc kubenswrapper[4720]: I1013 17:26:43.928756 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6n8tc"] Oct 13 17:26:43 crc kubenswrapper[4720]: I1013 17:26:43.931363 4720 patch_prober.go:28] interesting pod/router-default-5444994796-vttl9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 13 17:26:43 crc kubenswrapper[4720]: [-]has-synced failed: reason withheld Oct 13 17:26:43 crc kubenswrapper[4720]: [+]process-running ok Oct 13 17:26:43 crc kubenswrapper[4720]: healthz check failed Oct 13 17:26:43 crc kubenswrapper[4720]: I1013 17:26:43.931428 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vttl9" podUID="a13b5004-f884-4558-b38b-b3c8028a73d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 13 17:26:43 crc kubenswrapper[4720]: I1013 17:26:43.939813 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-crcm9" Oct 13 17:26:43 crc kubenswrapper[4720]: W1013 17:26:43.948054 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-9d147b8e4ea55f6e70e0ce3ccd5b16c1816f61d0b450c44134527fb4c6d4a6fc WatchSource:0}: Error finding container 9d147b8e4ea55f6e70e0ce3ccd5b16c1816f61d0b450c44134527fb4c6d4a6fc: Status 404 returned error can't find the container with id 9d147b8e4ea55f6e70e0ce3ccd5b16c1816f61d0b450c44134527fb4c6d4a6fc Oct 13 17:26:44 crc kubenswrapper[4720]: I1013 17:26:44.089121 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-85tbj"] Oct 13 17:26:44 crc kubenswrapper[4720]: W1013 17:26:44.103119 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ec46228_40b7_4dd0_b773_59e2b088ef17.slice/crio-e2d34bce12acaa7aa7f392567b324b2e2eb04b0c482579607b97a61e7704a85e WatchSource:0}: Error finding container e2d34bce12acaa7aa7f392567b324b2e2eb04b0c482579607b97a61e7704a85e: Status 404 returned error can't find the container with id e2d34bce12acaa7aa7f392567b324b2e2eb04b0c482579607b97a61e7704a85e Oct 13 17:26:44 crc kubenswrapper[4720]: I1013 17:26:44.164815 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pq4c6"] Oct 13 17:26:44 crc kubenswrapper[4720]: I1013 17:26:44.211111 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-w5xfl"] Oct 13 17:26:44 crc kubenswrapper[4720]: I1013 17:26:44.219004 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6n8tc" event={"ID":"45a4b9f0-8d20-4a69-bd03-2c54f1a66867","Type":"ContainerStarted","Data":"5cac65b383ce4c5880558dd1f8f5cd81497aa8dc9624a3866e93157d58c3ce72"} Oct 13 17:26:44 crc kubenswrapper[4720]: I1013 17:26:44.236829 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"b069059a09dfffd6b317893c8bef5f1ab04234590e1530c3c4321f1831eee515"} Oct 13 17:26:44 crc kubenswrapper[4720]: I1013 17:26:44.259238 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"c95afb96d1e01c9121cd18419ebc0716440de6263ff89510d312ade13a161431"} Oct 13 17:26:44 crc kubenswrapper[4720]: I1013 17:26:44.260273 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"4007c98f19fe29025d9642b8d4435597b0f238e2fbba29ca4b3ecb29d39d3b08"} Oct 13 17:26:44 crc kubenswrapper[4720]: I1013 17:26:44.265122 4720 generic.go:334] "Generic (PLEG): container finished" podID="0a6d69da-3074-4b30-898b-4bb2eea1fb75" containerID="f3e5ed6dd43e4d22a6c59b3c81a2671caa27241e6072846ed5c0838f28f48a31" exitCode=0 Oct 13 17:26:44 crc kubenswrapper[4720]: I1013 17:26:44.265480 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29339595-f78vr" 
event={"ID":"0a6d69da-3074-4b30-898b-4bb2eea1fb75","Type":"ContainerDied","Data":"f3e5ed6dd43e4d22a6c59b3c81a2671caa27241e6072846ed5c0838f28f48a31"} Oct 13 17:26:44 crc kubenswrapper[4720]: I1013 17:26:44.277746 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"9d147b8e4ea55f6e70e0ce3ccd5b16c1816f61d0b450c44134527fb4c6d4a6fc"} Oct 13 17:26:44 crc kubenswrapper[4720]: I1013 17:26:44.280419 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-85tbj" event={"ID":"4ec46228-40b7-4dd0-b773-59e2b088ef17","Type":"ContainerStarted","Data":"e2d34bce12acaa7aa7f392567b324b2e2eb04b0c482579607b97a61e7704a85e"} Oct 13 17:26:44 crc kubenswrapper[4720]: I1013 17:26:44.299507 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-crcm9"] Oct 13 17:26:44 crc kubenswrapper[4720]: I1013 17:26:44.916843 4720 patch_prober.go:28] interesting pod/router-default-5444994796-vttl9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 13 17:26:44 crc kubenswrapper[4720]: [-]has-synced failed: reason withheld Oct 13 17:26:44 crc kubenswrapper[4720]: [+]process-running ok Oct 13 17:26:44 crc kubenswrapper[4720]: healthz check failed Oct 13 17:26:44 crc kubenswrapper[4720]: I1013 17:26:44.917570 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vttl9" podUID="a13b5004-f884-4558-b38b-b3c8028a73d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 13 17:26:45 crc kubenswrapper[4720]: I1013 17:26:45.186801 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Oct 13 17:26:45 crc kubenswrapper[4720]: I1013 17:26:45.194663 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-dp5pr"] Oct 13 17:26:45 crc kubenswrapper[4720]: I1013 17:26:45.196875 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dp5pr" Oct 13 17:26:45 crc kubenswrapper[4720]: I1013 17:26:45.198630 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Oct 13 17:26:45 crc kubenswrapper[4720]: I1013 17:26:45.202124 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dp5pr"] Oct 13 17:26:45 crc kubenswrapper[4720]: I1013 17:26:45.213072 4720 patch_prober.go:28] interesting pod/machine-config-daemon-htwnl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 17:26:45 crc kubenswrapper[4720]: I1013 17:26:45.213123 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 17:26:45 crc kubenswrapper[4720]: I1013 17:26:45.276073 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfv6h\" (UniqueName: \"kubernetes.io/projected/cba388bf-d5f4-4a4e-8add-d8e4d7489f14-kube-api-access-hfv6h\") pod \"redhat-marketplace-dp5pr\" (UID: \"cba388bf-d5f4-4a4e-8add-d8e4d7489f14\") " pod="openshift-marketplace/redhat-marketplace-dp5pr" Oct 13 17:26:45 crc kubenswrapper[4720]: I1013 17:26:45.276210 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cba388bf-d5f4-4a4e-8add-d8e4d7489f14-utilities\") pod \"redhat-marketplace-dp5pr\" (UID: \"cba388bf-d5f4-4a4e-8add-d8e4d7489f14\") " pod="openshift-marketplace/redhat-marketplace-dp5pr" Oct 13 17:26:45 crc kubenswrapper[4720]: I1013 17:26:45.276256 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cba388bf-d5f4-4a4e-8add-d8e4d7489f14-catalog-content\") pod \"redhat-marketplace-dp5pr\" (UID: \"cba388bf-d5f4-4a4e-8add-d8e4d7489f14\") " pod="openshift-marketplace/redhat-marketplace-dp5pr" Oct 13 17:26:45 crc kubenswrapper[4720]: I1013 17:26:45.300663 4720 generic.go:334] "Generic (PLEG): container finished" podID="45a4b9f0-8d20-4a69-bd03-2c54f1a66867" containerID="392b3b63152f91c71d0d17a5bc672ed9d54fad598b7732a68363c0d87ceb00c0" exitCode=0 Oct 13 17:26:45 crc kubenswrapper[4720]: I1013 17:26:45.300763 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6n8tc" event={"ID":"45a4b9f0-8d20-4a69-bd03-2c54f1a66867","Type":"ContainerDied","Data":"392b3b63152f91c71d0d17a5bc672ed9d54fad598b7732a68363c0d87ceb00c0"} Oct 13 17:26:45 crc kubenswrapper[4720]: I1013 17:26:45.305313 4720 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 13 17:26:45 crc kubenswrapper[4720]: I1013 17:26:45.308575 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"07f3f8a556a14118e2791560b6ed3990ea64109dca79608694982dc7455112aa"} Oct 13 17:26:45 crc kubenswrapper[4720]: I1013 
17:26:45.309214 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 13 17:26:45 crc kubenswrapper[4720]: I1013 17:26:45.323927 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"b0e24bb28363268fb48d2dfd9a81cf669c8fcd21b45f415fdd890066ce54133a"} Oct 13 17:26:45 crc kubenswrapper[4720]: I1013 17:26:45.328763 4720 generic.go:334] "Generic (PLEG): container finished" podID="24757001-c6f3-4ecc-9b71-8fa697a28390" containerID="790f14a9ca3f07f0d025a75e72f0d6ae03ffbd4e2f10dce66c68bc3edca0cb2f" exitCode=0 Oct 13 17:26:45 crc kubenswrapper[4720]: I1013 17:26:45.328834 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-crcm9" event={"ID":"24757001-c6f3-4ecc-9b71-8fa697a28390","Type":"ContainerDied","Data":"790f14a9ca3f07f0d025a75e72f0d6ae03ffbd4e2f10dce66c68bc3edca0cb2f"} Oct 13 17:26:45 crc kubenswrapper[4720]: I1013 17:26:45.328861 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-crcm9" event={"ID":"24757001-c6f3-4ecc-9b71-8fa697a28390","Type":"ContainerStarted","Data":"ac49c6ee19160ea3d2bfe32911faa92c1362dc920a34793be2e56ebb8238f368"} Oct 13 17:26:45 crc kubenswrapper[4720]: I1013 17:26:45.332353 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-w5xfl" event={"ID":"2c0351ce-4eed-4d0e-b9a4-eb2746bf396a","Type":"ContainerStarted","Data":"4f98436f4e49cbf14734d9595bc80f9b873a4c59276150e787c4ddaebeeb3b41"} Oct 13 17:26:45 crc kubenswrapper[4720]: I1013 17:26:45.332410 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-w5xfl" event={"ID":"2c0351ce-4eed-4d0e-b9a4-eb2746bf396a","Type":"ContainerStarted","Data":"ea16400664aefae5b0f63ea77546e2bdd2ac92f3fafee8b8691ffcbdee777424"} Oct 13 17:26:45 crc kubenswrapper[4720]: I1013 17:26:45.332752 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-w5xfl" Oct 13 17:26:45 crc kubenswrapper[4720]: I1013 17:26:45.334609 4720 generic.go:334] "Generic (PLEG): container finished" podID="4ec46228-40b7-4dd0-b773-59e2b088ef17" containerID="3f7a4801a12c4bcad073fec39c0901777fad1bdf0a8c72ee565664766dacf083" exitCode=0 Oct 13 17:26:45 crc kubenswrapper[4720]: I1013 17:26:45.334659 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-85tbj" event={"ID":"4ec46228-40b7-4dd0-b773-59e2b088ef17","Type":"ContainerDied","Data":"3f7a4801a12c4bcad073fec39c0901777fad1bdf0a8c72ee565664766dacf083"} Oct 13 17:26:45 crc kubenswrapper[4720]: I1013 17:26:45.336334 4720 generic.go:334] "Generic (PLEG): container finished" podID="9a30ffbf-3240-409c-8c06-d9e12a160aab" containerID="a51c7e462fe1dd873716ac507106e24d31897359723edf335099b4ad91d27e96" exitCode=0 Oct 13 17:26:45 crc kubenswrapper[4720]: I1013 17:26:45.336433 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pq4c6" event={"ID":"9a30ffbf-3240-409c-8c06-d9e12a160aab","Type":"ContainerDied","Data":"a51c7e462fe1dd873716ac507106e24d31897359723edf335099b4ad91d27e96"} Oct 13 17:26:45 crc kubenswrapper[4720]: I1013 17:26:45.336473 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-pq4c6" event={"ID":"9a30ffbf-3240-409c-8c06-d9e12a160aab","Type":"ContainerStarted","Data":"45090cf5200de1a3df5bfe4405180953dfba66f15e2ea4a8f94d56fd8148e62c"} Oct 13 17:26:45 crc kubenswrapper[4720]: I1013 17:26:45.377671 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfv6h\" (UniqueName: \"kubernetes.io/projected/cba388bf-d5f4-4a4e-8add-d8e4d7489f14-kube-api-access-hfv6h\") pod \"redhat-marketplace-dp5pr\" (UID: \"cba388bf-d5f4-4a4e-8add-d8e4d7489f14\") " pod="openshift-marketplace/redhat-marketplace-dp5pr" Oct 13 17:26:45 crc kubenswrapper[4720]: I1013 17:26:45.377821 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cba388bf-d5f4-4a4e-8add-d8e4d7489f14-utilities\") pod \"redhat-marketplace-dp5pr\" (UID: \"cba388bf-d5f4-4a4e-8add-d8e4d7489f14\") " pod="openshift-marketplace/redhat-marketplace-dp5pr" Oct 13 17:26:45 crc kubenswrapper[4720]: I1013 17:26:45.377891 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cba388bf-d5f4-4a4e-8add-d8e4d7489f14-catalog-content\") pod \"redhat-marketplace-dp5pr\" (UID: \"cba388bf-d5f4-4a4e-8add-d8e4d7489f14\") " pod="openshift-marketplace/redhat-marketplace-dp5pr" Oct 13 17:26:45 crc kubenswrapper[4720]: I1013 17:26:45.380249 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cba388bf-d5f4-4a4e-8add-d8e4d7489f14-catalog-content\") pod \"redhat-marketplace-dp5pr\" (UID: \"cba388bf-d5f4-4a4e-8add-d8e4d7489f14\") " pod="openshift-marketplace/redhat-marketplace-dp5pr" Oct 13 17:26:45 crc kubenswrapper[4720]: I1013 17:26:45.380335 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cba388bf-d5f4-4a4e-8add-d8e4d7489f14-utilities\") pod \"redhat-marketplace-dp5pr\" (UID: \"cba388bf-d5f4-4a4e-8add-d8e4d7489f14\") " pod="openshift-marketplace/redhat-marketplace-dp5pr" Oct 13 17:26:45 crc kubenswrapper[4720]: I1013 17:26:45.409465 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfv6h\" (UniqueName: \"kubernetes.io/projected/cba388bf-d5f4-4a4e-8add-d8e4d7489f14-kube-api-access-hfv6h\") pod \"redhat-marketplace-dp5pr\" (UID: \"cba388bf-d5f4-4a4e-8add-d8e4d7489f14\") " pod="openshift-marketplace/redhat-marketplace-dp5pr" Oct 13 17:26:45 crc kubenswrapper[4720]: I1013 17:26:45.455084 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-w5xfl" podStartSLOduration=126.455066741 podStartE2EDuration="2m6.455066741s" podCreationTimestamp="2025-10-13 17:24:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:26:45.454327102 +0000 UTC m=+150.911577234" watchObservedRunningTime="2025-10-13 17:26:45.455066741 +0000 UTC m=+150.912316873" Oct 13 17:26:45 crc kubenswrapper[4720]: I1013 17:26:45.533874 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dp5pr" Oct 13 17:26:45 crc kubenswrapper[4720]: I1013 17:26:45.583963 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-m4v6f"] Oct 13 17:26:45 crc kubenswrapper[4720]: I1013 17:26:45.585247 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m4v6f" Oct 13 17:26:45 crc kubenswrapper[4720]: I1013 17:26:45.596904 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m4v6f"] Oct 13 17:26:45 crc kubenswrapper[4720]: I1013 17:26:45.598453 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29339595-f78vr" Oct 13 17:26:45 crc kubenswrapper[4720]: I1013 17:26:45.685089 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0a6d69da-3074-4b30-898b-4bb2eea1fb75-config-volume\") pod \"0a6d69da-3074-4b30-898b-4bb2eea1fb75\" (UID: \"0a6d69da-3074-4b30-898b-4bb2eea1fb75\") " Oct 13 17:26:45 crc kubenswrapper[4720]: I1013 17:26:45.685166 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5s65\" (UniqueName: \"kubernetes.io/projected/0a6d69da-3074-4b30-898b-4bb2eea1fb75-kube-api-access-d5s65\") pod \"0a6d69da-3074-4b30-898b-4bb2eea1fb75\" (UID: \"0a6d69da-3074-4b30-898b-4bb2eea1fb75\") " Oct 13 17:26:45 crc kubenswrapper[4720]: I1013 17:26:45.685227 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0a6d69da-3074-4b30-898b-4bb2eea1fb75-secret-volume\") pod \"0a6d69da-3074-4b30-898b-4bb2eea1fb75\" (UID: \"0a6d69da-3074-4b30-898b-4bb2eea1fb75\") " Oct 13 17:26:45 crc kubenswrapper[4720]: I1013 17:26:45.685412 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/949931eb-c57d-40eb-a3da-f19524cbaf1d-catalog-content\") pod \"redhat-marketplace-m4v6f\" (UID: \"949931eb-c57d-40eb-a3da-f19524cbaf1d\") " pod="openshift-marketplace/redhat-marketplace-m4v6f" Oct 13 17:26:45 crc kubenswrapper[4720]: I1013 17:26:45.685477 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qmjw\" (UniqueName: \"kubernetes.io/projected/949931eb-c57d-40eb-a3da-f19524cbaf1d-kube-api-access-7qmjw\") pod \"redhat-marketplace-m4v6f\" (UID: \"949931eb-c57d-40eb-a3da-f19524cbaf1d\") " pod="openshift-marketplace/redhat-marketplace-m4v6f" Oct 13 17:26:45 crc kubenswrapper[4720]: I1013 17:26:45.685587 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/949931eb-c57d-40eb-a3da-f19524cbaf1d-utilities\") pod \"redhat-marketplace-m4v6f\" (UID: \"949931eb-c57d-40eb-a3da-f19524cbaf1d\") " pod="openshift-marketplace/redhat-marketplace-m4v6f" Oct 13 17:26:45 crc kubenswrapper[4720]: I1013 17:26:45.686040 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a6d69da-3074-4b30-898b-4bb2eea1fb75-config-volume" (OuterVolumeSpecName: "config-volume") pod "0a6d69da-3074-4b30-898b-4bb2eea1fb75" (UID: "0a6d69da-3074-4b30-898b-4bb2eea1fb75"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:26:45 crc kubenswrapper[4720]: I1013 17:26:45.692818 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a6d69da-3074-4b30-898b-4bb2eea1fb75-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0a6d69da-3074-4b30-898b-4bb2eea1fb75" (UID: "0a6d69da-3074-4b30-898b-4bb2eea1fb75"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:26:45 crc kubenswrapper[4720]: I1013 17:26:45.693042 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a6d69da-3074-4b30-898b-4bb2eea1fb75-kube-api-access-d5s65" (OuterVolumeSpecName: "kube-api-access-d5s65") pod "0a6d69da-3074-4b30-898b-4bb2eea1fb75" (UID: "0a6d69da-3074-4b30-898b-4bb2eea1fb75"). InnerVolumeSpecName "kube-api-access-d5s65". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:26:45 crc kubenswrapper[4720]: I1013 17:26:45.731536 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dp5pr"] Oct 13 17:26:45 crc kubenswrapper[4720]: W1013 17:26:45.740042 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcba388bf_d5f4_4a4e_8add_d8e4d7489f14.slice/crio-58338dcc5638e39a983be7b708d59d0394e88673f6992b54f3ef4a9254985c97 WatchSource:0}: Error finding container 58338dcc5638e39a983be7b708d59d0394e88673f6992b54f3ef4a9254985c97: Status 404 returned error can't find the container with id 58338dcc5638e39a983be7b708d59d0394e88673f6992b54f3ef4a9254985c97 Oct 13 17:26:45 crc kubenswrapper[4720]: I1013 17:26:45.786847 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qmjw\" (UniqueName: \"kubernetes.io/projected/949931eb-c57d-40eb-a3da-f19524cbaf1d-kube-api-access-7qmjw\") pod \"redhat-marketplace-m4v6f\" (UID: \"949931eb-c57d-40eb-a3da-f19524cbaf1d\") " pod="openshift-marketplace/redhat-marketplace-m4v6f" Oct 13 17:26:45 crc kubenswrapper[4720]: I1013 17:26:45.786964 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/949931eb-c57d-40eb-a3da-f19524cbaf1d-utilities\") pod \"redhat-marketplace-m4v6f\" (UID: \"949931eb-c57d-40eb-a3da-f19524cbaf1d\") " pod="openshift-marketplace/redhat-marketplace-m4v6f" Oct 13 17:26:45 crc kubenswrapper[4720]: I1013 17:26:45.786991 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/949931eb-c57d-40eb-a3da-f19524cbaf1d-catalog-content\") pod \"redhat-marketplace-m4v6f\" (UID: \"949931eb-c57d-40eb-a3da-f19524cbaf1d\") " pod="openshift-marketplace/redhat-marketplace-m4v6f" Oct 13 17:26:45 crc kubenswrapper[4720]: I1013 17:26:45.787023 4720 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0a6d69da-3074-4b30-898b-4bb2eea1fb75-config-volume\") on node \"crc\" DevicePath \"\"" Oct 13 17:26:45 crc kubenswrapper[4720]: I1013 17:26:45.787036 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5s65\" (UniqueName: \"kubernetes.io/projected/0a6d69da-3074-4b30-898b-4bb2eea1fb75-kube-api-access-d5s65\") on node \"crc\" DevicePath \"\"" Oct 13 17:26:45 crc kubenswrapper[4720]: I1013 17:26:45.787053 4720 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/0a6d69da-3074-4b30-898b-4bb2eea1fb75-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 13 17:26:45 crc kubenswrapper[4720]: I1013 17:26:45.787461 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/949931eb-c57d-40eb-a3da-f19524cbaf1d-catalog-content\") pod \"redhat-marketplace-m4v6f\" (UID: \"949931eb-c57d-40eb-a3da-f19524cbaf1d\") " pod="openshift-marketplace/redhat-marketplace-m4v6f" Oct 13 17:26:45 crc kubenswrapper[4720]: I1013 17:26:45.787759 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/949931eb-c57d-40eb-a3da-f19524cbaf1d-utilities\") pod \"redhat-marketplace-m4v6f\" (UID: \"949931eb-c57d-40eb-a3da-f19524cbaf1d\") " pod="openshift-marketplace/redhat-marketplace-m4v6f" Oct 13 17:26:45 crc kubenswrapper[4720]: I1013 17:26:45.802826 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qmjw\" (UniqueName: \"kubernetes.io/projected/949931eb-c57d-40eb-a3da-f19524cbaf1d-kube-api-access-7qmjw\") pod \"redhat-marketplace-m4v6f\" (UID: \"949931eb-c57d-40eb-a3da-f19524cbaf1d\") " pod="openshift-marketplace/redhat-marketplace-m4v6f" Oct 13 17:26:45 crc kubenswrapper[4720]: I1013 17:26:45.915534 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m4v6f" Oct 13 17:26:45 crc kubenswrapper[4720]: I1013 17:26:45.917408 4720 patch_prober.go:28] interesting pod/router-default-5444994796-vttl9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 13 17:26:45 crc kubenswrapper[4720]: [-]has-synced failed: reason withheld Oct 13 17:26:45 crc kubenswrapper[4720]: [+]process-running ok Oct 13 17:26:45 crc kubenswrapper[4720]: healthz check failed Oct 13 17:26:45 crc kubenswrapper[4720]: I1013 17:26:45.917477 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vttl9" podUID="a13b5004-f884-4558-b38b-b3c8028a73d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 13 17:26:45 crc kubenswrapper[4720]: I1013 17:26:45.941563 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-mm8n9" Oct 13 17:26:45 crc kubenswrapper[4720]: I1013 17:26:45.948540 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-mm8n9" Oct 13 17:26:46 crc kubenswrapper[4720]: I1013 17:26:46.023237 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Oct 13 17:26:46 crc kubenswrapper[4720]: E1013 17:26:46.023561 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a6d69da-3074-4b30-898b-4bb2eea1fb75" containerName="collect-profiles" Oct 13 17:26:46 crc kubenswrapper[4720]: I1013 17:26:46.023583 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a6d69da-3074-4b30-898b-4bb2eea1fb75" containerName="collect-profiles" Oct 13 17:26:46 crc kubenswrapper[4720]: I1013 17:26:46.023729 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a6d69da-3074-4b30-898b-4bb2eea1fb75" containerName="collect-profiles" Oct 13 17:26:46 crc kubenswrapper[4720]: I1013 17:26:46.024334 4720 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 13 17:26:46 crc kubenswrapper[4720]: I1013 17:26:46.034314 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Oct 13 17:26:46 crc kubenswrapper[4720]: I1013 17:26:46.034579 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Oct 13 17:26:46 crc kubenswrapper[4720]: I1013 17:26:46.055030 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Oct 13 17:26:46 crc kubenswrapper[4720]: I1013 17:26:46.090860 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3508e8e0-82f4-4f2e-92d7-32cca02f3656-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"3508e8e0-82f4-4f2e-92d7-32cca02f3656\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 13 17:26:46 crc kubenswrapper[4720]: I1013 17:26:46.090951 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3508e8e0-82f4-4f2e-92d7-32cca02f3656-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"3508e8e0-82f4-4f2e-92d7-32cca02f3656\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 13 17:26:46 crc kubenswrapper[4720]: I1013 17:26:46.181714 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-skh6q"] Oct 13 17:26:46 crc kubenswrapper[4720]: I1013 17:26:46.183664 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-skh6q" Oct 13 17:26:46 crc kubenswrapper[4720]: I1013 17:26:46.189695 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m4v6f"] Oct 13 17:26:46 crc kubenswrapper[4720]: I1013 17:26:46.190354 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Oct 13 17:26:46 crc kubenswrapper[4720]: I1013 17:26:46.191914 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3508e8e0-82f4-4f2e-92d7-32cca02f3656-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"3508e8e0-82f4-4f2e-92d7-32cca02f3656\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 13 17:26:46 crc kubenswrapper[4720]: I1013 17:26:46.192225 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cca499e4-939e-4ba7-b98b-2965e14da5c3-catalog-content\") pod \"redhat-operators-skh6q\" (UID: \"cca499e4-939e-4ba7-b98b-2965e14da5c3\") " pod="openshift-marketplace/redhat-operators-skh6q" Oct 13 17:26:46 crc kubenswrapper[4720]: I1013 17:26:46.192247 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cca499e4-939e-4ba7-b98b-2965e14da5c3-utilities\") pod \"redhat-operators-skh6q\" (UID: \"cca499e4-939e-4ba7-b98b-2965e14da5c3\") " pod="openshift-marketplace/redhat-operators-skh6q" Oct 13 17:26:46 crc kubenswrapper[4720]: I1013 17:26:46.192280 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3508e8e0-82f4-4f2e-92d7-32cca02f3656-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"3508e8e0-82f4-4f2e-92d7-32cca02f3656\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 13 17:26:46 crc kubenswrapper[4720]: I1013 17:26:46.192316 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgrwm\" (UniqueName: \"kubernetes.io/projected/cca499e4-939e-4ba7-b98b-2965e14da5c3-kube-api-access-rgrwm\") pod \"redhat-operators-skh6q\" (UID: \"cca499e4-939e-4ba7-b98b-2965e14da5c3\") " pod="openshift-marketplace/redhat-operators-skh6q" Oct 13 17:26:46 crc kubenswrapper[4720]: I1013 17:26:46.192407 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3508e8e0-82f4-4f2e-92d7-32cca02f3656-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"3508e8e0-82f4-4f2e-92d7-32cca02f3656\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 13 17:26:46 crc kubenswrapper[4720]: I1013 17:26:46.193410 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-skh6q"] Oct 13 17:26:46 crc kubenswrapper[4720]: I1013 17:26:46.228507 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3508e8e0-82f4-4f2e-92d7-32cca02f3656-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"3508e8e0-82f4-4f2e-92d7-32cca02f3656\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 13 17:26:46 crc kubenswrapper[4720]: I1013 17:26:46.293338 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cca499e4-939e-4ba7-b98b-2965e14da5c3-catalog-content\") pod \"redhat-operators-skh6q\" (UID: \"cca499e4-939e-4ba7-b98b-2965e14da5c3\") " pod="openshift-marketplace/redhat-operators-skh6q" Oct 13 17:26:46 crc kubenswrapper[4720]: I1013 17:26:46.293569 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cca499e4-939e-4ba7-b98b-2965e14da5c3-utilities\") pod \"redhat-operators-skh6q\" (UID: \"cca499e4-939e-4ba7-b98b-2965e14da5c3\") " pod="openshift-marketplace/redhat-operators-skh6q" Oct 13 17:26:46 crc kubenswrapper[4720]: I1013 17:26:46.293608 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgrwm\" (UniqueName: \"kubernetes.io/projected/cca499e4-939e-4ba7-b98b-2965e14da5c3-kube-api-access-rgrwm\") pod \"redhat-operators-skh6q\" (UID: \"cca499e4-939e-4ba7-b98b-2965e14da5c3\") " pod="openshift-marketplace/redhat-operators-skh6q" Oct 13 17:26:46 crc kubenswrapper[4720]: I1013 17:26:46.294270 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cca499e4-939e-4ba7-b98b-2965e14da5c3-catalog-content\") pod \"redhat-operators-skh6q\" (UID: \"cca499e4-939e-4ba7-b98b-2965e14da5c3\") " pod="openshift-marketplace/redhat-operators-skh6q" Oct 13 17:26:46 crc kubenswrapper[4720]: I1013 17:26:46.294394 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cca499e4-939e-4ba7-b98b-2965e14da5c3-utilities\") pod \"redhat-operators-skh6q\" (UID: \"cca499e4-939e-4ba7-b98b-2965e14da5c3\") " pod="openshift-marketplace/redhat-operators-skh6q" Oct 13 17:26:46 crc kubenswrapper[4720]: I1013 17:26:46.312004 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgrwm\" (UniqueName: \"kubernetes.io/projected/cca499e4-939e-4ba7-b98b-2965e14da5c3-kube-api-access-rgrwm\") pod \"redhat-operators-skh6q\" (UID: \"cca499e4-939e-4ba7-b98b-2965e14da5c3\") " pod="openshift-marketplace/redhat-operators-skh6q" Oct 13 17:26:46 crc kubenswrapper[4720]: I1013 17:26:46.344970 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29339595-f78vr" event={"ID":"0a6d69da-3074-4b30-898b-4bb2eea1fb75","Type":"ContainerDied","Data":"38f706bf13060c14021de43434fb358d4b0561b2d45d668d2ef483b998a3e303"} Oct 13 17:26:46 crc kubenswrapper[4720]: I1013 17:26:46.345015 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38f706bf13060c14021de43434fb358d4b0561b2d45d668d2ef483b998a3e303" Oct 13 17:26:46 crc kubenswrapper[4720]: I1013 17:26:46.345039 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29339595-f78vr" Oct 13 17:26:46 crc kubenswrapper[4720]: I1013 17:26:46.358881 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m4v6f" event={"ID":"949931eb-c57d-40eb-a3da-f19524cbaf1d","Type":"ContainerStarted","Data":"3f63e8c43f000b858c98287a4b093ce64e949847eec0d32ff2cc77663b52aedc"} Oct 13 17:26:46 crc kubenswrapper[4720]: I1013 17:26:46.367582 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 13 17:26:46 crc kubenswrapper[4720]: I1013 17:26:46.375471 4720 generic.go:334] "Generic (PLEG): container finished" podID="cba388bf-d5f4-4a4e-8add-d8e4d7489f14" containerID="41ce6f38cc9fe7460d5633aea179b70f95795f553d935a0122ac68ea30a9e46c" exitCode=0 Oct 13 17:26:46 crc kubenswrapper[4720]: I1013 17:26:46.376793 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dp5pr" event={"ID":"cba388bf-d5f4-4a4e-8add-d8e4d7489f14","Type":"ContainerDied","Data":"41ce6f38cc9fe7460d5633aea179b70f95795f553d935a0122ac68ea30a9e46c"} Oct 13 17:26:46 crc kubenswrapper[4720]: I1013 17:26:46.376854 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dp5pr" event={"ID":"cba388bf-d5f4-4a4e-8add-d8e4d7489f14","Type":"ContainerStarted","Data":"58338dcc5638e39a983be7b708d59d0394e88673f6992b54f3ef4a9254985c97"} Oct 13 17:26:46 crc kubenswrapper[4720]: I1013 17:26:46.493818 4720 patch_prober.go:28] interesting pod/downloads-7954f5f757-l8tzk container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Oct 13 17:26:46 crc kubenswrapper[4720]: I1013 17:26:46.494180 4720 patch_prober.go:28] interesting pod/downloads-7954f5f757-l8tzk container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Oct 13 17:26:46 crc kubenswrapper[4720]: I1013 17:26:46.494306 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-l8tzk" podUID="e33fd1d2-081b-4e68-ab37-623406daeaeb" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Oct 13 17:26:46 crc kubenswrapper[4720]: I1013 17:26:46.494622 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-l8tzk" podUID="e33fd1d2-081b-4e68-ab37-623406daeaeb" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Oct 13 17:26:46 crc kubenswrapper[4720]: I1013 17:26:46.538505 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-skh6q" Oct 13 17:26:46 crc kubenswrapper[4720]: I1013 17:26:46.581765 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dhtzn"] Oct 13 17:26:46 crc kubenswrapper[4720]: I1013 17:26:46.582963 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dhtzn" Oct 13 17:26:46 crc kubenswrapper[4720]: I1013 17:26:46.599912 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dhtzn"] Oct 13 17:26:46 crc kubenswrapper[4720]: I1013 17:26:46.701296 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a4310d4-4aaa-4de5-be61-46eb59ac5d9b-catalog-content\") pod \"redhat-operators-dhtzn\" (UID: \"0a4310d4-4aaa-4de5-be61-46eb59ac5d9b\") " pod="openshift-marketplace/redhat-operators-dhtzn" Oct 13 17:26:46 crc kubenswrapper[4720]: I1013 17:26:46.701452 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nd9cr\" (UniqueName: \"kubernetes.io/projected/0a4310d4-4aaa-4de5-be61-46eb59ac5d9b-kube-api-access-nd9cr\") pod \"redhat-operators-dhtzn\" (UID: \"0a4310d4-4aaa-4de5-be61-46eb59ac5d9b\") " pod="openshift-marketplace/redhat-operators-dhtzn" Oct 13 17:26:46 crc kubenswrapper[4720]: I1013 17:26:46.701489 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a4310d4-4aaa-4de5-be61-46eb59ac5d9b-utilities\") pod \"redhat-operators-dhtzn\" (UID: \"0a4310d4-4aaa-4de5-be61-46eb59ac5d9b\") " pod="openshift-marketplace/redhat-operators-dhtzn" Oct 13 17:26:46 crc kubenswrapper[4720]: I1013 17:26:46.729174 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-5cfzh" Oct 13 17:26:46 crc kubenswrapper[4720]: I1013 17:26:46.729299 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-5cfzh" Oct 13 17:26:46 crc kubenswrapper[4720]: I1013 17:26:46.729533 4720 patch_prober.go:28] interesting pod/console-f9d7485db-5cfzh container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Oct 13 17:26:46 crc kubenswrapper[4720]: I1013 17:26:46.729613 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-5cfzh" podUID="b0af5887-2244-4dfb-8e2a-a66ac6bf6762" containerName="console" probeResult="failure" output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" Oct 13 17:26:46 crc kubenswrapper[4720]: I1013 17:26:46.802515 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nd9cr\" (UniqueName: \"kubernetes.io/projected/0a4310d4-4aaa-4de5-be61-46eb59ac5d9b-kube-api-access-nd9cr\") pod \"redhat-operators-dhtzn\" (UID: \"0a4310d4-4aaa-4de5-be61-46eb59ac5d9b\") " pod="openshift-marketplace/redhat-operators-dhtzn" Oct 13 17:26:46 crc kubenswrapper[4720]: I1013 17:26:46.802573 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a4310d4-4aaa-4de5-be61-46eb59ac5d9b-utilities\") pod \"redhat-operators-dhtzn\" (UID: \"0a4310d4-4aaa-4de5-be61-46eb59ac5d9b\") " pod="openshift-marketplace/redhat-operators-dhtzn" Oct 13 17:26:46 crc kubenswrapper[4720]: I1013 17:26:46.802660 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/0a4310d4-4aaa-4de5-be61-46eb59ac5d9b-catalog-content\") pod \"redhat-operators-dhtzn\" (UID: \"0a4310d4-4aaa-4de5-be61-46eb59ac5d9b\") " pod="openshift-marketplace/redhat-operators-dhtzn" Oct 13 17:26:46 crc kubenswrapper[4720]: I1013 17:26:46.803315 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a4310d4-4aaa-4de5-be61-46eb59ac5d9b-utilities\") pod \"redhat-operators-dhtzn\" (UID: \"0a4310d4-4aaa-4de5-be61-46eb59ac5d9b\") " pod="openshift-marketplace/redhat-operators-dhtzn" Oct 13 17:26:46 crc kubenswrapper[4720]: I1013 17:26:46.803596 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a4310d4-4aaa-4de5-be61-46eb59ac5d9b-catalog-content\") pod \"redhat-operators-dhtzn\" (UID: \"0a4310d4-4aaa-4de5-be61-46eb59ac5d9b\") " pod="openshift-marketplace/redhat-operators-dhtzn" Oct 13 17:26:46 crc kubenswrapper[4720]: I1013 17:26:46.820177 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nd9cr\" (UniqueName: \"kubernetes.io/projected/0a4310d4-4aaa-4de5-be61-46eb59ac5d9b-kube-api-access-nd9cr\") pod \"redhat-operators-dhtzn\" (UID: \"0a4310d4-4aaa-4de5-be61-46eb59ac5d9b\") " pod="openshift-marketplace/redhat-operators-dhtzn" Oct 13 17:26:46 crc kubenswrapper[4720]: I1013 17:26:46.837548 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Oct 13 17:26:46 crc kubenswrapper[4720]: W1013 17:26:46.850452 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod3508e8e0_82f4_4f2e_92d7_32cca02f3656.slice/crio-746d819ca1edba078a47ecbb4331b7fe2bcb42fb60cf560cc80eb542b80f877a WatchSource:0}: Error finding container 746d819ca1edba078a47ecbb4331b7fe2bcb42fb60cf560cc80eb542b80f877a: Status 404 returned error can't find the container with id 746d819ca1edba078a47ecbb4331b7fe2bcb42fb60cf560cc80eb542b80f877a Oct 13 17:26:46 crc kubenswrapper[4720]: I1013 17:26:46.905461 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dhtzn" Oct 13 17:26:46 crc kubenswrapper[4720]: I1013 17:26:46.912668 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-vttl9" Oct 13 17:26:46 crc kubenswrapper[4720]: I1013 17:26:46.928601 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-skh6q"] Oct 13 17:26:46 crc kubenswrapper[4720]: I1013 17:26:46.947569 4720 patch_prober.go:28] interesting pod/router-default-5444994796-vttl9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 13 17:26:46 crc kubenswrapper[4720]: [-]has-synced failed: reason withheld Oct 13 17:26:46 crc kubenswrapper[4720]: [+]process-running ok Oct 13 17:26:46 crc kubenswrapper[4720]: healthz check failed Oct 13 17:26:46 crc kubenswrapper[4720]: I1013 17:26:46.947869 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-vttl9" podUID="a13b5004-f884-4558-b38b-b3c8028a73d5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 13 17:26:46 crc kubenswrapper[4720]: I1013 17:26:46.990249 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-xpdm2" Oct 13 17:26:47 crc kubenswrapper[4720]: I1013 17:26:47.401386 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"3508e8e0-82f4-4f2e-92d7-32cca02f3656","Type":"ContainerStarted","Data":"746d819ca1edba078a47ecbb4331b7fe2bcb42fb60cf560cc80eb542b80f877a"} Oct 13 17:26:47 crc kubenswrapper[4720]: I1013 17:26:47.407284 4720 generic.go:334] "Generic (PLEG): container finished" podID="cca499e4-939e-4ba7-b98b-2965e14da5c3" containerID="427183d1775f8d339d18c7ad16a9c53f5af7053536a5f706cb51cb1255670629" exitCode=0 Oct 13 17:26:47 crc kubenswrapper[4720]: I1013 17:26:47.407350 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-skh6q" event={"ID":"cca499e4-939e-4ba7-b98b-2965e14da5c3","Type":"ContainerDied","Data":"427183d1775f8d339d18c7ad16a9c53f5af7053536a5f706cb51cb1255670629"} Oct 13 17:26:47 crc kubenswrapper[4720]: I1013 17:26:47.407375 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-skh6q" event={"ID":"cca499e4-939e-4ba7-b98b-2965e14da5c3","Type":"ContainerStarted","Data":"67bfcb29092f7b4b0e1ae9b84aba87c0fd09ac37a432e5f402349c2e7ae402db"} Oct 13 17:26:47 crc kubenswrapper[4720]: I1013 17:26:47.409080 4720 generic.go:334] "Generic (PLEG): container finished" podID="949931eb-c57d-40eb-a3da-f19524cbaf1d" containerID="531b0abe3f21349d4b9693d45261ebdfd914212789dfb75b01246c97c8b83879" exitCode=0 Oct 13 17:26:47 crc kubenswrapper[4720]: I1013 17:26:47.409127 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m4v6f" event={"ID":"949931eb-c57d-40eb-a3da-f19524cbaf1d","Type":"ContainerDied","Data":"531b0abe3f21349d4b9693d45261ebdfd914212789dfb75b01246c97c8b83879"} Oct 13 17:26:47 crc kubenswrapper[4720]: I1013 17:26:47.433880 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dhtzn"] Oct 13 17:26:47 crc kubenswrapper[4720]: I1013 17:26:47.917285 4720 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-vttl9" Oct 13 17:26:47 crc kubenswrapper[4720]: I1013 17:26:47.920124 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-vttl9" Oct 13 17:26:48 crc kubenswrapper[4720]: I1013 17:26:48.440671 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Oct 13 17:26:48 crc kubenswrapper[4720]: I1013 17:26:48.444883 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 13 17:26:48 crc kubenswrapper[4720]: I1013 17:26:48.446116 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Oct 13 17:26:48 crc kubenswrapper[4720]: I1013 17:26:48.447311 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Oct 13 17:26:48 crc kubenswrapper[4720]: I1013 17:26:48.448246 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Oct 13 17:26:48 crc kubenswrapper[4720]: I1013 17:26:48.502083 4720 generic.go:334] "Generic (PLEG): container finished" podID="3508e8e0-82f4-4f2e-92d7-32cca02f3656" containerID="b55c851ce8a9560880a546367bb9b5eef06d69f47de4f5b7ee099c25438189a5" exitCode=0 Oct 13 17:26:48 crc kubenswrapper[4720]: I1013 17:26:48.502147 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"3508e8e0-82f4-4f2e-92d7-32cca02f3656","Type":"ContainerDied","Data":"b55c851ce8a9560880a546367bb9b5eef06d69f47de4f5b7ee099c25438189a5"} Oct 13 17:26:48 crc kubenswrapper[4720]: I1013 17:26:48.504133 4720 generic.go:334] "Generic (PLEG): container finished" podID="0a4310d4-4aaa-4de5-be61-46eb59ac5d9b" containerID="658f335d1cbc20b87cd50d7925a4e563accd9093e8a36ee90c84a2bd0052fe20" exitCode=0 Oct 13 17:26:48 crc kubenswrapper[4720]: I1013 17:26:48.505105 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dhtzn" event={"ID":"0a4310d4-4aaa-4de5-be61-46eb59ac5d9b","Type":"ContainerDied","Data":"658f335d1cbc20b87cd50d7925a4e563accd9093e8a36ee90c84a2bd0052fe20"} Oct 13 17:26:48 crc kubenswrapper[4720]: I1013 17:26:48.505126 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dhtzn" event={"ID":"0a4310d4-4aaa-4de5-be61-46eb59ac5d9b","Type":"ContainerStarted","Data":"0e5eff35fd869e190f671e7ba6f4e85d4a110fdf81146f096efb473701cf9e7d"} Oct 13 17:26:48 crc kubenswrapper[4720]: I1013 17:26:48.546154 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/299b146d-27f2-455c-8d9a-8ae9829d570c-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"299b146d-27f2-455c-8d9a-8ae9829d570c\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 13 17:26:48 crc kubenswrapper[4720]: I1013 17:26:48.546213 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/299b146d-27f2-455c-8d9a-8ae9829d570c-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"299b146d-27f2-455c-8d9a-8ae9829d570c\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 13 17:26:48 crc kubenswrapper[4720]: I1013 17:26:48.647023 4720 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/299b146d-27f2-455c-8d9a-8ae9829d570c-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"299b146d-27f2-455c-8d9a-8ae9829d570c\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 13 17:26:48 crc kubenswrapper[4720]: I1013 17:26:48.647078 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/299b146d-27f2-455c-8d9a-8ae9829d570c-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"299b146d-27f2-455c-8d9a-8ae9829d570c\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 13 17:26:48 crc kubenswrapper[4720]: I1013 17:26:48.648163 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/299b146d-27f2-455c-8d9a-8ae9829d570c-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"299b146d-27f2-455c-8d9a-8ae9829d570c\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 13 17:26:48 crc kubenswrapper[4720]: I1013 17:26:48.692569 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/299b146d-27f2-455c-8d9a-8ae9829d570c-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"299b146d-27f2-455c-8d9a-8ae9829d570c\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 13 17:26:48 crc kubenswrapper[4720]: I1013 17:26:48.792388 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 13 17:26:49 crc kubenswrapper[4720]: I1013 17:26:49.069845 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-pmrm5" Oct 13 17:26:49 crc kubenswrapper[4720]: I1013 17:26:49.430851 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Oct 13 17:26:49 crc kubenswrapper[4720]: W1013 17:26:49.504532 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod299b146d_27f2_455c_8d9a_8ae9829d570c.slice/crio-eee615ac1245405ac3a2e6fbdeec3e58fa69878a52ef2b41db82fc8a085b652c WatchSource:0}: Error finding container eee615ac1245405ac3a2e6fbdeec3e58fa69878a52ef2b41db82fc8a085b652c: Status 404 returned error can't find the container with id eee615ac1245405ac3a2e6fbdeec3e58fa69878a52ef2b41db82fc8a085b652c Oct 13 17:26:50 crc kubenswrapper[4720]: I1013 17:26:49.995802 4720 util.go:48] "No ready sandbox for pod can be found. 
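[Annotation] The reconciler_common.go entries above trace kubelet's volume reconciler for revision-pruner-8-crc in order: VerifyControllerAttachedVolume, MountVolume started, MountVolume.SetUp succeeded. The underlying pattern is a desired-state/actual-state loop; the Go sketch below is hypothetical (the names and types are illustrative, not kubelet's real ones).

    package main

    import "fmt"

    // volumeState maps a volume name to whether it is currently mounted.
    type volumeState map[string]bool

    // reconcile drives actual state toward desired state, the way the
    // reconciler_common.go entries read: mount what is desired but absent,
    // unmount what is present but no longer desired.
    func reconcile(desired, actual volumeState, mount, unmount func(string) error) {
        for v := range desired {
            if !actual[v] {
                if err := mount(v); err == nil {
                    actual[v] = true // "MountVolume.SetUp succeeded"
                }
            }
        }
        for v := range actual {
            if !desired[v] {
                if err := unmount(v); err == nil {
                    delete(actual, v) // "Volume detached"
                }
            }
        }
    }

    func main() {
        desired := volumeState{"kube-api-access": true, "kubelet-dir": true}
        actual := volumeState{}
        log := func(op string) func(string) error {
            return func(v string) error { fmt.Println(op, v); return nil }
        }
        reconcile(desired, actual, log("mount"), log("unmount"))
    }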
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 13 17:26:50 crc kubenswrapper[4720]: I1013 17:26:50.083487 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3508e8e0-82f4-4f2e-92d7-32cca02f3656-kubelet-dir\") pod \"3508e8e0-82f4-4f2e-92d7-32cca02f3656\" (UID: \"3508e8e0-82f4-4f2e-92d7-32cca02f3656\") " Oct 13 17:26:50 crc kubenswrapper[4720]: I1013 17:26:50.083581 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3508e8e0-82f4-4f2e-92d7-32cca02f3656-kube-api-access\") pod \"3508e8e0-82f4-4f2e-92d7-32cca02f3656\" (UID: \"3508e8e0-82f4-4f2e-92d7-32cca02f3656\") " Oct 13 17:26:50 crc kubenswrapper[4720]: I1013 17:26:50.083973 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3508e8e0-82f4-4f2e-92d7-32cca02f3656-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "3508e8e0-82f4-4f2e-92d7-32cca02f3656" (UID: "3508e8e0-82f4-4f2e-92d7-32cca02f3656"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 13 17:26:50 crc kubenswrapper[4720]: I1013 17:26:50.091315 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3508e8e0-82f4-4f2e-92d7-32cca02f3656-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3508e8e0-82f4-4f2e-92d7-32cca02f3656" (UID: "3508e8e0-82f4-4f2e-92d7-32cca02f3656"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:26:50 crc kubenswrapper[4720]: I1013 17:26:50.185309 4720 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3508e8e0-82f4-4f2e-92d7-32cca02f3656-kubelet-dir\") on node \"crc\" DevicePath \"\"" Oct 13 17:26:50 crc kubenswrapper[4720]: I1013 17:26:50.185353 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3508e8e0-82f4-4f2e-92d7-32cca02f3656-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 13 17:26:50 crc kubenswrapper[4720]: I1013 17:26:50.539005 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 13 17:26:50 crc kubenswrapper[4720]: I1013 17:26:50.539049 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"3508e8e0-82f4-4f2e-92d7-32cca02f3656","Type":"ContainerDied","Data":"746d819ca1edba078a47ecbb4331b7fe2bcb42fb60cf560cc80eb542b80f877a"} Oct 13 17:26:50 crc kubenswrapper[4720]: I1013 17:26:50.539359 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="746d819ca1edba078a47ecbb4331b7fe2bcb42fb60cf560cc80eb542b80f877a" Oct 13 17:26:50 crc kubenswrapper[4720]: I1013 17:26:50.547276 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"299b146d-27f2-455c-8d9a-8ae9829d570c","Type":"ContainerStarted","Data":"ea4e3a27990aedac1785c188fe4cd971ca8569aca9abccd764c511ca24a5ef64"} Oct 13 17:26:50 crc kubenswrapper[4720]: I1013 17:26:50.547308 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"299b146d-27f2-455c-8d9a-8ae9829d570c","Type":"ContainerStarted","Data":"eee615ac1245405ac3a2e6fbdeec3e58fa69878a52ef2b41db82fc8a085b652c"} Oct 13 17:26:51 crc kubenswrapper[4720]: I1013 17:26:51.563126 4720 generic.go:334] "Generic (PLEG): container finished" podID="299b146d-27f2-455c-8d9a-8ae9829d570c" containerID="ea4e3a27990aedac1785c188fe4cd971ca8569aca9abccd764c511ca24a5ef64" exitCode=0 Oct 13 17:26:51 crc kubenswrapper[4720]: I1013 17:26:51.563228 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"299b146d-27f2-455c-8d9a-8ae9829d570c","Type":"ContainerDied","Data":"ea4e3a27990aedac1785c188fe4cd971ca8569aca9abccd764c511ca24a5ef64"} Oct 13 17:26:56 crc kubenswrapper[4720]: I1013 17:26:56.498803 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-l8tzk" Oct 13 17:26:56 crc kubenswrapper[4720]: I1013 17:26:56.731217 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-5cfzh" Oct 13 17:26:56 crc kubenswrapper[4720]: I1013 17:26:56.741820 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-5cfzh" Oct 13 17:26:57 crc kubenswrapper[4720]: I1013 17:26:57.499608 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 13 17:26:57 crc kubenswrapper[4720]: I1013 17:26:57.566953 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" Oct 13 17:26:57 crc kubenswrapper[4720]: I1013 17:26:57.587081 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/299b146d-27f2-455c-8d9a-8ae9829d570c-kubelet-dir\") pod \"299b146d-27f2-455c-8d9a-8ae9829d570c\" (UID: \"299b146d-27f2-455c-8d9a-8ae9829d570c\") " Oct 13 17:26:57 crc kubenswrapper[4720]: I1013 17:26:57.587216 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/299b146d-27f2-455c-8d9a-8ae9829d570c-kube-api-access\") pod \"299b146d-27f2-455c-8d9a-8ae9829d570c\" (UID: \"299b146d-27f2-455c-8d9a-8ae9829d570c\") " Oct 13 17:26:57 crc kubenswrapper[4720]: I1013 17:26:57.587220 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/299b146d-27f2-455c-8d9a-8ae9829d570c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "299b146d-27f2-455c-8d9a-8ae9829d570c" (UID: "299b146d-27f2-455c-8d9a-8ae9829d570c"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 13 17:26:57 crc kubenswrapper[4720]: I1013 17:26:57.587422 4720 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/299b146d-27f2-455c-8d9a-8ae9829d570c-kubelet-dir\") on node \"crc\" DevicePath \"\"" Oct 13 17:26:57 crc kubenswrapper[4720]: I1013 17:26:57.592722 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/299b146d-27f2-455c-8d9a-8ae9829d570c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "299b146d-27f2-455c-8d9a-8ae9829d570c" (UID: "299b146d-27f2-455c-8d9a-8ae9829d570c"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:26:57 crc kubenswrapper[4720]: I1013 17:26:57.605589 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"299b146d-27f2-455c-8d9a-8ae9829d570c","Type":"ContainerDied","Data":"eee615ac1245405ac3a2e6fbdeec3e58fa69878a52ef2b41db82fc8a085b652c"} Oct 13 17:26:57 crc kubenswrapper[4720]: I1013 17:26:57.605616 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 13 17:26:57 crc kubenswrapper[4720]: I1013 17:26:57.605627 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eee615ac1245405ac3a2e6fbdeec3e58fa69878a52ef2b41db82fc8a085b652c" Oct 13 17:26:57 crc kubenswrapper[4720]: I1013 17:26:57.691094 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/299b146d-27f2-455c-8d9a-8ae9829d570c-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 13 17:27:02 crc kubenswrapper[4720]: I1013 17:27:02.359110 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61-metrics-certs\") pod \"network-metrics-daemon-c6ntg\" (UID: \"c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61\") " pod="openshift-multus/network-metrics-daemon-c6ntg" Oct 13 17:27:02 crc kubenswrapper[4720]: I1013 17:27:02.384114 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61-metrics-certs\") pod \"network-metrics-daemon-c6ntg\" (UID: \"c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61\") " pod="openshift-multus/network-metrics-daemon-c6ntg" Oct 13 17:27:02 crc kubenswrapper[4720]: I1013 17:27:02.405298 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-c6ntg" Oct 13 17:27:03 crc kubenswrapper[4720]: I1013 17:27:03.674542 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-w5xfl" Oct 13 17:27:11 crc kubenswrapper[4720]: E1013 17:27:11.962134 4720 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Oct 13 17:27:11 crc kubenswrapper[4720]: E1013 17:27:11.962956 4720 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h74rp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-85tbj_openshift-marketplace(4ec46228-40b7-4dd0-b773-59e2b088ef17): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 13 17:27:11 crc kubenswrapper[4720]: E1013 17:27:11.964454 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-85tbj" podUID="4ec46228-40b7-4dd0-b773-59e2b088ef17" Oct 13 17:27:12 crc kubenswrapper[4720]: I1013 17:27:12.395725 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-c6ntg"] Oct 13 17:27:12 crc kubenswrapper[4720]: W1013 17:27:12.406892 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc1ce1f4c_9fc2_4364_91fc_dd9dcbfc8c61.slice/crio-20ec1a66718044cb436cd36ffa442f2a909a24e28e04fdd66ff3cbd01d4a4895 WatchSource:0}: Error finding container 20ec1a66718044cb436cd36ffa442f2a909a24e28e04fdd66ff3cbd01d4a4895: Status 404 returned error can't find the container with id 20ec1a66718044cb436cd36ffa442f2a909a24e28e04fdd66ff3cbd01d4a4895 Oct 13 17:27:12 crc kubenswrapper[4720]: I1013 17:27:12.691821 4720 generic.go:334] "Generic (PLEG): container finished" podID="24757001-c6f3-4ecc-9b71-8fa697a28390" containerID="c6f37dabd8d97e52205949791e387bd853be569cb24db811983503a3abc556ce" exitCode=0 Oct 13 17:27:12 crc kubenswrapper[4720]: I1013 17:27:12.691892 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-crcm9" event={"ID":"24757001-c6f3-4ecc-9b71-8fa697a28390","Type":"ContainerDied","Data":"c6f37dabd8d97e52205949791e387bd853be569cb24db811983503a3abc556ce"} Oct 13 17:27:12 crc kubenswrapper[4720]: I1013 17:27:12.695474 4720 generic.go:334] "Generic (PLEG): container finished" podID="9a30ffbf-3240-409c-8c06-d9e12a160aab" containerID="826e4a20690bcb0f0aa0f6c4e56d2111d0b65da0031b2d4b628c43d61f0d1eba" exitCode=0 Oct 13 17:27:12 crc 
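[Annotation] The ErrImagePull for community-operators-85tbj above (a canceled copy from registry.redhat.io) is not fatal on its own: kubelet re-queues the pull and, as the ImagePullBackOff entry a few lines below shows, backs off between attempts. The delay grows roughly exponentially up to a ceiling; the 10s base and 5m cap in this illustrative Go sketch are the commonly cited kubelet defaults, not values read from this log.

    package main

    import (
        "fmt"
        "time"
    )

    // Exponential image-pull back-off of the kind behind the
    // ImagePullBackOff entry below: double the delay after each failed
    // attempt, capped at a maximum.
    func main() {
        delay, maxDelay := 10*time.Second, 5*time.Minute // assumed defaults
        for attempt := 1; attempt <= 6; attempt++ {
            fmt.Printf("attempt %d failed: back-off %v before next pull\n", attempt, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }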
Oct 13 17:27:12 crc kubenswrapper[4720]: I1013 17:27:12.697343 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6n8tc" event={"ID":"45a4b9f0-8d20-4a69-bd03-2c54f1a66867","Type":"ContainerDied","Data":"89923e28ff7b52cae40ac9293ef7a90e6cb8e7616d954b82130dddad23d6ae55"}
Oct 13 17:27:12 crc kubenswrapper[4720]: I1013 17:27:12.697648 4720 generic.go:334] "Generic (PLEG): container finished" podID="45a4b9f0-8d20-4a69-bd03-2c54f1a66867" containerID="89923e28ff7b52cae40ac9293ef7a90e6cb8e7616d954b82130dddad23d6ae55" exitCode=0
Oct 13 17:27:12 crc kubenswrapper[4720]: I1013 17:27:12.703284 4720 generic.go:334] "Generic (PLEG): container finished" podID="cba388bf-d5f4-4a4e-8add-d8e4d7489f14" containerID="16966728b322fb302e5cc8d1cbd3329c3123673d9d7834071a6b2a14ae159e56" exitCode=0
Oct 13 17:27:12 crc kubenswrapper[4720]: I1013 17:27:12.703351 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dp5pr" event={"ID":"cba388bf-d5f4-4a4e-8add-d8e4d7489f14","Type":"ContainerDied","Data":"16966728b322fb302e5cc8d1cbd3329c3123673d9d7834071a6b2a14ae159e56"}
Oct 13 17:27:12 crc kubenswrapper[4720]: I1013 17:27:12.707998 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-skh6q" event={"ID":"cca499e4-939e-4ba7-b98b-2965e14da5c3","Type":"ContainerStarted","Data":"2ed0c853721c429187d42c4b8c88c42d897ed63f340cfc7558bdc44e908ad230"}
Oct 13 17:27:12 crc kubenswrapper[4720]: I1013 17:27:12.710150 4720 generic.go:334] "Generic (PLEG): container finished" podID="949931eb-c57d-40eb-a3da-f19524cbaf1d" containerID="a3443b760cb93019e2d405d6f1624a50aa2e8f3d8c1d61ee149cde6561b0ed83" exitCode=0
Oct 13 17:27:12 crc kubenswrapper[4720]: I1013 17:27:12.710242 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m4v6f" event={"ID":"949931eb-c57d-40eb-a3da-f19524cbaf1d","Type":"ContainerDied","Data":"a3443b760cb93019e2d405d6f1624a50aa2e8f3d8c1d61ee149cde6561b0ed83"}
Oct 13 17:27:12 crc kubenswrapper[4720]: I1013 17:27:12.737195 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dhtzn" event={"ID":"0a4310d4-4aaa-4de5-be61-46eb59ac5d9b","Type":"ContainerStarted","Data":"e379b0e23442bcc813fa938f822bff4a2eae3928bd687ed061587cc5620a850d"}
Oct 13 17:27:12 crc kubenswrapper[4720]: I1013 17:27:12.744709 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-c6ntg" event={"ID":"c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61","Type":"ContainerStarted","Data":"a7e46356406ac9b217f3dd18f4100962b138cb04ad864977222edd963072c8e5"}
Oct 13 17:27:12 crc kubenswrapper[4720]: I1013 17:27:12.744754 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-c6ntg" event={"ID":"c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61","Type":"ContainerStarted","Data":"20ec1a66718044cb436cd36ffa442f2a909a24e28e04fdd66ff3cbd01d4a4895"}
Oct 13 17:27:12 crc kubenswrapper[4720]: E1013 17:27:12.748442 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-85tbj" podUID="4ec46228-40b7-4dd0-b773-59e2b088ef17"
Oct 13 17:27:13 crc kubenswrapper[4720]: I1013 17:27:13.765268 4720 generic.go:334] "Generic (PLEG): container finished" podID="cca499e4-939e-4ba7-b98b-2965e14da5c3" containerID="2ed0c853721c429187d42c4b8c88c42d897ed63f340cfc7558bdc44e908ad230" exitCode=0
Oct 13 17:27:13 crc kubenswrapper[4720]: I1013 17:27:13.765643 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-skh6q" event={"ID":"cca499e4-939e-4ba7-b98b-2965e14da5c3","Type":"ContainerDied","Data":"2ed0c853721c429187d42c4b8c88c42d897ed63f340cfc7558bdc44e908ad230"}
Oct 13 17:27:13 crc kubenswrapper[4720]: I1013 17:27:13.775839 4720 generic.go:334] "Generic (PLEG): container finished" podID="0a4310d4-4aaa-4de5-be61-46eb59ac5d9b" containerID="e379b0e23442bcc813fa938f822bff4a2eae3928bd687ed061587cc5620a850d" exitCode=0
Oct 13 17:27:13 crc kubenswrapper[4720]: I1013 17:27:13.775921 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dhtzn" event={"ID":"0a4310d4-4aaa-4de5-be61-46eb59ac5d9b","Type":"ContainerDied","Data":"e379b0e23442bcc813fa938f822bff4a2eae3928bd687ed061587cc5620a850d"}
Oct 13 17:27:13 crc kubenswrapper[4720]: I1013 17:27:13.779531 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-c6ntg" event={"ID":"c1ce1f4c-9fc2-4364-91fc-dd9dcbfc8c61","Type":"ContainerStarted","Data":"4d2a177f94b893393130f16061c55395c1e5a27df23647dba1302ac86dcf041c"}
Oct 13 17:27:13 crc kubenswrapper[4720]: I1013 17:27:13.782694 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-crcm9" event={"ID":"24757001-c6f3-4ecc-9b71-8fa697a28390","Type":"ContainerStarted","Data":"99604ef1ec0076fd63a03010dc24da695520c71f931a1a737353e42a5a594750"}
Oct 13 17:27:13 crc kubenswrapper[4720]: I1013 17:27:13.784844 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pq4c6" event={"ID":"9a30ffbf-3240-409c-8c06-d9e12a160aab","Type":"ContainerStarted","Data":"ca8dede282c99d75e25442624105964c832dc53a8df4f96d5633b227b22af288"}
Oct 13 17:27:13 crc kubenswrapper[4720]: I1013 17:27:13.832297 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-c6ntg" podStartSLOduration=154.832275624 podStartE2EDuration="2m34.832275624s" podCreationTimestamp="2025-10-13 17:24:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:27:13.828906108 +0000 UTC m=+179.286156250" watchObservedRunningTime="2025-10-13 17:27:13.832275624 +0000 UTC m=+179.289525766"
Oct 13 17:27:13 crc kubenswrapper[4720]: I1013 17:27:13.856930 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-crcm9" podStartSLOduration=2.691806685 podStartE2EDuration="30.856912533s" podCreationTimestamp="2025-10-13 17:26:43 +0000 UTC" firstStartedPulling="2025-10-13 17:26:45.33044158 +0000 UTC m=+150.787691712" lastFinishedPulling="2025-10-13 17:27:13.495547418 +0000 UTC m=+178.952797560" observedRunningTime="2025-10-13 17:27:13.852703155 +0000 UTC m=+179.309953297" watchObservedRunningTime="2025-10-13 17:27:13.856912533 +0000 UTC m=+179.314162685"
Oct 13 17:27:13 crc kubenswrapper[4720]: I1013 17:27:13.940120 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-crcm9"
Oct 13 17:27:13 crc kubenswrapper[4720]: I1013 17:27:13.940177 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-crcm9"
Oct 13 17:27:14 crc kubenswrapper[4720]: I1013 17:27:14.793358 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6n8tc" event={"ID":"45a4b9f0-8d20-4a69-bd03-2c54f1a66867","Type":"ContainerStarted","Data":"cf65fe1c986ef26b2cbcd07d95fc4524019664b900ead71f49b7b97a7370ccac"}
Oct 13 17:27:14 crc kubenswrapper[4720]: I1013 17:27:14.796247 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dp5pr" event={"ID":"cba388bf-d5f4-4a4e-8add-d8e4d7489f14","Type":"ContainerStarted","Data":"b1a3a0262a0118f50aac13c998dc0a0112a4fd47dfd4aa7ef890f7df6b4556fd"}
Oct 13 17:27:14 crc kubenswrapper[4720]: I1013 17:27:14.798917 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-skh6q" event={"ID":"cca499e4-939e-4ba7-b98b-2965e14da5c3","Type":"ContainerStarted","Data":"acd6732fa35e11a229552026ed184ffe35dee5797908a4230bdc446c7bd0d5a0"}
Oct 13 17:27:14 crc kubenswrapper[4720]: I1013 17:27:14.800752 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m4v6f" event={"ID":"949931eb-c57d-40eb-a3da-f19524cbaf1d","Type":"ContainerStarted","Data":"5752192cb52959d19cf9900b9b55c52eaab072cbc787fed587677d75efcb1f4f"}
Oct 13 17:27:14 crc kubenswrapper[4720]: I1013 17:27:14.802779 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dhtzn" event={"ID":"0a4310d4-4aaa-4de5-be61-46eb59ac5d9b","Type":"ContainerStarted","Data":"1fb878281b00ca7fd1521fb0ce72dcd352ecf858365d6f08fc096ede95bc4a0e"}
Oct 13 17:27:14 crc kubenswrapper[4720]: I1013 17:27:14.814080 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-pq4c6" podStartSLOduration=3.7209494640000003 podStartE2EDuration="31.814057346s" podCreationTimestamp="2025-10-13 17:26:43 +0000 UTC" firstStartedPulling="2025-10-13 17:26:45.337787027 +0000 UTC m=+150.795037159" lastFinishedPulling="2025-10-13 17:27:13.430894889 +0000 UTC m=+178.888145041" observedRunningTime="2025-10-13 17:27:13.875320363 +0000 UTC m=+179.332570505" watchObservedRunningTime="2025-10-13 17:27:14.814057346 +0000 UTC m=+180.271307478"
Oct 13 17:27:14 crc kubenswrapper[4720]: I1013 17:27:14.814471 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6n8tc" podStartSLOduration=4.47669564 podStartE2EDuration="32.814466526s" podCreationTimestamp="2025-10-13 17:26:42 +0000 UTC" firstStartedPulling="2025-10-13 17:26:45.304960009 +0000 UTC m=+150.762210151" lastFinishedPulling="2025-10-13 17:27:13.642730895 +0000 UTC m=+179.099981037" observedRunningTime="2025-10-13 17:27:14.812585738 +0000 UTC m=+180.269835870" watchObservedRunningTime="2025-10-13 17:27:14.814466526 +0000 UTC m=+180.271716658"
Oct 13 17:27:14 crc kubenswrapper[4720]: I1013 17:27:14.839737 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-skh6q" podStartSLOduration=2.005135847 podStartE2EDuration="28.839719511s" podCreationTimestamp="2025-10-13 17:26:46 +0000 UTC" firstStartedPulling="2025-10-13 17:26:47.409288657 +0000 UTC m=+152.866538789" lastFinishedPulling="2025-10-13 17:27:14.243872321 +0000 UTC m=+179.701122453" observedRunningTime="2025-10-13 17:27:14.838205832 +0000 UTC m=+180.295455964" watchObservedRunningTime="2025-10-13 17:27:14.839719511 +0000 UTC m=+180.296969643"
Oct 13 17:27:14 crc kubenswrapper[4720]: I1013 17:27:14.860032 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-dp5pr" podStartSLOduration=2.703324012 podStartE2EDuration="29.860016339s" podCreationTimestamp="2025-10-13 17:26:45 +0000 UTC" firstStartedPulling="2025-10-13 17:26:46.380125305 +0000 UTC m=+151.837375437" lastFinishedPulling="2025-10-13 17:27:13.536817592 +0000 UTC m=+178.994067764" observedRunningTime="2025-10-13 17:27:14.856986012 +0000 UTC m=+180.314236144" watchObservedRunningTime="2025-10-13 17:27:14.860016339 +0000 UTC m=+180.317266471"
Oct 13 17:27:14 crc kubenswrapper[4720]: I1013 17:27:14.899736 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-m4v6f" podStartSLOduration=3.744355208 podStartE2EDuration="29.899716613s" podCreationTimestamp="2025-10-13 17:26:45 +0000 UTC" firstStartedPulling="2025-10-13 17:26:47.413340191 +0000 UTC m=+152.870590323" lastFinishedPulling="2025-10-13 17:27:13.568701576 +0000 UTC m=+179.025951728" observedRunningTime="2025-10-13 17:27:14.879512447 +0000 UTC m=+180.336762579" watchObservedRunningTime="2025-10-13 17:27:14.899716613 +0000 UTC m=+180.356966745"
Oct 13 17:27:14 crc kubenswrapper[4720]: I1013 17:27:14.900671 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dhtzn" podStartSLOduration=3.252794166 podStartE2EDuration="28.900663587s" podCreationTimestamp="2025-10-13 17:26:46 +0000 UTC" firstStartedPulling="2025-10-13 17:26:48.510755994 +0000 UTC m=+153.968006126" lastFinishedPulling="2025-10-13 17:27:14.158625425 +0000 UTC m=+179.615875547" observedRunningTime="2025-10-13 17:27:14.898344588 +0000 UTC m=+180.355594720" watchObservedRunningTime="2025-10-13 17:27:14.900663587 +0000 UTC m=+180.357913719"
Oct 13 17:27:15 crc kubenswrapper[4720]: I1013 17:27:15.077514 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-crcm9" podUID="24757001-c6f3-4ecc-9b71-8fa697a28390" containerName="registry-server" probeResult="failure" output=<
Oct 13 17:27:15 crc kubenswrapper[4720]: timeout: failed to connect service ":50051" within 1s
Oct 13 17:27:15 crc kubenswrapper[4720]: >
Oct 13 17:27:15 crc kubenswrapper[4720]: I1013 17:27:15.213134 4720 patch_prober.go:28] interesting pod/machine-config-daemon-htwnl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 13 17:27:15 crc kubenswrapper[4720]: I1013 17:27:15.213603 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 13 17:27:15 crc kubenswrapper[4720]: I1013 17:27:15.535276 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-dp5pr"
Oct 13 17:27:15 crc kubenswrapper[4720]: I1013 17:27:15.535330 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-dp5pr"
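[Annotation] The pod_startup_latency_tracker entries above expose their own arithmetic: podStartSLOduration is the end-to-end startup time minus the time spent pulling images (pods that never pulled, like network-metrics-daemon-c6ntg with zero-value pull timestamps, report an SLO duration equal to E2E). Checking community-operators-crcm9 against its entry above, with agreement to display-rounding precision:

    pull time = lastFinishedPulling - firstStartedPulling
              = 17:27:13.495547418 - 17:26:45.330441580 = 28.165105838s
    podStartSLOduration = podStartE2EDuration - pull time
                        = 30.856912533s - 28.165105838s = 2.691806695s   (logged: 2.691806685)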
Oct 13 17:27:15 crc kubenswrapper[4720]: I1013 17:27:15.605711 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-dp5pr"
Oct 13 17:27:15 crc kubenswrapper[4720]: I1013 17:27:15.916792 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-m4v6f"
Oct 13 17:27:15 crc kubenswrapper[4720]: I1013 17:27:15.916865 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-m4v6f"
Oct 13 17:27:15 crc kubenswrapper[4720]: I1013 17:27:15.978706 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-m4v6f"
Oct 13 17:27:16 crc kubenswrapper[4720]: I1013 17:27:16.540274 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-skh6q"
Oct 13 17:27:16 crc kubenswrapper[4720]: I1013 17:27:16.540621 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-skh6q"
Oct 13 17:27:16 crc kubenswrapper[4720]: I1013 17:27:16.906019 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dhtzn"
Oct 13 17:27:16 crc kubenswrapper[4720]: I1013 17:27:16.906266 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dhtzn"
Oct 13 17:27:16 crc kubenswrapper[4720]: I1013 17:27:16.959505 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-vhgw6"
Oct 13 17:27:17 crc kubenswrapper[4720]: I1013 17:27:17.579480 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-skh6q" podUID="cca499e4-939e-4ba7-b98b-2965e14da5c3" containerName="registry-server" probeResult="failure" output=<
Oct 13 17:27:17 crc kubenswrapper[4720]: timeout: failed to connect service ":50051" within 1s
Oct 13 17:27:17 crc kubenswrapper[4720]: >
Oct 13 17:27:17 crc kubenswrapper[4720]: I1013 17:27:17.958762 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dhtzn" podUID="0a4310d4-4aaa-4de5-be61-46eb59ac5d9b" containerName="registry-server" probeResult="failure" output=<
Oct 13 17:27:17 crc kubenswrapper[4720]: timeout: failed to connect service ":50051" within 1s
Oct 13 17:27:17 crc kubenswrapper[4720]: >
Oct 13 17:27:23 crc kubenswrapper[4720]: I1013 17:27:23.299950 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 13 17:27:23 crc kubenswrapper[4720]: I1013 17:27:23.322674 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6n8tc"
Oct 13 17:27:23 crc kubenswrapper[4720]: I1013 17:27:23.323760 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6n8tc"
Oct 13 17:27:23 crc kubenswrapper[4720]: I1013 17:27:23.392471 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6n8tc"
Oct 13 17:27:23 crc kubenswrapper[4720]: I1013 17:27:23.781487 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-pq4c6"
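[Annotation] The registry-server startup probes failing above with "timeout: failed to connect service \":50051\" within 1s" are health checks against the catalog pod's gRPC port. A Go sketch of the same kind of check via the standard gRPC health-checking protocol follows; the address and the assumption that the server implements grpc.health.v1 are mine, not stated in the log.

    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        "google.golang.org/grpc/health/grpc_health_v1"
    )

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), time.Second)
        defer cancel()
        // WithBlock makes the 1s context cover connection establishment,
        // mirroring the probe's deadline.
        conn, err := grpc.DialContext(ctx, "localhost:50051",
            grpc.WithTransportCredentials(insecure.NewCredentials()), grpc.WithBlock())
        if err != nil {
            fmt.Printf("timeout: failed to connect service %q within 1s\n", ":50051")
            return
        }
        defer conn.Close()
        resp, err := grpc_health_v1.NewHealthClient(conn).Check(ctx,
            &grpc_health_v1.HealthCheckRequest{Service: ""})
        if err != nil {
            fmt.Println("health check failed:", err)
            return
        }
        fmt.Println("status:", resp.GetStatus()) // SERVING once the registry is up
    }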
Oct 13 17:27:23 crc kubenswrapper[4720]: I1013 17:27:23.782767 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-pq4c6"
Oct 13 17:27:23 crc kubenswrapper[4720]: I1013 17:27:23.838523 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-pq4c6"
Oct 13 17:27:23 crc kubenswrapper[4720]: I1013 17:27:23.924353 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-pq4c6"
Oct 13 17:27:23 crc kubenswrapper[4720]: I1013 17:27:23.936520 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6n8tc"
Oct 13 17:27:24 crc kubenswrapper[4720]: I1013 17:27:24.019152 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-crcm9"
Oct 13 17:27:24 crc kubenswrapper[4720]: I1013 17:27:24.074742 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-crcm9"
Oct 13 17:27:25 crc kubenswrapper[4720]: I1013 17:27:25.029042 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pq4c6"]
Oct 13 17:27:25 crc kubenswrapper[4720]: I1013 17:27:25.583125 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-dp5pr"
Oct 13 17:27:25 crc kubenswrapper[4720]: I1013 17:27:25.982087 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-m4v6f"
Oct 13 17:27:26 crc kubenswrapper[4720]: I1013 17:27:26.436073 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-crcm9"]
Oct 13 17:27:26 crc kubenswrapper[4720]: I1013 17:27:26.436858 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-crcm9" podUID="24757001-c6f3-4ecc-9b71-8fa697a28390" containerName="registry-server" containerID="cri-o://99604ef1ec0076fd63a03010dc24da695520c71f931a1a737353e42a5a594750" gracePeriod=2
Oct 13 17:27:26 crc kubenswrapper[4720]: I1013 17:27:26.590319 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-skh6q"
Oct 13 17:27:26 crc kubenswrapper[4720]: I1013 17:27:26.641534 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-skh6q"
Oct 13 17:27:26 crc kubenswrapper[4720]: I1013 17:27:26.893372 4720 generic.go:334] "Generic (PLEG): container finished" podID="24757001-c6f3-4ecc-9b71-8fa697a28390" containerID="99604ef1ec0076fd63a03010dc24da695520c71f931a1a737353e42a5a594750" exitCode=0
Oct 13 17:27:26 crc kubenswrapper[4720]: I1013 17:27:26.893433 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-crcm9" event={"ID":"24757001-c6f3-4ecc-9b71-8fa697a28390","Type":"ContainerDied","Data":"99604ef1ec0076fd63a03010dc24da695520c71f931a1a737353e42a5a594750"}
Oct 13 17:27:26 crc kubenswrapper[4720]: I1013 17:27:26.893775 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-crcm9" event={"ID":"24757001-c6f3-4ecc-9b71-8fa697a28390","Type":"ContainerDied","Data":"ac49c6ee19160ea3d2bfe32911faa92c1362dc920a34793be2e56ebb8238f368"}
Oct 13 17:27:26 crc kubenswrapper[4720]: I1013 17:27:26.893850 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac49c6ee19160ea3d2bfe32911faa92c1362dc920a34793be2e56ebb8238f368"
Oct 13 17:27:26 crc kubenswrapper[4720]: I1013 17:27:26.893896 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-pq4c6" podUID="9a30ffbf-3240-409c-8c06-d9e12a160aab" containerName="registry-server" containerID="cri-o://ca8dede282c99d75e25442624105964c832dc53a8df4f96d5633b227b22af288" gracePeriod=2
Oct 13 17:27:26 crc kubenswrapper[4720]: I1013 17:27:26.915579 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-crcm9"
Oct 13 17:27:26 crc kubenswrapper[4720]: I1013 17:27:26.982099 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dhtzn"
Oct 13 17:27:27 crc kubenswrapper[4720]: I1013 17:27:27.041846 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dhtzn"
Oct 13 17:27:27 crc kubenswrapper[4720]: I1013 17:27:27.108119 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24757001-c6f3-4ecc-9b71-8fa697a28390-catalog-content\") pod \"24757001-c6f3-4ecc-9b71-8fa697a28390\" (UID: \"24757001-c6f3-4ecc-9b71-8fa697a28390\") "
Oct 13 17:27:27 crc kubenswrapper[4720]: I1013 17:27:27.108182 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b299d\" (UniqueName: \"kubernetes.io/projected/24757001-c6f3-4ecc-9b71-8fa697a28390-kube-api-access-b299d\") pod \"24757001-c6f3-4ecc-9b71-8fa697a28390\" (UID: \"24757001-c6f3-4ecc-9b71-8fa697a28390\") "
Oct 13 17:27:27 crc kubenswrapper[4720]: I1013 17:27:27.108506 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24757001-c6f3-4ecc-9b71-8fa697a28390-utilities\") pod \"24757001-c6f3-4ecc-9b71-8fa697a28390\" (UID: \"24757001-c6f3-4ecc-9b71-8fa697a28390\") "
Oct 13 17:27:27 crc kubenswrapper[4720]: I1013 17:27:27.109789 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24757001-c6f3-4ecc-9b71-8fa697a28390-utilities" (OuterVolumeSpecName: "utilities") pod "24757001-c6f3-4ecc-9b71-8fa697a28390" (UID: "24757001-c6f3-4ecc-9b71-8fa697a28390"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 13 17:27:27 crc kubenswrapper[4720]: I1013 17:27:27.121697 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24757001-c6f3-4ecc-9b71-8fa697a28390-kube-api-access-b299d" (OuterVolumeSpecName: "kube-api-access-b299d") pod "24757001-c6f3-4ecc-9b71-8fa697a28390" (UID: "24757001-c6f3-4ecc-9b71-8fa697a28390"). InnerVolumeSpecName "kube-api-access-b299d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 13 17:27:27 crc kubenswrapper[4720]: I1013 17:27:27.174775 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24757001-c6f3-4ecc-9b71-8fa697a28390-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "24757001-c6f3-4ecc-9b71-8fa697a28390" (UID: "24757001-c6f3-4ecc-9b71-8fa697a28390"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 13 17:27:27 crc kubenswrapper[4720]: I1013 17:27:27.211326 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24757001-c6f3-4ecc-9b71-8fa697a28390-utilities\") on node \"crc\" DevicePath \"\""
Oct 13 17:27:27 crc kubenswrapper[4720]: I1013 17:27:27.211367 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24757001-c6f3-4ecc-9b71-8fa697a28390-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 13 17:27:27 crc kubenswrapper[4720]: I1013 17:27:27.211382 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b299d\" (UniqueName: \"kubernetes.io/projected/24757001-c6f3-4ecc-9b71-8fa697a28390-kube-api-access-b299d\") on node \"crc\" DevicePath \"\""
Oct 13 17:27:27 crc kubenswrapper[4720]: I1013 17:27:27.291714 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pq4c6"
Oct 13 17:27:27 crc kubenswrapper[4720]: I1013 17:27:27.414098 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a30ffbf-3240-409c-8c06-d9e12a160aab-catalog-content\") pod \"9a30ffbf-3240-409c-8c06-d9e12a160aab\" (UID: \"9a30ffbf-3240-409c-8c06-d9e12a160aab\") "
Oct 13 17:27:27 crc kubenswrapper[4720]: I1013 17:27:27.414264 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a30ffbf-3240-409c-8c06-d9e12a160aab-utilities\") pod \"9a30ffbf-3240-409c-8c06-d9e12a160aab\" (UID: \"9a30ffbf-3240-409c-8c06-d9e12a160aab\") "
Oct 13 17:27:27 crc kubenswrapper[4720]: I1013 17:27:27.414369 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwsqz\" (UniqueName: \"kubernetes.io/projected/9a30ffbf-3240-409c-8c06-d9e12a160aab-kube-api-access-hwsqz\") pod \"9a30ffbf-3240-409c-8c06-d9e12a160aab\" (UID: \"9a30ffbf-3240-409c-8c06-d9e12a160aab\") "
Oct 13 17:27:27 crc kubenswrapper[4720]: I1013 17:27:27.415265 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a30ffbf-3240-409c-8c06-d9e12a160aab-utilities" (OuterVolumeSpecName: "utilities") pod "9a30ffbf-3240-409c-8c06-d9e12a160aab" (UID: "9a30ffbf-3240-409c-8c06-d9e12a160aab"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 13 17:27:27 crc kubenswrapper[4720]: I1013 17:27:27.418165 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a30ffbf-3240-409c-8c06-d9e12a160aab-kube-api-access-hwsqz" (OuterVolumeSpecName: "kube-api-access-hwsqz") pod "9a30ffbf-3240-409c-8c06-d9e12a160aab" (UID: "9a30ffbf-3240-409c-8c06-d9e12a160aab"). InnerVolumeSpecName "kube-api-access-hwsqz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 13 17:27:27 crc kubenswrapper[4720]: I1013 17:27:27.516560 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwsqz\" (UniqueName: \"kubernetes.io/projected/9a30ffbf-3240-409c-8c06-d9e12a160aab-kube-api-access-hwsqz\") on node \"crc\" DevicePath \"\""
Oct 13 17:27:27 crc kubenswrapper[4720]: I1013 17:27:27.516597 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a30ffbf-3240-409c-8c06-d9e12a160aab-utilities\") on node \"crc\" DevicePath \"\""
Oct 13 17:27:27 crc kubenswrapper[4720]: I1013 17:27:27.526910 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a30ffbf-3240-409c-8c06-d9e12a160aab-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9a30ffbf-3240-409c-8c06-d9e12a160aab" (UID: "9a30ffbf-3240-409c-8c06-d9e12a160aab"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 13 17:27:27 crc kubenswrapper[4720]: I1013 17:27:27.617813 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a30ffbf-3240-409c-8c06-d9e12a160aab-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 13 17:27:27 crc kubenswrapper[4720]: I1013 17:27:27.903181 4720 generic.go:334] "Generic (PLEG): container finished" podID="9a30ffbf-3240-409c-8c06-d9e12a160aab" containerID="ca8dede282c99d75e25442624105964c832dc53a8df4f96d5633b227b22af288" exitCode=0
Oct 13 17:27:27 crc kubenswrapper[4720]: I1013 17:27:27.903240 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pq4c6" event={"ID":"9a30ffbf-3240-409c-8c06-d9e12a160aab","Type":"ContainerDied","Data":"ca8dede282c99d75e25442624105964c832dc53a8df4f96d5633b227b22af288"}
Oct 13 17:27:27 crc kubenswrapper[4720]: I1013 17:27:27.903273 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pq4c6"
Oct 13 17:27:27 crc kubenswrapper[4720]: I1013 17:27:27.903612 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pq4c6" event={"ID":"9a30ffbf-3240-409c-8c06-d9e12a160aab","Type":"ContainerDied","Data":"45090cf5200de1a3df5bfe4405180953dfba66f15e2ea4a8f94d56fd8148e62c"}
Oct 13 17:27:27 crc kubenswrapper[4720]: I1013 17:27:27.903642 4720 scope.go:117] "RemoveContainer" containerID="ca8dede282c99d75e25442624105964c832dc53a8df4f96d5633b227b22af288"
Oct 13 17:27:27 crc kubenswrapper[4720]: I1013 17:27:27.903686 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-crcm9"
Oct 13 17:27:27 crc kubenswrapper[4720]: I1013 17:27:27.925434 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-crcm9"]
Oct 13 17:27:27 crc kubenswrapper[4720]: I1013 17:27:27.928799 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-crcm9"]
Oct 13 17:27:27 crc kubenswrapper[4720]: I1013 17:27:27.930285 4720 scope.go:117] "RemoveContainer" containerID="826e4a20690bcb0f0aa0f6c4e56d2111d0b65da0031b2d4b628c43d61f0d1eba"
Oct 13 17:27:27 crc kubenswrapper[4720]: I1013 17:27:27.937670 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pq4c6"]
Oct 13 17:27:27 crc kubenswrapper[4720]: I1013 17:27:27.946920 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-pq4c6"]
Oct 13 17:27:27 crc kubenswrapper[4720]: I1013 17:27:27.951926 4720 scope.go:117] "RemoveContainer" containerID="a51c7e462fe1dd873716ac507106e24d31897359723edf335099b4ad91d27e96"
Oct 13 17:27:27 crc kubenswrapper[4720]: I1013 17:27:27.997757 4720 scope.go:117] "RemoveContainer" containerID="ca8dede282c99d75e25442624105964c832dc53a8df4f96d5633b227b22af288"
Oct 13 17:27:27 crc kubenswrapper[4720]: E1013 17:27:27.998128 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca8dede282c99d75e25442624105964c832dc53a8df4f96d5633b227b22af288\": container with ID starting with ca8dede282c99d75e25442624105964c832dc53a8df4f96d5633b227b22af288 not found: ID does not exist" containerID="ca8dede282c99d75e25442624105964c832dc53a8df4f96d5633b227b22af288"
Oct 13 17:27:27 crc kubenswrapper[4720]: I1013 17:27:27.998190 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca8dede282c99d75e25442624105964c832dc53a8df4f96d5633b227b22af288"} err="failed to get container status \"ca8dede282c99d75e25442624105964c832dc53a8df4f96d5633b227b22af288\": rpc error: code = NotFound desc = could not find container \"ca8dede282c99d75e25442624105964c832dc53a8df4f96d5633b227b22af288\": container with ID starting with ca8dede282c99d75e25442624105964c832dc53a8df4f96d5633b227b22af288 not found: ID does not exist"
Oct 13 17:27:27 crc kubenswrapper[4720]: I1013 17:27:27.998254 4720 scope.go:117] "RemoveContainer" containerID="826e4a20690bcb0f0aa0f6c4e56d2111d0b65da0031b2d4b628c43d61f0d1eba"
Oct 13 17:27:27 crc kubenswrapper[4720]: E1013 17:27:27.998531 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"826e4a20690bcb0f0aa0f6c4e56d2111d0b65da0031b2d4b628c43d61f0d1eba\": container with ID starting with 826e4a20690bcb0f0aa0f6c4e56d2111d0b65da0031b2d4b628c43d61f0d1eba not found: ID does not exist" containerID="826e4a20690bcb0f0aa0f6c4e56d2111d0b65da0031b2d4b628c43d61f0d1eba"
Oct 13 17:27:27 crc kubenswrapper[4720]: I1013 17:27:27.998567 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"826e4a20690bcb0f0aa0f6c4e56d2111d0b65da0031b2d4b628c43d61f0d1eba"} err="failed to get container status \"826e4a20690bcb0f0aa0f6c4e56d2111d0b65da0031b2d4b628c43d61f0d1eba\": rpc error: code = NotFound desc = could not find container \"826e4a20690bcb0f0aa0f6c4e56d2111d0b65da0031b2d4b628c43d61f0d1eba\": container with ID starting with 826e4a20690bcb0f0aa0f6c4e56d2111d0b65da0031b2d4b628c43d61f0d1eba not found: ID does not exist"
Oct 13 17:27:27 crc kubenswrapper[4720]: I1013 17:27:27.998591 4720 scope.go:117] "RemoveContainer" containerID="a51c7e462fe1dd873716ac507106e24d31897359723edf335099b4ad91d27e96"
Oct 13 17:27:27 crc kubenswrapper[4720]: E1013 17:27:27.998799 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a51c7e462fe1dd873716ac507106e24d31897359723edf335099b4ad91d27e96\": container with ID starting with a51c7e462fe1dd873716ac507106e24d31897359723edf335099b4ad91d27e96 not found: ID does not exist" containerID="a51c7e462fe1dd873716ac507106e24d31897359723edf335099b4ad91d27e96"
Oct 13 17:27:27 crc kubenswrapper[4720]: I1013 17:27:27.998828 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a51c7e462fe1dd873716ac507106e24d31897359723edf335099b4ad91d27e96"} err="failed to get container status \"a51c7e462fe1dd873716ac507106e24d31897359723edf335099b4ad91d27e96\": rpc error: code = NotFound desc = could not find container \"a51c7e462fe1dd873716ac507106e24d31897359723edf335099b4ad91d27e96\": container with ID starting with a51c7e462fe1dd873716ac507106e24d31897359723edf335099b4ad91d27e96 not found: ID does not exist"
Oct 13 17:27:28 crc kubenswrapper[4720]: I1013 17:27:28.831725 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m4v6f"]
Oct 13 17:27:28 crc kubenswrapper[4720]: I1013 17:27:28.833907 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-m4v6f" podUID="949931eb-c57d-40eb-a3da-f19524cbaf1d" containerName="registry-server" containerID="cri-o://5752192cb52959d19cf9900b9b55c52eaab072cbc787fed587677d75efcb1f4f" gracePeriod=2
Oct 13 17:27:28 crc kubenswrapper[4720]: I1013 17:27:28.912450 4720 generic.go:334] "Generic (PLEG): container finished" podID="4ec46228-40b7-4dd0-b773-59e2b088ef17" containerID="4bca547a8c401420848cad24c228e6afa1c703ff12323a5cf36959502b5072d1" exitCode=0
Oct 13 17:27:28 crc kubenswrapper[4720]: I1013 17:27:28.912495 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-85tbj" event={"ID":"4ec46228-40b7-4dd0-b773-59e2b088ef17","Type":"ContainerDied","Data":"4bca547a8c401420848cad24c228e6afa1c703ff12323a5cf36959502b5072d1"}
Oct 13 17:27:29 crc kubenswrapper[4720]: I1013 17:27:29.186818 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24757001-c6f3-4ecc-9b71-8fa697a28390" path="/var/lib/kubelet/pods/24757001-c6f3-4ecc-9b71-8fa697a28390/volumes"
Oct 13 17:27:29 crc kubenswrapper[4720]: I1013 17:27:29.187637 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a30ffbf-3240-409c-8c06-d9e12a160aab" path="/var/lib/kubelet/pods/9a30ffbf-3240-409c-8c06-d9e12a160aab/volumes"
Oct 13 17:27:29 crc kubenswrapper[4720]: I1013 17:27:29.487297 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m4v6f"
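[Annotation] The NotFound errors above are the benign tail of container cleanup: RemoveContainer races with the runtime's own garbage collection, and a container that is already gone cannot be status-checked, so the deletion goal is already met. A hypothetical Go helper showing that idempotent treatment; removeContainer is a stand-in, while status and codes are the real gRPC packages.

    package main

    import (
        "fmt"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // ensureRemoved treats NotFound from the runtime as success: the
    // container is already gone, so there is nothing left to delete.
    func ensureRemoved(id string, removeContainer func(string) error) error {
        if err := removeContainer(id); err != nil && status.Code(err) != codes.NotFound {
            return err
        }
        return nil
    }

    func main() {
        alreadyGone := func(id string) error {
            return status.Errorf(codes.NotFound, "could not find container %q", id)
        }
        fmt.Println(ensureRemoved("ca8dede282c9", alreadyGone)) // <nil>
    }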
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m4v6f" Oct 13 17:27:29 crc kubenswrapper[4720]: I1013 17:27:29.644511 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/949931eb-c57d-40eb-a3da-f19524cbaf1d-utilities\") pod \"949931eb-c57d-40eb-a3da-f19524cbaf1d\" (UID: \"949931eb-c57d-40eb-a3da-f19524cbaf1d\") " Oct 13 17:27:29 crc kubenswrapper[4720]: I1013 17:27:29.644974 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7qmjw\" (UniqueName: \"kubernetes.io/projected/949931eb-c57d-40eb-a3da-f19524cbaf1d-kube-api-access-7qmjw\") pod \"949931eb-c57d-40eb-a3da-f19524cbaf1d\" (UID: \"949931eb-c57d-40eb-a3da-f19524cbaf1d\") " Oct 13 17:27:29 crc kubenswrapper[4720]: I1013 17:27:29.645004 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/949931eb-c57d-40eb-a3da-f19524cbaf1d-catalog-content\") pod \"949931eb-c57d-40eb-a3da-f19524cbaf1d\" (UID: \"949931eb-c57d-40eb-a3da-f19524cbaf1d\") " Oct 13 17:27:29 crc kubenswrapper[4720]: I1013 17:27:29.645593 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/949931eb-c57d-40eb-a3da-f19524cbaf1d-utilities" (OuterVolumeSpecName: "utilities") pod "949931eb-c57d-40eb-a3da-f19524cbaf1d" (UID: "949931eb-c57d-40eb-a3da-f19524cbaf1d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 17:27:29 crc kubenswrapper[4720]: I1013 17:27:29.650213 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/949931eb-c57d-40eb-a3da-f19524cbaf1d-kube-api-access-7qmjw" (OuterVolumeSpecName: "kube-api-access-7qmjw") pod "949931eb-c57d-40eb-a3da-f19524cbaf1d" (UID: "949931eb-c57d-40eb-a3da-f19524cbaf1d"). InnerVolumeSpecName "kube-api-access-7qmjw". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:27:29 crc kubenswrapper[4720]: I1013 17:27:29.663706 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/949931eb-c57d-40eb-a3da-f19524cbaf1d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "949931eb-c57d-40eb-a3da-f19524cbaf1d" (UID: "949931eb-c57d-40eb-a3da-f19524cbaf1d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 17:27:29 crc kubenswrapper[4720]: I1013 17:27:29.746841 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/949931eb-c57d-40eb-a3da-f19524cbaf1d-utilities\") on node \"crc\" DevicePath \"\"" Oct 13 17:27:29 crc kubenswrapper[4720]: I1013 17:27:29.746879 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7qmjw\" (UniqueName: \"kubernetes.io/projected/949931eb-c57d-40eb-a3da-f19524cbaf1d-kube-api-access-7qmjw\") on node \"crc\" DevicePath \"\"" Oct 13 17:27:29 crc kubenswrapper[4720]: I1013 17:27:29.746894 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/949931eb-c57d-40eb-a3da-f19524cbaf1d-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 13 17:27:29 crc kubenswrapper[4720]: I1013 17:27:29.920701 4720 generic.go:334] "Generic (PLEG): container finished" podID="949931eb-c57d-40eb-a3da-f19524cbaf1d" containerID="5752192cb52959d19cf9900b9b55c52eaab072cbc787fed587677d75efcb1f4f" exitCode=0 Oct 13 17:27:29 crc kubenswrapper[4720]: I1013 17:27:29.920762 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m4v6f" event={"ID":"949931eb-c57d-40eb-a3da-f19524cbaf1d","Type":"ContainerDied","Data":"5752192cb52959d19cf9900b9b55c52eaab072cbc787fed587677d75efcb1f4f"} Oct 13 17:27:29 crc kubenswrapper[4720]: I1013 17:27:29.920803 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m4v6f" event={"ID":"949931eb-c57d-40eb-a3da-f19524cbaf1d","Type":"ContainerDied","Data":"3f63e8c43f000b858c98287a4b093ce64e949847eec0d32ff2cc77663b52aedc"} Oct 13 17:27:29 crc kubenswrapper[4720]: I1013 17:27:29.920833 4720 scope.go:117] "RemoveContainer" containerID="5752192cb52959d19cf9900b9b55c52eaab072cbc787fed587677d75efcb1f4f" Oct 13 17:27:29 crc kubenswrapper[4720]: I1013 17:27:29.920989 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m4v6f" Oct 13 17:27:29 crc kubenswrapper[4720]: I1013 17:27:29.938010 4720 scope.go:117] "RemoveContainer" containerID="a3443b760cb93019e2d405d6f1624a50aa2e8f3d8c1d61ee149cde6561b0ed83" Oct 13 17:27:29 crc kubenswrapper[4720]: I1013 17:27:29.957669 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m4v6f"] Oct 13 17:27:29 crc kubenswrapper[4720]: I1013 17:27:29.963829 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-m4v6f"] Oct 13 17:27:29 crc kubenswrapper[4720]: I1013 17:27:29.978154 4720 scope.go:117] "RemoveContainer" containerID="531b0abe3f21349d4b9693d45261ebdfd914212789dfb75b01246c97c8b83879" Oct 13 17:27:29 crc kubenswrapper[4720]: I1013 17:27:29.997790 4720 scope.go:117] "RemoveContainer" containerID="5752192cb52959d19cf9900b9b55c52eaab072cbc787fed587677d75efcb1f4f" Oct 13 17:27:29 crc kubenswrapper[4720]: E1013 17:27:29.998402 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5752192cb52959d19cf9900b9b55c52eaab072cbc787fed587677d75efcb1f4f\": container with ID starting with 5752192cb52959d19cf9900b9b55c52eaab072cbc787fed587677d75efcb1f4f not found: ID does not exist" containerID="5752192cb52959d19cf9900b9b55c52eaab072cbc787fed587677d75efcb1f4f" Oct 13 17:27:29 crc kubenswrapper[4720]: I1013 17:27:29.998481 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5752192cb52959d19cf9900b9b55c52eaab072cbc787fed587677d75efcb1f4f"} err="failed to get container status \"5752192cb52959d19cf9900b9b55c52eaab072cbc787fed587677d75efcb1f4f\": rpc error: code = NotFound desc = could not find container \"5752192cb52959d19cf9900b9b55c52eaab072cbc787fed587677d75efcb1f4f\": container with ID starting with 5752192cb52959d19cf9900b9b55c52eaab072cbc787fed587677d75efcb1f4f not found: ID does not exist" Oct 13 17:27:29 crc kubenswrapper[4720]: I1013 17:27:29.998547 4720 scope.go:117] "RemoveContainer" containerID="a3443b760cb93019e2d405d6f1624a50aa2e8f3d8c1d61ee149cde6561b0ed83" Oct 13 17:27:30 crc kubenswrapper[4720]: E1013 17:27:29.999596 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3443b760cb93019e2d405d6f1624a50aa2e8f3d8c1d61ee149cde6561b0ed83\": container with ID starting with a3443b760cb93019e2d405d6f1624a50aa2e8f3d8c1d61ee149cde6561b0ed83 not found: ID does not exist" containerID="a3443b760cb93019e2d405d6f1624a50aa2e8f3d8c1d61ee149cde6561b0ed83" Oct 13 17:27:30 crc kubenswrapper[4720]: I1013 17:27:29.999830 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3443b760cb93019e2d405d6f1624a50aa2e8f3d8c1d61ee149cde6561b0ed83"} err="failed to get container status \"a3443b760cb93019e2d405d6f1624a50aa2e8f3d8c1d61ee149cde6561b0ed83\": rpc error: code = NotFound desc = could not find container \"a3443b760cb93019e2d405d6f1624a50aa2e8f3d8c1d61ee149cde6561b0ed83\": container with ID starting with a3443b760cb93019e2d405d6f1624a50aa2e8f3d8c1d61ee149cde6561b0ed83 not found: ID does not exist" Oct 13 17:27:30 crc kubenswrapper[4720]: I1013 17:27:29.999868 4720 scope.go:117] "RemoveContainer" containerID="531b0abe3f21349d4b9693d45261ebdfd914212789dfb75b01246c97c8b83879" Oct 13 17:27:30 crc kubenswrapper[4720]: E1013 17:27:30.000363 4720 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"531b0abe3f21349d4b9693d45261ebdfd914212789dfb75b01246c97c8b83879\": container with ID starting with 531b0abe3f21349d4b9693d45261ebdfd914212789dfb75b01246c97c8b83879 not found: ID does not exist" containerID="531b0abe3f21349d4b9693d45261ebdfd914212789dfb75b01246c97c8b83879" Oct 13 17:27:30 crc kubenswrapper[4720]: I1013 17:27:30.000406 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"531b0abe3f21349d4b9693d45261ebdfd914212789dfb75b01246c97c8b83879"} err="failed to get container status \"531b0abe3f21349d4b9693d45261ebdfd914212789dfb75b01246c97c8b83879\": rpc error: code = NotFound desc = could not find container \"531b0abe3f21349d4b9693d45261ebdfd914212789dfb75b01246c97c8b83879\": container with ID starting with 531b0abe3f21349d4b9693d45261ebdfd914212789dfb75b01246c97c8b83879 not found: ID does not exist" Oct 13 17:27:31 crc kubenswrapper[4720]: I1013 17:27:31.176384 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="949931eb-c57d-40eb-a3da-f19524cbaf1d" path="/var/lib/kubelet/pods/949931eb-c57d-40eb-a3da-f19524cbaf1d/volumes" Oct 13 17:27:31 crc kubenswrapper[4720]: I1013 17:27:31.238280 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dhtzn"] Oct 13 17:27:31 crc kubenswrapper[4720]: I1013 17:27:31.238622 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dhtzn" podUID="0a4310d4-4aaa-4de5-be61-46eb59ac5d9b" containerName="registry-server" containerID="cri-o://1fb878281b00ca7fd1521fb0ce72dcd352ecf858365d6f08fc096ede95bc4a0e" gracePeriod=2 Oct 13 17:27:31 crc kubenswrapper[4720]: I1013 17:27:31.601483 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dhtzn" Oct 13 17:27:31 crc kubenswrapper[4720]: I1013 17:27:31.779476 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a4310d4-4aaa-4de5-be61-46eb59ac5d9b-catalog-content\") pod \"0a4310d4-4aaa-4de5-be61-46eb59ac5d9b\" (UID: \"0a4310d4-4aaa-4de5-be61-46eb59ac5d9b\") " Oct 13 17:27:31 crc kubenswrapper[4720]: I1013 17:27:31.779524 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a4310d4-4aaa-4de5-be61-46eb59ac5d9b-utilities\") pod \"0a4310d4-4aaa-4de5-be61-46eb59ac5d9b\" (UID: \"0a4310d4-4aaa-4de5-be61-46eb59ac5d9b\") " Oct 13 17:27:31 crc kubenswrapper[4720]: I1013 17:27:31.779573 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nd9cr\" (UniqueName: \"kubernetes.io/projected/0a4310d4-4aaa-4de5-be61-46eb59ac5d9b-kube-api-access-nd9cr\") pod \"0a4310d4-4aaa-4de5-be61-46eb59ac5d9b\" (UID: \"0a4310d4-4aaa-4de5-be61-46eb59ac5d9b\") " Oct 13 17:27:31 crc kubenswrapper[4720]: I1013 17:27:31.783239 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a4310d4-4aaa-4de5-be61-46eb59ac5d9b-utilities" (OuterVolumeSpecName: "utilities") pod "0a4310d4-4aaa-4de5-be61-46eb59ac5d9b" (UID: "0a4310d4-4aaa-4de5-be61-46eb59ac5d9b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 17:27:31 crc kubenswrapper[4720]: I1013 17:27:31.787305 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a4310d4-4aaa-4de5-be61-46eb59ac5d9b-kube-api-access-nd9cr" (OuterVolumeSpecName: "kube-api-access-nd9cr") pod "0a4310d4-4aaa-4de5-be61-46eb59ac5d9b" (UID: "0a4310d4-4aaa-4de5-be61-46eb59ac5d9b"). InnerVolumeSpecName "kube-api-access-nd9cr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:27:31 crc kubenswrapper[4720]: I1013 17:27:31.858661 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a4310d4-4aaa-4de5-be61-46eb59ac5d9b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0a4310d4-4aaa-4de5-be61-46eb59ac5d9b" (UID: "0a4310d4-4aaa-4de5-be61-46eb59ac5d9b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 17:27:31 crc kubenswrapper[4720]: I1013 17:27:31.881387 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nd9cr\" (UniqueName: \"kubernetes.io/projected/0a4310d4-4aaa-4de5-be61-46eb59ac5d9b-kube-api-access-nd9cr\") on node \"crc\" DevicePath \"\"" Oct 13 17:27:31 crc kubenswrapper[4720]: I1013 17:27:31.881452 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a4310d4-4aaa-4de5-be61-46eb59ac5d9b-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 13 17:27:31 crc kubenswrapper[4720]: I1013 17:27:31.881465 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a4310d4-4aaa-4de5-be61-46eb59ac5d9b-utilities\") on node \"crc\" DevicePath \"\"" Oct 13 17:27:31 crc kubenswrapper[4720]: I1013 17:27:31.939280 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-85tbj" event={"ID":"4ec46228-40b7-4dd0-b773-59e2b088ef17","Type":"ContainerStarted","Data":"c98c132ce99eded322c8469d1679988dbcf6133d7958fe46a0a0873132c6d7f1"} Oct 13 17:27:31 crc kubenswrapper[4720]: I1013 17:27:31.943328 4720 generic.go:334] "Generic (PLEG): container finished" podID="0a4310d4-4aaa-4de5-be61-46eb59ac5d9b" containerID="1fb878281b00ca7fd1521fb0ce72dcd352ecf858365d6f08fc096ede95bc4a0e" exitCode=0 Oct 13 17:27:31 crc kubenswrapper[4720]: I1013 17:27:31.943352 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dhtzn" event={"ID":"0a4310d4-4aaa-4de5-be61-46eb59ac5d9b","Type":"ContainerDied","Data":"1fb878281b00ca7fd1521fb0ce72dcd352ecf858365d6f08fc096ede95bc4a0e"} Oct 13 17:27:31 crc kubenswrapper[4720]: I1013 17:27:31.943368 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dhtzn" event={"ID":"0a4310d4-4aaa-4de5-be61-46eb59ac5d9b","Type":"ContainerDied","Data":"0e5eff35fd869e190f671e7ba6f4e85d4a110fdf81146f096efb473701cf9e7d"} Oct 13 17:27:31 crc kubenswrapper[4720]: I1013 17:27:31.943385 4720 scope.go:117] "RemoveContainer" containerID="1fb878281b00ca7fd1521fb0ce72dcd352ecf858365d6f08fc096ede95bc4a0e" Oct 13 17:27:31 crc kubenswrapper[4720]: I1013 17:27:31.943391 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dhtzn" Oct 13 17:27:31 crc kubenswrapper[4720]: I1013 17:27:31.958066 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-85tbj" podStartSLOduration=3.172330511 podStartE2EDuration="48.95804978s" podCreationTimestamp="2025-10-13 17:26:43 +0000 UTC" firstStartedPulling="2025-10-13 17:26:45.336033452 +0000 UTC m=+150.793283574" lastFinishedPulling="2025-10-13 17:27:31.121752711 +0000 UTC m=+196.579002843" observedRunningTime="2025-10-13 17:27:31.955445133 +0000 UTC m=+197.412695265" watchObservedRunningTime="2025-10-13 17:27:31.95804978 +0000 UTC m=+197.415299912" Oct 13 17:27:31 crc kubenswrapper[4720]: I1013 17:27:31.980545 4720 scope.go:117] "RemoveContainer" containerID="e379b0e23442bcc813fa938f822bff4a2eae3928bd687ed061587cc5620a850d" Oct 13 17:27:31 crc kubenswrapper[4720]: I1013 17:27:31.985649 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dhtzn"] Oct 13 17:27:31 crc kubenswrapper[4720]: I1013 17:27:31.987544 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dhtzn"] Oct 13 17:27:32 crc kubenswrapper[4720]: I1013 17:27:32.012781 4720 scope.go:117] "RemoveContainer" containerID="658f335d1cbc20b87cd50d7925a4e563accd9093e8a36ee90c84a2bd0052fe20" Oct 13 17:27:32 crc kubenswrapper[4720]: I1013 17:27:32.027503 4720 scope.go:117] "RemoveContainer" containerID="1fb878281b00ca7fd1521fb0ce72dcd352ecf858365d6f08fc096ede95bc4a0e" Oct 13 17:27:32 crc kubenswrapper[4720]: E1013 17:27:32.027843 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1fb878281b00ca7fd1521fb0ce72dcd352ecf858365d6f08fc096ede95bc4a0e\": container with ID starting with 1fb878281b00ca7fd1521fb0ce72dcd352ecf858365d6f08fc096ede95bc4a0e not found: ID does not exist" containerID="1fb878281b00ca7fd1521fb0ce72dcd352ecf858365d6f08fc096ede95bc4a0e" Oct 13 17:27:32 crc kubenswrapper[4720]: I1013 17:27:32.027874 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1fb878281b00ca7fd1521fb0ce72dcd352ecf858365d6f08fc096ede95bc4a0e"} err="failed to get container status \"1fb878281b00ca7fd1521fb0ce72dcd352ecf858365d6f08fc096ede95bc4a0e\": rpc error: code = NotFound desc = could not find container \"1fb878281b00ca7fd1521fb0ce72dcd352ecf858365d6f08fc096ede95bc4a0e\": container with ID starting with 1fb878281b00ca7fd1521fb0ce72dcd352ecf858365d6f08fc096ede95bc4a0e not found: ID does not exist" Oct 13 17:27:32 crc kubenswrapper[4720]: I1013 17:27:32.027897 4720 scope.go:117] "RemoveContainer" containerID="e379b0e23442bcc813fa938f822bff4a2eae3928bd687ed061587cc5620a850d" Oct 13 17:27:32 crc kubenswrapper[4720]: E1013 17:27:32.028253 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e379b0e23442bcc813fa938f822bff4a2eae3928bd687ed061587cc5620a850d\": container with ID starting with e379b0e23442bcc813fa938f822bff4a2eae3928bd687ed061587cc5620a850d not found: ID does not exist" containerID="e379b0e23442bcc813fa938f822bff4a2eae3928bd687ed061587cc5620a850d" Oct 13 17:27:32 crc kubenswrapper[4720]: I1013 17:27:32.028280 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e379b0e23442bcc813fa938f822bff4a2eae3928bd687ed061587cc5620a850d"} err="failed to get container status 
\"e379b0e23442bcc813fa938f822bff4a2eae3928bd687ed061587cc5620a850d\": rpc error: code = NotFound desc = could not find container \"e379b0e23442bcc813fa938f822bff4a2eae3928bd687ed061587cc5620a850d\": container with ID starting with e379b0e23442bcc813fa938f822bff4a2eae3928bd687ed061587cc5620a850d not found: ID does not exist" Oct 13 17:27:32 crc kubenswrapper[4720]: I1013 17:27:32.028294 4720 scope.go:117] "RemoveContainer" containerID="658f335d1cbc20b87cd50d7925a4e563accd9093e8a36ee90c84a2bd0052fe20" Oct 13 17:27:32 crc kubenswrapper[4720]: E1013 17:27:32.028547 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"658f335d1cbc20b87cd50d7925a4e563accd9093e8a36ee90c84a2bd0052fe20\": container with ID starting with 658f335d1cbc20b87cd50d7925a4e563accd9093e8a36ee90c84a2bd0052fe20 not found: ID does not exist" containerID="658f335d1cbc20b87cd50d7925a4e563accd9093e8a36ee90c84a2bd0052fe20" Oct 13 17:27:32 crc kubenswrapper[4720]: I1013 17:27:32.028610 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"658f335d1cbc20b87cd50d7925a4e563accd9093e8a36ee90c84a2bd0052fe20"} err="failed to get container status \"658f335d1cbc20b87cd50d7925a4e563accd9093e8a36ee90c84a2bd0052fe20\": rpc error: code = NotFound desc = could not find container \"658f335d1cbc20b87cd50d7925a4e563accd9093e8a36ee90c84a2bd0052fe20\": container with ID starting with 658f335d1cbc20b87cd50d7925a4e563accd9093e8a36ee90c84a2bd0052fe20 not found: ID does not exist" Oct 13 17:27:33 crc kubenswrapper[4720]: I1013 17:27:33.178127 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a4310d4-4aaa-4de5-be61-46eb59ac5d9b" path="/var/lib/kubelet/pods/0a4310d4-4aaa-4de5-be61-46eb59ac5d9b/volumes" Oct 13 17:27:33 crc kubenswrapper[4720]: I1013 17:27:33.579468 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-85tbj" Oct 13 17:27:33 crc kubenswrapper[4720]: I1013 17:27:33.579527 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-85tbj" Oct 13 17:27:34 crc kubenswrapper[4720]: I1013 17:27:34.621384 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-85tbj" podUID="4ec46228-40b7-4dd0-b773-59e2b088ef17" containerName="registry-server" probeResult="failure" output=< Oct 13 17:27:34 crc kubenswrapper[4720]: timeout: failed to connect service ":50051" within 1s Oct 13 17:27:34 crc kubenswrapper[4720]: > Oct 13 17:27:34 crc kubenswrapper[4720]: I1013 17:27:34.838862 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-xbb4t"] Oct 13 17:27:43 crc kubenswrapper[4720]: I1013 17:27:43.614234 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-85tbj" Oct 13 17:27:43 crc kubenswrapper[4720]: I1013 17:27:43.650979 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-85tbj" Oct 13 17:27:45 crc kubenswrapper[4720]: I1013 17:27:45.213049 4720 patch_prober.go:28] interesting pod/machine-config-daemon-htwnl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 17:27:45 crc 
Oct 13 17:27:45 crc kubenswrapper[4720]: I1013 17:27:45.213133 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 13 17:27:45 crc kubenswrapper[4720]: I1013 17:27:45.213287 4720 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-htwnl"
Oct 13 17:27:45 crc kubenswrapper[4720]: I1013 17:27:45.216476 4720 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"864e330267c8cec8487e1dcf4a50bbb2ba37d81d80361f0873e6bf69de1ed721"} pod="openshift-machine-config-operator/machine-config-daemon-htwnl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Oct 13 17:27:45 crc kubenswrapper[4720]: I1013 17:27:45.216568 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" containerName="machine-config-daemon" containerID="cri-o://864e330267c8cec8487e1dcf4a50bbb2ba37d81d80361f0873e6bf69de1ed721" gracePeriod=600
Oct 13 17:27:46 crc kubenswrapper[4720]: I1013 17:27:46.023841 4720 generic.go:334] "Generic (PLEG): container finished" podID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" containerID="864e330267c8cec8487e1dcf4a50bbb2ba37d81d80361f0873e6bf69de1ed721" exitCode=0
Oct 13 17:27:46 crc kubenswrapper[4720]: I1013 17:27:46.023936 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" event={"ID":"ce442c80-fcde-4b79-b6f9-f8f25771dfd4","Type":"ContainerDied","Data":"864e330267c8cec8487e1dcf4a50bbb2ba37d81d80361f0873e6bf69de1ed721"}
Oct 13 17:27:46 crc kubenswrapper[4720]: I1013 17:27:46.024383 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" event={"ID":"ce442c80-fcde-4b79-b6f9-f8f25771dfd4","Type":"ContainerStarted","Data":"3fec166dfba0a192adca998429f2650ea80001802f6d04f7ec8b13f450a085ff"}
Oct 13 17:27:59 crc kubenswrapper[4720]: I1013 17:27:59.862151 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-xbb4t" podUID="5abe378d-2b00-4d15-af94-5141934fca47" containerName="oauth-openshift" containerID="cri-o://2acdbaa1aab485da3720f131327fb20dbda869b22199fb4c8f86d8c5b518897e" gracePeriod=15
Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.126573 4720 generic.go:334] "Generic (PLEG): container finished" podID="5abe378d-2b00-4d15-af94-5141934fca47" containerID="2acdbaa1aab485da3720f131327fb20dbda869b22199fb4c8f86d8c5b518897e" exitCode=0
Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.126629 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-xbb4t" event={"ID":"5abe378d-2b00-4d15-af94-5141934fca47","Type":"ContainerDied","Data":"2acdbaa1aab485da3720f131327fb20dbda869b22199fb4c8f86d8c5b518897e"}
Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.228258 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-xbb4t"
Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.248243 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5abe378d-2b00-4d15-af94-5141934fca47-v4-0-config-system-router-certs\") pod \"5abe378d-2b00-4d15-af94-5141934fca47\" (UID: \"5abe378d-2b00-4d15-af94-5141934fca47\") "
Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.248467 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5abe378d-2b00-4d15-af94-5141934fca47-v4-0-config-user-template-error\") pod \"5abe378d-2b00-4d15-af94-5141934fca47\" (UID: \"5abe378d-2b00-4d15-af94-5141934fca47\") "
Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.248503 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5abe378d-2b00-4d15-af94-5141934fca47-v4-0-config-user-idp-0-file-data\") pod \"5abe378d-2b00-4d15-af94-5141934fca47\" (UID: \"5abe378d-2b00-4d15-af94-5141934fca47\") "
Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.248541 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5abe378d-2b00-4d15-af94-5141934fca47-v4-0-config-system-service-ca\") pod \"5abe378d-2b00-4d15-af94-5141934fca47\" (UID: \"5abe378d-2b00-4d15-af94-5141934fca47\") "
Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.248580 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5abe378d-2b00-4d15-af94-5141934fca47-v4-0-config-system-serving-cert\") pod \"5abe378d-2b00-4d15-af94-5141934fca47\" (UID: \"5abe378d-2b00-4d15-af94-5141934fca47\") "
Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.248616 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5abe378d-2b00-4d15-af94-5141934fca47-audit-policies\") pod \"5abe378d-2b00-4d15-af94-5141934fca47\" (UID: \"5abe378d-2b00-4d15-af94-5141934fca47\") "
Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.248653 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5abe378d-2b00-4d15-af94-5141934fca47-v4-0-config-system-session\") pod \"5abe378d-2b00-4d15-af94-5141934fca47\" (UID: \"5abe378d-2b00-4d15-af94-5141934fca47\") "
Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.248690 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5abe378d-2b00-4d15-af94-5141934fca47-v4-0-config-system-ocp-branding-template\") pod \"5abe378d-2b00-4d15-af94-5141934fca47\" (UID: \"5abe378d-2b00-4d15-af94-5141934fca47\") "
Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.248729 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5abe378d-2b00-4d15-af94-5141934fca47-audit-dir\") pod \"5abe378d-2b00-4d15-af94-5141934fca47\" (UID: \"5abe378d-2b00-4d15-af94-5141934fca47\") "
Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.248753 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5abe378d-2b00-4d15-af94-5141934fca47-v4-0-config-system-trusted-ca-bundle\") pod \"5abe378d-2b00-4d15-af94-5141934fca47\" (UID: \"5abe378d-2b00-4d15-af94-5141934fca47\") "
Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.248786 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5abe378d-2b00-4d15-af94-5141934fca47-v4-0-config-user-template-login\") pod \"5abe378d-2b00-4d15-af94-5141934fca47\" (UID: \"5abe378d-2b00-4d15-af94-5141934fca47\") "
Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.248829 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pm6tt\" (UniqueName: \"kubernetes.io/projected/5abe378d-2b00-4d15-af94-5141934fca47-kube-api-access-pm6tt\") pod \"5abe378d-2b00-4d15-af94-5141934fca47\" (UID: \"5abe378d-2b00-4d15-af94-5141934fca47\") "
Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.248877 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5abe378d-2b00-4d15-af94-5141934fca47-v4-0-config-system-cliconfig\") pod \"5abe378d-2b00-4d15-af94-5141934fca47\" (UID: \"5abe378d-2b00-4d15-af94-5141934fca47\") "
Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.248971 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5abe378d-2b00-4d15-af94-5141934fca47-v4-0-config-user-template-provider-selection\") pod \"5abe378d-2b00-4d15-af94-5141934fca47\" (UID: \"5abe378d-2b00-4d15-af94-5141934fca47\") "
Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.253544 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5abe378d-2b00-4d15-af94-5141934fca47-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "5abe378d-2b00-4d15-af94-5141934fca47" (UID: "5abe378d-2b00-4d15-af94-5141934fca47"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.254386 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5abe378d-2b00-4d15-af94-5141934fca47-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "5abe378d-2b00-4d15-af94-5141934fca47" (UID: "5abe378d-2b00-4d15-af94-5141934fca47"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.254982 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5abe378d-2b00-4d15-af94-5141934fca47-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "5abe378d-2b00-4d15-af94-5141934fca47" (UID: "5abe378d-2b00-4d15-af94-5141934fca47"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.255429 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5abe378d-2b00-4d15-af94-5141934fca47-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "5abe378d-2b00-4d15-af94-5141934fca47" (UID: "5abe378d-2b00-4d15-af94-5141934fca47"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.256614 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5abe378d-2b00-4d15-af94-5141934fca47-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "5abe378d-2b00-4d15-af94-5141934fca47" (UID: "5abe378d-2b00-4d15-af94-5141934fca47"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.301350 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5abe378d-2b00-4d15-af94-5141934fca47-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "5abe378d-2b00-4d15-af94-5141934fca47" (UID: "5abe378d-2b00-4d15-af94-5141934fca47"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.301975 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5abe378d-2b00-4d15-af94-5141934fca47-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "5abe378d-2b00-4d15-af94-5141934fca47" (UID: "5abe378d-2b00-4d15-af94-5141934fca47"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.302249 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5abe378d-2b00-4d15-af94-5141934fca47-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "5abe378d-2b00-4d15-af94-5141934fca47" (UID: "5abe378d-2b00-4d15-af94-5141934fca47"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.302584 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5abe378d-2b00-4d15-af94-5141934fca47-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "5abe378d-2b00-4d15-af94-5141934fca47" (UID: "5abe378d-2b00-4d15-af94-5141934fca47"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.302901 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5abe378d-2b00-4d15-af94-5141934fca47-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "5abe378d-2b00-4d15-af94-5141934fca47" (UID: "5abe378d-2b00-4d15-af94-5141934fca47"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.303922 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5abe378d-2b00-4d15-af94-5141934fca47-kube-api-access-pm6tt" (OuterVolumeSpecName: "kube-api-access-pm6tt") pod "5abe378d-2b00-4d15-af94-5141934fca47" (UID: "5abe378d-2b00-4d15-af94-5141934fca47"). InnerVolumeSpecName "kube-api-access-pm6tt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.310364 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5abe378d-2b00-4d15-af94-5141934fca47-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "5abe378d-2b00-4d15-af94-5141934fca47" (UID: "5abe378d-2b00-4d15-af94-5141934fca47"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.310709 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5abe378d-2b00-4d15-af94-5141934fca47-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "5abe378d-2b00-4d15-af94-5141934fca47" (UID: "5abe378d-2b00-4d15-af94-5141934fca47"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.310938 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-846dc6fc5d-qzkqr"]
Oct 13 17:28:00 crc kubenswrapper[4720]: E1013 17:28:00.311175 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a4310d4-4aaa-4de5-be61-46eb59ac5d9b" containerName="extract-utilities"
Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.311209 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a4310d4-4aaa-4de5-be61-46eb59ac5d9b" containerName="extract-utilities"
Oct 13 17:28:00 crc kubenswrapper[4720]: E1013 17:28:00.311224 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="949931eb-c57d-40eb-a3da-f19524cbaf1d" containerName="registry-server"
Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.311233 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="949931eb-c57d-40eb-a3da-f19524cbaf1d" containerName="registry-server"
Oct 13 17:28:00 crc kubenswrapper[4720]: E1013 17:28:00.311243 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5abe378d-2b00-4d15-af94-5141934fca47" containerName="oauth-openshift"
Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.311251 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="5abe378d-2b00-4d15-af94-5141934fca47" containerName="oauth-openshift"
Oct 13 17:28:00 crc kubenswrapper[4720]: E1013 17:28:00.311261 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="949931eb-c57d-40eb-a3da-f19524cbaf1d" containerName="extract-utilities"
Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.311268 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="949931eb-c57d-40eb-a3da-f19524cbaf1d" containerName="extract-utilities"
Oct 13 17:28:00 crc kubenswrapper[4720]: E1013 17:28:00.311280 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="299b146d-27f2-455c-8d9a-8ae9829d570c" containerName="pruner"
"Deleted CPUSet assignment" podUID="299b146d-27f2-455c-8d9a-8ae9829d570c" containerName="pruner" Oct 13 17:28:00 crc kubenswrapper[4720]: E1013 17:28:00.311297 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24757001-c6f3-4ecc-9b71-8fa697a28390" containerName="extract-utilities" Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.311305 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="24757001-c6f3-4ecc-9b71-8fa697a28390" containerName="extract-utilities" Oct 13 17:28:00 crc kubenswrapper[4720]: E1013 17:28:00.311317 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a4310d4-4aaa-4de5-be61-46eb59ac5d9b" containerName="extract-content" Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.311325 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a4310d4-4aaa-4de5-be61-46eb59ac5d9b" containerName="extract-content" Oct 13 17:28:00 crc kubenswrapper[4720]: E1013 17:28:00.311337 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24757001-c6f3-4ecc-9b71-8fa697a28390" containerName="extract-content" Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.311347 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="24757001-c6f3-4ecc-9b71-8fa697a28390" containerName="extract-content" Oct 13 17:28:00 crc kubenswrapper[4720]: E1013 17:28:00.311357 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a30ffbf-3240-409c-8c06-d9e12a160aab" containerName="registry-server" Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.311364 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a30ffbf-3240-409c-8c06-d9e12a160aab" containerName="registry-server" Oct 13 17:28:00 crc kubenswrapper[4720]: E1013 17:28:00.311374 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a30ffbf-3240-409c-8c06-d9e12a160aab" containerName="extract-content" Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.311382 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a30ffbf-3240-409c-8c06-d9e12a160aab" containerName="extract-content" Oct 13 17:28:00 crc kubenswrapper[4720]: E1013 17:28:00.311393 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="949931eb-c57d-40eb-a3da-f19524cbaf1d" containerName="extract-content" Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.311400 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="949931eb-c57d-40eb-a3da-f19524cbaf1d" containerName="extract-content" Oct 13 17:28:00 crc kubenswrapper[4720]: E1013 17:28:00.311408 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24757001-c6f3-4ecc-9b71-8fa697a28390" containerName="registry-server" Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.311415 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="24757001-c6f3-4ecc-9b71-8fa697a28390" containerName="registry-server" Oct 13 17:28:00 crc kubenswrapper[4720]: E1013 17:28:00.311423 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a30ffbf-3240-409c-8c06-d9e12a160aab" containerName="extract-utilities" Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.311430 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a30ffbf-3240-409c-8c06-d9e12a160aab" containerName="extract-utilities" Oct 13 17:28:00 crc kubenswrapper[4720]: E1013 17:28:00.311442 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3508e8e0-82f4-4f2e-92d7-32cca02f3656" containerName="pruner" Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.311449 4720 
state_mem.go:107] "Deleted CPUSet assignment" podUID="3508e8e0-82f4-4f2e-92d7-32cca02f3656" containerName="pruner" Oct 13 17:28:00 crc kubenswrapper[4720]: E1013 17:28:00.311462 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a4310d4-4aaa-4de5-be61-46eb59ac5d9b" containerName="registry-server" Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.311469 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a4310d4-4aaa-4de5-be61-46eb59ac5d9b" containerName="registry-server" Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.311575 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="3508e8e0-82f4-4f2e-92d7-32cca02f3656" containerName="pruner" Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.311587 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a4310d4-4aaa-4de5-be61-46eb59ac5d9b" containerName="registry-server" Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.311596 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="299b146d-27f2-455c-8d9a-8ae9829d570c" containerName="pruner" Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.311581 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5abe378d-2b00-4d15-af94-5141934fca47-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "5abe378d-2b00-4d15-af94-5141934fca47" (UID: "5abe378d-2b00-4d15-af94-5141934fca47"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.311603 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a30ffbf-3240-409c-8c06-d9e12a160aab" containerName="registry-server" Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.311673 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="949931eb-c57d-40eb-a3da-f19524cbaf1d" containerName="registry-server" Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.311691 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="5abe378d-2b00-4d15-af94-5141934fca47" containerName="oauth-openshift" Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.311706 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="24757001-c6f3-4ecc-9b71-8fa697a28390" containerName="registry-server" Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.312244 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-846dc6fc5d-qzkqr" Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.326085 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-846dc6fc5d-qzkqr"] Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.350316 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/30423fb4-47ee-4fbd-ae6a-d4cd8c84a636-v4-0-config-system-router-certs\") pod \"oauth-openshift-846dc6fc5d-qzkqr\" (UID: \"30423fb4-47ee-4fbd-ae6a-d4cd8c84a636\") " pod="openshift-authentication/oauth-openshift-846dc6fc5d-qzkqr" Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.350364 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/30423fb4-47ee-4fbd-ae6a-d4cd8c84a636-audit-policies\") pod \"oauth-openshift-846dc6fc5d-qzkqr\" (UID: \"30423fb4-47ee-4fbd-ae6a-d4cd8c84a636\") " pod="openshift-authentication/oauth-openshift-846dc6fc5d-qzkqr" Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.350386 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/30423fb4-47ee-4fbd-ae6a-d4cd8c84a636-v4-0-config-system-cliconfig\") pod \"oauth-openshift-846dc6fc5d-qzkqr\" (UID: \"30423fb4-47ee-4fbd-ae6a-d4cd8c84a636\") " pod="openshift-authentication/oauth-openshift-846dc6fc5d-qzkqr" Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.350402 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/30423fb4-47ee-4fbd-ae6a-d4cd8c84a636-v4-0-config-user-template-login\") pod \"oauth-openshift-846dc6fc5d-qzkqr\" (UID: \"30423fb4-47ee-4fbd-ae6a-d4cd8c84a636\") " pod="openshift-authentication/oauth-openshift-846dc6fc5d-qzkqr" Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.350423 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/30423fb4-47ee-4fbd-ae6a-d4cd8c84a636-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-846dc6fc5d-qzkqr\" (UID: \"30423fb4-47ee-4fbd-ae6a-d4cd8c84a636\") " pod="openshift-authentication/oauth-openshift-846dc6fc5d-qzkqr" Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.350443 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/30423fb4-47ee-4fbd-ae6a-d4cd8c84a636-v4-0-config-system-serving-cert\") pod \"oauth-openshift-846dc6fc5d-qzkqr\" (UID: \"30423fb4-47ee-4fbd-ae6a-d4cd8c84a636\") " pod="openshift-authentication/oauth-openshift-846dc6fc5d-qzkqr" Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.350464 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/30423fb4-47ee-4fbd-ae6a-d4cd8c84a636-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-846dc6fc5d-qzkqr\" (UID: \"30423fb4-47ee-4fbd-ae6a-d4cd8c84a636\") " pod="openshift-authentication/oauth-openshift-846dc6fc5d-qzkqr" Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 
17:28:00.350487 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsjrs\" (UniqueName: \"kubernetes.io/projected/30423fb4-47ee-4fbd-ae6a-d4cd8c84a636-kube-api-access-gsjrs\") pod \"oauth-openshift-846dc6fc5d-qzkqr\" (UID: \"30423fb4-47ee-4fbd-ae6a-d4cd8c84a636\") " pod="openshift-authentication/oauth-openshift-846dc6fc5d-qzkqr" Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.350506 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/30423fb4-47ee-4fbd-ae6a-d4cd8c84a636-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-846dc6fc5d-qzkqr\" (UID: \"30423fb4-47ee-4fbd-ae6a-d4cd8c84a636\") " pod="openshift-authentication/oauth-openshift-846dc6fc5d-qzkqr" Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.350523 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/30423fb4-47ee-4fbd-ae6a-d4cd8c84a636-v4-0-config-user-template-error\") pod \"oauth-openshift-846dc6fc5d-qzkqr\" (UID: \"30423fb4-47ee-4fbd-ae6a-d4cd8c84a636\") " pod="openshift-authentication/oauth-openshift-846dc6fc5d-qzkqr" Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.350543 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/30423fb4-47ee-4fbd-ae6a-d4cd8c84a636-audit-dir\") pod \"oauth-openshift-846dc6fc5d-qzkqr\" (UID: \"30423fb4-47ee-4fbd-ae6a-d4cd8c84a636\") " pod="openshift-authentication/oauth-openshift-846dc6fc5d-qzkqr" Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.350568 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/30423fb4-47ee-4fbd-ae6a-d4cd8c84a636-v4-0-config-system-service-ca\") pod \"oauth-openshift-846dc6fc5d-qzkqr\" (UID: \"30423fb4-47ee-4fbd-ae6a-d4cd8c84a636\") " pod="openshift-authentication/oauth-openshift-846dc6fc5d-qzkqr" Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.350583 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/30423fb4-47ee-4fbd-ae6a-d4cd8c84a636-v4-0-config-system-session\") pod \"oauth-openshift-846dc6fc5d-qzkqr\" (UID: \"30423fb4-47ee-4fbd-ae6a-d4cd8c84a636\") " pod="openshift-authentication/oauth-openshift-846dc6fc5d-qzkqr" Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.350695 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/30423fb4-47ee-4fbd-ae6a-d4cd8c84a636-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-846dc6fc5d-qzkqr\" (UID: \"30423fb4-47ee-4fbd-ae6a-d4cd8c84a636\") " pod="openshift-authentication/oauth-openshift-846dc6fc5d-qzkqr" Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.350777 4720 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5abe378d-2b00-4d15-af94-5141934fca47-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.350789 4720 reconciler_common.go:293] "Volume detached 
for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5abe378d-2b00-4d15-af94-5141934fca47-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.350799 4720 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5abe378d-2b00-4d15-af94-5141934fca47-audit-policies\") on node \"crc\" DevicePath \"\"" Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.350811 4720 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5abe378d-2b00-4d15-af94-5141934fca47-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.350821 4720 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5abe378d-2b00-4d15-af94-5141934fca47-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.350833 4720 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5abe378d-2b00-4d15-af94-5141934fca47-audit-dir\") on node \"crc\" DevicePath \"\"" Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.350841 4720 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5abe378d-2b00-4d15-af94-5141934fca47-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.350850 4720 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5abe378d-2b00-4d15-af94-5141934fca47-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.350860 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pm6tt\" (UniqueName: \"kubernetes.io/projected/5abe378d-2b00-4d15-af94-5141934fca47-kube-api-access-pm6tt\") on node \"crc\" DevicePath \"\"" Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.350869 4720 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5abe378d-2b00-4d15-af94-5141934fca47-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.350879 4720 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5abe378d-2b00-4d15-af94-5141934fca47-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.350890 4720 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5abe378d-2b00-4d15-af94-5141934fca47-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.350900 4720 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5abe378d-2b00-4d15-af94-5141934fca47-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.350908 4720 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5abe378d-2b00-4d15-af94-5141934fca47-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.452090 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/30423fb4-47ee-4fbd-ae6a-d4cd8c84a636-v4-0-config-system-service-ca\") pod \"oauth-openshift-846dc6fc5d-qzkqr\" (UID: \"30423fb4-47ee-4fbd-ae6a-d4cd8c84a636\") " pod="openshift-authentication/oauth-openshift-846dc6fc5d-qzkqr" Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.452528 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/30423fb4-47ee-4fbd-ae6a-d4cd8c84a636-v4-0-config-system-session\") pod \"oauth-openshift-846dc6fc5d-qzkqr\" (UID: \"30423fb4-47ee-4fbd-ae6a-d4cd8c84a636\") " pod="openshift-authentication/oauth-openshift-846dc6fc5d-qzkqr" Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.452596 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/30423fb4-47ee-4fbd-ae6a-d4cd8c84a636-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-846dc6fc5d-qzkqr\" (UID: \"30423fb4-47ee-4fbd-ae6a-d4cd8c84a636\") " pod="openshift-authentication/oauth-openshift-846dc6fc5d-qzkqr" Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.452639 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/30423fb4-47ee-4fbd-ae6a-d4cd8c84a636-v4-0-config-system-router-certs\") pod \"oauth-openshift-846dc6fc5d-qzkqr\" (UID: \"30423fb4-47ee-4fbd-ae6a-d4cd8c84a636\") " pod="openshift-authentication/oauth-openshift-846dc6fc5d-qzkqr" Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.452669 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/30423fb4-47ee-4fbd-ae6a-d4cd8c84a636-audit-policies\") pod \"oauth-openshift-846dc6fc5d-qzkqr\" (UID: \"30423fb4-47ee-4fbd-ae6a-d4cd8c84a636\") " pod="openshift-authentication/oauth-openshift-846dc6fc5d-qzkqr" Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.452695 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/30423fb4-47ee-4fbd-ae6a-d4cd8c84a636-v4-0-config-system-cliconfig\") pod \"oauth-openshift-846dc6fc5d-qzkqr\" (UID: \"30423fb4-47ee-4fbd-ae6a-d4cd8c84a636\") " pod="openshift-authentication/oauth-openshift-846dc6fc5d-qzkqr" Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.452715 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/30423fb4-47ee-4fbd-ae6a-d4cd8c84a636-v4-0-config-user-template-login\") pod \"oauth-openshift-846dc6fc5d-qzkqr\" (UID: \"30423fb4-47ee-4fbd-ae6a-d4cd8c84a636\") " pod="openshift-authentication/oauth-openshift-846dc6fc5d-qzkqr" Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.453502 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/30423fb4-47ee-4fbd-ae6a-d4cd8c84a636-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-846dc6fc5d-qzkqr\" (UID: \"30423fb4-47ee-4fbd-ae6a-d4cd8c84a636\") " pod="openshift-authentication/oauth-openshift-846dc6fc5d-qzkqr" Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.453615 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/30423fb4-47ee-4fbd-ae6a-d4cd8c84a636-audit-policies\") pod \"oauth-openshift-846dc6fc5d-qzkqr\" (UID: \"30423fb4-47ee-4fbd-ae6a-d4cd8c84a636\") " pod="openshift-authentication/oauth-openshift-846dc6fc5d-qzkqr" Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.453644 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/30423fb4-47ee-4fbd-ae6a-d4cd8c84a636-v4-0-config-system-serving-cert\") pod \"oauth-openshift-846dc6fc5d-qzkqr\" (UID: \"30423fb4-47ee-4fbd-ae6a-d4cd8c84a636\") " pod="openshift-authentication/oauth-openshift-846dc6fc5d-qzkqr" Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.453515 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/30423fb4-47ee-4fbd-ae6a-d4cd8c84a636-v4-0-config-system-service-ca\") pod \"oauth-openshift-846dc6fc5d-qzkqr\" (UID: \"30423fb4-47ee-4fbd-ae6a-d4cd8c84a636\") " pod="openshift-authentication/oauth-openshift-846dc6fc5d-qzkqr" Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.453755 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/30423fb4-47ee-4fbd-ae6a-d4cd8c84a636-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-846dc6fc5d-qzkqr\" (UID: \"30423fb4-47ee-4fbd-ae6a-d4cd8c84a636\") " pod="openshift-authentication/oauth-openshift-846dc6fc5d-qzkqr" Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.453888 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsjrs\" (UniqueName: \"kubernetes.io/projected/30423fb4-47ee-4fbd-ae6a-d4cd8c84a636-kube-api-access-gsjrs\") pod \"oauth-openshift-846dc6fc5d-qzkqr\" (UID: \"30423fb4-47ee-4fbd-ae6a-d4cd8c84a636\") " pod="openshift-authentication/oauth-openshift-846dc6fc5d-qzkqr" Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.453971 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/30423fb4-47ee-4fbd-ae6a-d4cd8c84a636-v4-0-config-system-cliconfig\") pod \"oauth-openshift-846dc6fc5d-qzkqr\" (UID: \"30423fb4-47ee-4fbd-ae6a-d4cd8c84a636\") " pod="openshift-authentication/oauth-openshift-846dc6fc5d-qzkqr" Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.454001 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/30423fb4-47ee-4fbd-ae6a-d4cd8c84a636-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-846dc6fc5d-qzkqr\" (UID: \"30423fb4-47ee-4fbd-ae6a-d4cd8c84a636\") " pod="openshift-authentication/oauth-openshift-846dc6fc5d-qzkqr" Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.454059 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/30423fb4-47ee-4fbd-ae6a-d4cd8c84a636-v4-0-config-user-template-error\") pod \"oauth-openshift-846dc6fc5d-qzkqr\" (UID: \"30423fb4-47ee-4fbd-ae6a-d4cd8c84a636\") " pod="openshift-authentication/oauth-openshift-846dc6fc5d-qzkqr" Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.454116 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/30423fb4-47ee-4fbd-ae6a-d4cd8c84a636-audit-dir\") pod \"oauth-openshift-846dc6fc5d-qzkqr\" (UID: \"30423fb4-47ee-4fbd-ae6a-d4cd8c84a636\") " pod="openshift-authentication/oauth-openshift-846dc6fc5d-qzkqr" Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.454278 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/30423fb4-47ee-4fbd-ae6a-d4cd8c84a636-audit-dir\") pod \"oauth-openshift-846dc6fc5d-qzkqr\" (UID: \"30423fb4-47ee-4fbd-ae6a-d4cd8c84a636\") " pod="openshift-authentication/oauth-openshift-846dc6fc5d-qzkqr" Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.456023 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/30423fb4-47ee-4fbd-ae6a-d4cd8c84a636-v4-0-config-system-session\") pod \"oauth-openshift-846dc6fc5d-qzkqr\" (UID: \"30423fb4-47ee-4fbd-ae6a-d4cd8c84a636\") " pod="openshift-authentication/oauth-openshift-846dc6fc5d-qzkqr" Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.456135 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/30423fb4-47ee-4fbd-ae6a-d4cd8c84a636-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-846dc6fc5d-qzkqr\" (UID: \"30423fb4-47ee-4fbd-ae6a-d4cd8c84a636\") " pod="openshift-authentication/oauth-openshift-846dc6fc5d-qzkqr" Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.457760 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/30423fb4-47ee-4fbd-ae6a-d4cd8c84a636-v4-0-config-system-router-certs\") pod \"oauth-openshift-846dc6fc5d-qzkqr\" (UID: \"30423fb4-47ee-4fbd-ae6a-d4cd8c84a636\") " pod="openshift-authentication/oauth-openshift-846dc6fc5d-qzkqr" Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.458288 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/30423fb4-47ee-4fbd-ae6a-d4cd8c84a636-v4-0-config-user-template-error\") pod \"oauth-openshift-846dc6fc5d-qzkqr\" (UID: \"30423fb4-47ee-4fbd-ae6a-d4cd8c84a636\") " pod="openshift-authentication/oauth-openshift-846dc6fc5d-qzkqr" Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.458500 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/30423fb4-47ee-4fbd-ae6a-d4cd8c84a636-v4-0-config-system-serving-cert\") pod \"oauth-openshift-846dc6fc5d-qzkqr\" (UID: \"30423fb4-47ee-4fbd-ae6a-d4cd8c84a636\") " pod="openshift-authentication/oauth-openshift-846dc6fc5d-qzkqr" Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.459372 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/30423fb4-47ee-4fbd-ae6a-d4cd8c84a636-v4-0-config-user-template-login\") pod \"oauth-openshift-846dc6fc5d-qzkqr\" (UID: 
\"30423fb4-47ee-4fbd-ae6a-d4cd8c84a636\") " pod="openshift-authentication/oauth-openshift-846dc6fc5d-qzkqr" Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.459849 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/30423fb4-47ee-4fbd-ae6a-d4cd8c84a636-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-846dc6fc5d-qzkqr\" (UID: \"30423fb4-47ee-4fbd-ae6a-d4cd8c84a636\") " pod="openshift-authentication/oauth-openshift-846dc6fc5d-qzkqr" Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.461534 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/30423fb4-47ee-4fbd-ae6a-d4cd8c84a636-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-846dc6fc5d-qzkqr\" (UID: \"30423fb4-47ee-4fbd-ae6a-d4cd8c84a636\") " pod="openshift-authentication/oauth-openshift-846dc6fc5d-qzkqr" Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.463508 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/30423fb4-47ee-4fbd-ae6a-d4cd8c84a636-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-846dc6fc5d-qzkqr\" (UID: \"30423fb4-47ee-4fbd-ae6a-d4cd8c84a636\") " pod="openshift-authentication/oauth-openshift-846dc6fc5d-qzkqr" Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.471089 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsjrs\" (UniqueName: \"kubernetes.io/projected/30423fb4-47ee-4fbd-ae6a-d4cd8c84a636-kube-api-access-gsjrs\") pod \"oauth-openshift-846dc6fc5d-qzkqr\" (UID: \"30423fb4-47ee-4fbd-ae6a-d4cd8c84a636\") " pod="openshift-authentication/oauth-openshift-846dc6fc5d-qzkqr" Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.628183 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-846dc6fc5d-qzkqr" Oct 13 17:28:00 crc kubenswrapper[4720]: I1013 17:28:00.853664 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-846dc6fc5d-qzkqr"] Oct 13 17:28:01 crc kubenswrapper[4720]: I1013 17:28:01.137762 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-xbb4t" event={"ID":"5abe378d-2b00-4d15-af94-5141934fca47","Type":"ContainerDied","Data":"3abf26304a2065a4747b662d067d3f80e47e9be38d84e3f4244da8b9eb7b63d4"} Oct 13 17:28:01 crc kubenswrapper[4720]: I1013 17:28:01.138508 4720 scope.go:117] "RemoveContainer" containerID="2acdbaa1aab485da3720f131327fb20dbda869b22199fb4c8f86d8c5b518897e" Oct 13 17:28:01 crc kubenswrapper[4720]: I1013 17:28:01.138543 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-xbb4t" Oct 13 17:28:01 crc kubenswrapper[4720]: I1013 17:28:01.142620 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-846dc6fc5d-qzkqr" event={"ID":"30423fb4-47ee-4fbd-ae6a-d4cd8c84a636","Type":"ContainerStarted","Data":"22208bedbf8bd6d506d4edc3d37784dd966df945006c01076f75fab99629a356"} Oct 13 17:28:01 crc kubenswrapper[4720]: I1013 17:28:01.142682 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-846dc6fc5d-qzkqr" event={"ID":"30423fb4-47ee-4fbd-ae6a-d4cd8c84a636","Type":"ContainerStarted","Data":"480901d9df3f023ae58d8bd0f6bf1123d396b10a1f4a8f53defbcd99e80ab835"} Oct 13 17:28:01 crc kubenswrapper[4720]: I1013 17:28:01.142987 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-846dc6fc5d-qzkqr" Oct 13 17:28:01 crc kubenswrapper[4720]: I1013 17:28:01.145154 4720 patch_prober.go:28] interesting pod/oauth-openshift-846dc6fc5d-qzkqr container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.54:6443/healthz\": dial tcp 10.217.0.54:6443: connect: connection refused" start-of-body= Oct 13 17:28:01 crc kubenswrapper[4720]: I1013 17:28:01.145258 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-846dc6fc5d-qzkqr" podUID="30423fb4-47ee-4fbd-ae6a-d4cd8c84a636" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.54:6443/healthz\": dial tcp 10.217.0.54:6443: connect: connection refused" Oct 13 17:28:01 crc kubenswrapper[4720]: I1013 17:28:01.193474 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-846dc6fc5d-qzkqr" podStartSLOduration=27.193442654 podStartE2EDuration="27.193442654s" podCreationTimestamp="2025-10-13 17:27:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:28:01.192084389 +0000 UTC m=+226.649334551" watchObservedRunningTime="2025-10-13 17:28:01.193442654 +0000 UTC m=+226.650692826" Oct 13 17:28:01 crc kubenswrapper[4720]: I1013 17:28:01.216940 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-xbb4t"] Oct 13 17:28:01 crc kubenswrapper[4720]: I1013 17:28:01.225382 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-xbb4t"] Oct 13 17:28:02 crc kubenswrapper[4720]: I1013 17:28:02.160855 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-846dc6fc5d-qzkqr" Oct 13 17:28:03 crc kubenswrapper[4720]: I1013 17:28:03.180224 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5abe378d-2b00-4d15-af94-5141934fca47" path="/var/lib/kubelet/pods/5abe378d-2b00-4d15-af94-5141934fca47/volumes" Oct 13 17:28:22 crc kubenswrapper[4720]: I1013 17:28:22.097130 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6n8tc"] Oct 13 17:28:22 crc kubenswrapper[4720]: I1013 17:28:22.097906 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6n8tc" podUID="45a4b9f0-8d20-4a69-bd03-2c54f1a66867" containerName="registry-server" 
containerID="cri-o://cf65fe1c986ef26b2cbcd07d95fc4524019664b900ead71f49b7b97a7370ccac" gracePeriod=30 Oct 13 17:28:22 crc kubenswrapper[4720]: I1013 17:28:22.115913 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-85tbj"] Oct 13 17:28:22 crc kubenswrapper[4720]: I1013 17:28:22.116729 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-85tbj" podUID="4ec46228-40b7-4dd0-b773-59e2b088ef17" containerName="registry-server" containerID="cri-o://c98c132ce99eded322c8469d1679988dbcf6133d7958fe46a0a0873132c6d7f1" gracePeriod=30 Oct 13 17:28:22 crc kubenswrapper[4720]: I1013 17:28:22.131691 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xpdm2"] Oct 13 17:28:22 crc kubenswrapper[4720]: I1013 17:28:22.132038 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-xpdm2" podUID="5b9cfa7f-e80a-42b8-b6f0-239165447812" containerName="marketplace-operator" containerID="cri-o://d903ac75d4bb20e2c8d8bdbf3c6afe828dd45ea657c0f7c6ef565f941a5e1515" gracePeriod=30 Oct 13 17:28:22 crc kubenswrapper[4720]: I1013 17:28:22.136381 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dp5pr"] Oct 13 17:28:22 crc kubenswrapper[4720]: I1013 17:28:22.136723 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-dp5pr" podUID="cba388bf-d5f4-4a4e-8add-d8e4d7489f14" containerName="registry-server" containerID="cri-o://b1a3a0262a0118f50aac13c998dc0a0112a4fd47dfd4aa7ef890f7df6b4556fd" gracePeriod=30 Oct 13 17:28:22 crc kubenswrapper[4720]: I1013 17:28:22.153275 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wgslm"] Oct 13 17:28:22 crc kubenswrapper[4720]: I1013 17:28:22.154005 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-wgslm" Oct 13 17:28:22 crc kubenswrapper[4720]: I1013 17:28:22.156524 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-skh6q"] Oct 13 17:28:22 crc kubenswrapper[4720]: I1013 17:28:22.156806 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-skh6q" podUID="cca499e4-939e-4ba7-b98b-2965e14da5c3" containerName="registry-server" containerID="cri-o://acd6732fa35e11a229552026ed184ffe35dee5797908a4230bdc446c7bd0d5a0" gracePeriod=30 Oct 13 17:28:22 crc kubenswrapper[4720]: I1013 17:28:22.166378 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wgslm"] Oct 13 17:28:22 crc kubenswrapper[4720]: I1013 17:28:22.266563 4720 generic.go:334] "Generic (PLEG): container finished" podID="45a4b9f0-8d20-4a69-bd03-2c54f1a66867" containerID="cf65fe1c986ef26b2cbcd07d95fc4524019664b900ead71f49b7b97a7370ccac" exitCode=0 Oct 13 17:28:22 crc kubenswrapper[4720]: I1013 17:28:22.266635 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6n8tc" event={"ID":"45a4b9f0-8d20-4a69-bd03-2c54f1a66867","Type":"ContainerDied","Data":"cf65fe1c986ef26b2cbcd07d95fc4524019664b900ead71f49b7b97a7370ccac"} Oct 13 17:28:22 crc kubenswrapper[4720]: I1013 17:28:22.268435 4720 generic.go:334] "Generic (PLEG): container finished" podID="cba388bf-d5f4-4a4e-8add-d8e4d7489f14" containerID="b1a3a0262a0118f50aac13c998dc0a0112a4fd47dfd4aa7ef890f7df6b4556fd" exitCode=0 Oct 13 17:28:22 crc kubenswrapper[4720]: I1013 17:28:22.268482 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dp5pr" event={"ID":"cba388bf-d5f4-4a4e-8add-d8e4d7489f14","Type":"ContainerDied","Data":"b1a3a0262a0118f50aac13c998dc0a0112a4fd47dfd4aa7ef890f7df6b4556fd"} Oct 13 17:28:22 crc kubenswrapper[4720]: I1013 17:28:22.269770 4720 generic.go:334] "Generic (PLEG): container finished" podID="5b9cfa7f-e80a-42b8-b6f0-239165447812" containerID="d903ac75d4bb20e2c8d8bdbf3c6afe828dd45ea657c0f7c6ef565f941a5e1515" exitCode=0 Oct 13 17:28:22 crc kubenswrapper[4720]: I1013 17:28:22.269832 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-xpdm2" event={"ID":"5b9cfa7f-e80a-42b8-b6f0-239165447812","Type":"ContainerDied","Data":"d903ac75d4bb20e2c8d8bdbf3c6afe828dd45ea657c0f7c6ef565f941a5e1515"} Oct 13 17:28:22 crc kubenswrapper[4720]: I1013 17:28:22.271489 4720 generic.go:334] "Generic (PLEG): container finished" podID="4ec46228-40b7-4dd0-b773-59e2b088ef17" containerID="c98c132ce99eded322c8469d1679988dbcf6133d7958fe46a0a0873132c6d7f1" exitCode=0 Oct 13 17:28:22 crc kubenswrapper[4720]: I1013 17:28:22.271506 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-85tbj" event={"ID":"4ec46228-40b7-4dd0-b773-59e2b088ef17","Type":"ContainerDied","Data":"c98c132ce99eded322c8469d1679988dbcf6133d7958fe46a0a0873132c6d7f1"} Oct 13 17:28:22 crc kubenswrapper[4720]: I1013 17:28:22.347469 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8524b73b-8e30-4e35-bc36-1b3c9e911ad0-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-wgslm\" (UID: \"8524b73b-8e30-4e35-bc36-1b3c9e911ad0\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-wgslm" Oct 13 17:28:22 crc kubenswrapper[4720]: I1013 17:28:22.347541 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8524b73b-8e30-4e35-bc36-1b3c9e911ad0-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-wgslm\" (UID: \"8524b73b-8e30-4e35-bc36-1b3c9e911ad0\") " pod="openshift-marketplace/marketplace-operator-79b997595-wgslm" Oct 13 17:28:22 crc kubenswrapper[4720]: I1013 17:28:22.347641 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psd9d\" (UniqueName: \"kubernetes.io/projected/8524b73b-8e30-4e35-bc36-1b3c9e911ad0-kube-api-access-psd9d\") pod \"marketplace-operator-79b997595-wgslm\" (UID: \"8524b73b-8e30-4e35-bc36-1b3c9e911ad0\") " pod="openshift-marketplace/marketplace-operator-79b997595-wgslm" Oct 13 17:28:22 crc kubenswrapper[4720]: I1013 17:28:22.449053 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psd9d\" (UniqueName: \"kubernetes.io/projected/8524b73b-8e30-4e35-bc36-1b3c9e911ad0-kube-api-access-psd9d\") pod \"marketplace-operator-79b997595-wgslm\" (UID: \"8524b73b-8e30-4e35-bc36-1b3c9e911ad0\") " pod="openshift-marketplace/marketplace-operator-79b997595-wgslm" Oct 13 17:28:22 crc kubenswrapper[4720]: I1013 17:28:22.449130 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8524b73b-8e30-4e35-bc36-1b3c9e911ad0-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-wgslm\" (UID: \"8524b73b-8e30-4e35-bc36-1b3c9e911ad0\") " pod="openshift-marketplace/marketplace-operator-79b997595-wgslm" Oct 13 17:28:22 crc kubenswrapper[4720]: I1013 17:28:22.449173 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8524b73b-8e30-4e35-bc36-1b3c9e911ad0-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-wgslm\" (UID: \"8524b73b-8e30-4e35-bc36-1b3c9e911ad0\") " pod="openshift-marketplace/marketplace-operator-79b997595-wgslm" Oct 13 17:28:22 crc kubenswrapper[4720]: I1013 17:28:22.451053 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8524b73b-8e30-4e35-bc36-1b3c9e911ad0-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-wgslm\" (UID: \"8524b73b-8e30-4e35-bc36-1b3c9e911ad0\") " pod="openshift-marketplace/marketplace-operator-79b997595-wgslm" Oct 13 17:28:22 crc kubenswrapper[4720]: I1013 17:28:22.462684 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8524b73b-8e30-4e35-bc36-1b3c9e911ad0-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-wgslm\" (UID: \"8524b73b-8e30-4e35-bc36-1b3c9e911ad0\") " pod="openshift-marketplace/marketplace-operator-79b997595-wgslm" Oct 13 17:28:22 crc kubenswrapper[4720]: I1013 17:28:22.466863 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psd9d\" (UniqueName: \"kubernetes.io/projected/8524b73b-8e30-4e35-bc36-1b3c9e911ad0-kube-api-access-psd9d\") pod \"marketplace-operator-79b997595-wgslm\" (UID: \"8524b73b-8e30-4e35-bc36-1b3c9e911ad0\") " pod="openshift-marketplace/marketplace-operator-79b997595-wgslm" 
Oct 13 17:28:22 crc kubenswrapper[4720]: I1013 17:28:22.476984 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-wgslm" Oct 13 17:28:22 crc kubenswrapper[4720]: I1013 17:28:22.571115 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6n8tc" Oct 13 17:28:22 crc kubenswrapper[4720]: I1013 17:28:22.653041 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-xpdm2" Oct 13 17:28:22 crc kubenswrapper[4720]: I1013 17:28:22.659395 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-skh6q" Oct 13 17:28:22 crc kubenswrapper[4720]: I1013 17:28:22.676714 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-85tbj" Oct 13 17:28:22 crc kubenswrapper[4720]: I1013 17:28:22.683367 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dp5pr" Oct 13 17:28:22 crc kubenswrapper[4720]: I1013 17:28:22.755157 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45a4b9f0-8d20-4a69-bd03-2c54f1a66867-catalog-content\") pod \"45a4b9f0-8d20-4a69-bd03-2c54f1a66867\" (UID: \"45a4b9f0-8d20-4a69-bd03-2c54f1a66867\") " Oct 13 17:28:22 crc kubenswrapper[4720]: I1013 17:28:22.755254 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45a4b9f0-8d20-4a69-bd03-2c54f1a66867-utilities\") pod \"45a4b9f0-8d20-4a69-bd03-2c54f1a66867\" (UID: \"45a4b9f0-8d20-4a69-bd03-2c54f1a66867\") " Oct 13 17:28:22 crc kubenswrapper[4720]: I1013 17:28:22.755343 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5b9cfa7f-e80a-42b8-b6f0-239165447812-marketplace-operator-metrics\") pod \"5b9cfa7f-e80a-42b8-b6f0-239165447812\" (UID: \"5b9cfa7f-e80a-42b8-b6f0-239165447812\") " Oct 13 17:28:22 crc kubenswrapper[4720]: I1013 17:28:22.755421 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wsgtw\" (UniqueName: \"kubernetes.io/projected/5b9cfa7f-e80a-42b8-b6f0-239165447812-kube-api-access-wsgtw\") pod \"5b9cfa7f-e80a-42b8-b6f0-239165447812\" (UID: \"5b9cfa7f-e80a-42b8-b6f0-239165447812\") " Oct 13 17:28:22 crc kubenswrapper[4720]: I1013 17:28:22.755472 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62dvw\" (UniqueName: \"kubernetes.io/projected/45a4b9f0-8d20-4a69-bd03-2c54f1a66867-kube-api-access-62dvw\") pod \"45a4b9f0-8d20-4a69-bd03-2c54f1a66867\" (UID: \"45a4b9f0-8d20-4a69-bd03-2c54f1a66867\") " Oct 13 17:28:22 crc kubenswrapper[4720]: I1013 17:28:22.755500 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5b9cfa7f-e80a-42b8-b6f0-239165447812-marketplace-trusted-ca\") pod \"5b9cfa7f-e80a-42b8-b6f0-239165447812\" (UID: \"5b9cfa7f-e80a-42b8-b6f0-239165447812\") " Oct 13 17:28:22 crc kubenswrapper[4720]: I1013 17:28:22.756430 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/5b9cfa7f-e80a-42b8-b6f0-239165447812-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "5b9cfa7f-e80a-42b8-b6f0-239165447812" (UID: "5b9cfa7f-e80a-42b8-b6f0-239165447812"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:28:22 crc kubenswrapper[4720]: I1013 17:28:22.759386 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wgslm"] Oct 13 17:28:22 crc kubenswrapper[4720]: I1013 17:28:22.763263 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b9cfa7f-e80a-42b8-b6f0-239165447812-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "5b9cfa7f-e80a-42b8-b6f0-239165447812" (UID: "5b9cfa7f-e80a-42b8-b6f0-239165447812"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:28:22 crc kubenswrapper[4720]: I1013 17:28:22.764540 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45a4b9f0-8d20-4a69-bd03-2c54f1a66867-utilities" (OuterVolumeSpecName: "utilities") pod "45a4b9f0-8d20-4a69-bd03-2c54f1a66867" (UID: "45a4b9f0-8d20-4a69-bd03-2c54f1a66867"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 17:28:22 crc kubenswrapper[4720]: I1013 17:28:22.767647 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b9cfa7f-e80a-42b8-b6f0-239165447812-kube-api-access-wsgtw" (OuterVolumeSpecName: "kube-api-access-wsgtw") pod "5b9cfa7f-e80a-42b8-b6f0-239165447812" (UID: "5b9cfa7f-e80a-42b8-b6f0-239165447812"). InnerVolumeSpecName "kube-api-access-wsgtw". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:28:22 crc kubenswrapper[4720]: I1013 17:28:22.771639 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45a4b9f0-8d20-4a69-bd03-2c54f1a66867-kube-api-access-62dvw" (OuterVolumeSpecName: "kube-api-access-62dvw") pod "45a4b9f0-8d20-4a69-bd03-2c54f1a66867" (UID: "45a4b9f0-8d20-4a69-bd03-2c54f1a66867"). InnerVolumeSpecName "kube-api-access-62dvw". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:28:22 crc kubenswrapper[4720]: I1013 17:28:22.815773 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45a4b9f0-8d20-4a69-bd03-2c54f1a66867-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "45a4b9f0-8d20-4a69-bd03-2c54f1a66867" (UID: "45a4b9f0-8d20-4a69-bd03-2c54f1a66867"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 17:28:22 crc kubenswrapper[4720]: I1013 17:28:22.856462 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ec46228-40b7-4dd0-b773-59e2b088ef17-utilities\") pod \"4ec46228-40b7-4dd0-b773-59e2b088ef17\" (UID: \"4ec46228-40b7-4dd0-b773-59e2b088ef17\") " Oct 13 17:28:22 crc kubenswrapper[4720]: I1013 17:28:22.856520 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hfv6h\" (UniqueName: \"kubernetes.io/projected/cba388bf-d5f4-4a4e-8add-d8e4d7489f14-kube-api-access-hfv6h\") pod \"cba388bf-d5f4-4a4e-8add-d8e4d7489f14\" (UID: \"cba388bf-d5f4-4a4e-8add-d8e4d7489f14\") " Oct 13 17:28:22 crc kubenswrapper[4720]: I1013 17:28:22.856541 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cca499e4-939e-4ba7-b98b-2965e14da5c3-utilities\") pod \"cca499e4-939e-4ba7-b98b-2965e14da5c3\" (UID: \"cca499e4-939e-4ba7-b98b-2965e14da5c3\") " Oct 13 17:28:22 crc kubenswrapper[4720]: I1013 17:28:22.856556 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h74rp\" (UniqueName: \"kubernetes.io/projected/4ec46228-40b7-4dd0-b773-59e2b088ef17-kube-api-access-h74rp\") pod \"4ec46228-40b7-4dd0-b773-59e2b088ef17\" (UID: \"4ec46228-40b7-4dd0-b773-59e2b088ef17\") " Oct 13 17:28:22 crc kubenswrapper[4720]: I1013 17:28:22.856607 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ec46228-40b7-4dd0-b773-59e2b088ef17-catalog-content\") pod \"4ec46228-40b7-4dd0-b773-59e2b088ef17\" (UID: \"4ec46228-40b7-4dd0-b773-59e2b088ef17\") " Oct 13 17:28:22 crc kubenswrapper[4720]: I1013 17:28:22.856627 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cba388bf-d5f4-4a4e-8add-d8e4d7489f14-utilities\") pod \"cba388bf-d5f4-4a4e-8add-d8e4d7489f14\" (UID: \"cba388bf-d5f4-4a4e-8add-d8e4d7489f14\") " Oct 13 17:28:22 crc kubenswrapper[4720]: I1013 17:28:22.856675 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rgrwm\" (UniqueName: \"kubernetes.io/projected/cca499e4-939e-4ba7-b98b-2965e14da5c3-kube-api-access-rgrwm\") pod \"cca499e4-939e-4ba7-b98b-2965e14da5c3\" (UID: \"cca499e4-939e-4ba7-b98b-2965e14da5c3\") " Oct 13 17:28:22 crc kubenswrapper[4720]: I1013 17:28:22.856699 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cca499e4-939e-4ba7-b98b-2965e14da5c3-catalog-content\") pod \"cca499e4-939e-4ba7-b98b-2965e14da5c3\" (UID: \"cca499e4-939e-4ba7-b98b-2965e14da5c3\") " Oct 13 17:28:22 crc kubenswrapper[4720]: I1013 17:28:22.856731 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cba388bf-d5f4-4a4e-8add-d8e4d7489f14-catalog-content\") pod \"cba388bf-d5f4-4a4e-8add-d8e4d7489f14\" (UID: \"cba388bf-d5f4-4a4e-8add-d8e4d7489f14\") " Oct 13 17:28:22 crc kubenswrapper[4720]: I1013 17:28:22.856911 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45a4b9f0-8d20-4a69-bd03-2c54f1a66867-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 13 17:28:22 crc 
kubenswrapper[4720]: I1013 17:28:22.856924 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45a4b9f0-8d20-4a69-bd03-2c54f1a66867-utilities\") on node \"crc\" DevicePath \"\"" Oct 13 17:28:22 crc kubenswrapper[4720]: I1013 17:28:22.856933 4720 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5b9cfa7f-e80a-42b8-b6f0-239165447812-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Oct 13 17:28:22 crc kubenswrapper[4720]: I1013 17:28:22.856944 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wsgtw\" (UniqueName: \"kubernetes.io/projected/5b9cfa7f-e80a-42b8-b6f0-239165447812-kube-api-access-wsgtw\") on node \"crc\" DevicePath \"\"" Oct 13 17:28:22 crc kubenswrapper[4720]: I1013 17:28:22.856953 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-62dvw\" (UniqueName: \"kubernetes.io/projected/45a4b9f0-8d20-4a69-bd03-2c54f1a66867-kube-api-access-62dvw\") on node \"crc\" DevicePath \"\"" Oct 13 17:28:22 crc kubenswrapper[4720]: I1013 17:28:22.856962 4720 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5b9cfa7f-e80a-42b8-b6f0-239165447812-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 13 17:28:22 crc kubenswrapper[4720]: I1013 17:28:22.857155 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ec46228-40b7-4dd0-b773-59e2b088ef17-utilities" (OuterVolumeSpecName: "utilities") pod "4ec46228-40b7-4dd0-b773-59e2b088ef17" (UID: "4ec46228-40b7-4dd0-b773-59e2b088ef17"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 17:28:22 crc kubenswrapper[4720]: I1013 17:28:22.857279 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cca499e4-939e-4ba7-b98b-2965e14da5c3-utilities" (OuterVolumeSpecName: "utilities") pod "cca499e4-939e-4ba7-b98b-2965e14da5c3" (UID: "cca499e4-939e-4ba7-b98b-2965e14da5c3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 17:28:22 crc kubenswrapper[4720]: I1013 17:28:22.857742 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cba388bf-d5f4-4a4e-8add-d8e4d7489f14-utilities" (OuterVolumeSpecName: "utilities") pod "cba388bf-d5f4-4a4e-8add-d8e4d7489f14" (UID: "cba388bf-d5f4-4a4e-8add-d8e4d7489f14"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 17:28:22 crc kubenswrapper[4720]: I1013 17:28:22.860797 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cca499e4-939e-4ba7-b98b-2965e14da5c3-kube-api-access-rgrwm" (OuterVolumeSpecName: "kube-api-access-rgrwm") pod "cca499e4-939e-4ba7-b98b-2965e14da5c3" (UID: "cca499e4-939e-4ba7-b98b-2965e14da5c3"). InnerVolumeSpecName "kube-api-access-rgrwm". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:28:22 crc kubenswrapper[4720]: I1013 17:28:22.861113 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ec46228-40b7-4dd0-b773-59e2b088ef17-kube-api-access-h74rp" (OuterVolumeSpecName: "kube-api-access-h74rp") pod "4ec46228-40b7-4dd0-b773-59e2b088ef17" (UID: "4ec46228-40b7-4dd0-b773-59e2b088ef17"). InnerVolumeSpecName "kube-api-access-h74rp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:28:22 crc kubenswrapper[4720]: I1013 17:28:22.862346 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cba388bf-d5f4-4a4e-8add-d8e4d7489f14-kube-api-access-hfv6h" (OuterVolumeSpecName: "kube-api-access-hfv6h") pod "cba388bf-d5f4-4a4e-8add-d8e4d7489f14" (UID: "cba388bf-d5f4-4a4e-8add-d8e4d7489f14"). InnerVolumeSpecName "kube-api-access-hfv6h". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:28:22 crc kubenswrapper[4720]: I1013 17:28:22.871259 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cba388bf-d5f4-4a4e-8add-d8e4d7489f14-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cba388bf-d5f4-4a4e-8add-d8e4d7489f14" (UID: "cba388bf-d5f4-4a4e-8add-d8e4d7489f14"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 17:28:22 crc kubenswrapper[4720]: I1013 17:28:22.920439 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ec46228-40b7-4dd0-b773-59e2b088ef17-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4ec46228-40b7-4dd0-b773-59e2b088ef17" (UID: "4ec46228-40b7-4dd0-b773-59e2b088ef17"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 17:28:22 crc kubenswrapper[4720]: I1013 17:28:22.940649 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cca499e4-939e-4ba7-b98b-2965e14da5c3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cca499e4-939e-4ba7-b98b-2965e14da5c3" (UID: "cca499e4-939e-4ba7-b98b-2965e14da5c3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 17:28:22 crc kubenswrapper[4720]: I1013 17:28:22.958077 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cca499e4-939e-4ba7-b98b-2965e14da5c3-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 13 17:28:22 crc kubenswrapper[4720]: I1013 17:28:22.958126 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cba388bf-d5f4-4a4e-8add-d8e4d7489f14-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 13 17:28:22 crc kubenswrapper[4720]: I1013 17:28:22.958140 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ec46228-40b7-4dd0-b773-59e2b088ef17-utilities\") on node \"crc\" DevicePath \"\"" Oct 13 17:28:22 crc kubenswrapper[4720]: I1013 17:28:22.958153 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hfv6h\" (UniqueName: \"kubernetes.io/projected/cba388bf-d5f4-4a4e-8add-d8e4d7489f14-kube-api-access-hfv6h\") on node \"crc\" DevicePath \"\"" Oct 13 17:28:22 crc kubenswrapper[4720]: I1013 17:28:22.958168 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h74rp\" (UniqueName: \"kubernetes.io/projected/4ec46228-40b7-4dd0-b773-59e2b088ef17-kube-api-access-h74rp\") on node \"crc\" DevicePath \"\"" Oct 13 17:28:22 crc kubenswrapper[4720]: I1013 17:28:22.958179 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cca499e4-939e-4ba7-b98b-2965e14da5c3-utilities\") on node \"crc\" DevicePath \"\"" Oct 13 17:28:22 crc kubenswrapper[4720]: I1013 17:28:22.958212 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ec46228-40b7-4dd0-b773-59e2b088ef17-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 13 17:28:22 crc kubenswrapper[4720]: I1013 17:28:22.958225 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cba388bf-d5f4-4a4e-8add-d8e4d7489f14-utilities\") on node \"crc\" DevicePath \"\"" Oct 13 17:28:22 crc kubenswrapper[4720]: I1013 17:28:22.958236 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rgrwm\" (UniqueName: \"kubernetes.io/projected/cca499e4-939e-4ba7-b98b-2965e14da5c3-kube-api-access-rgrwm\") on node \"crc\" DevicePath \"\"" Oct 13 17:28:23 crc kubenswrapper[4720]: I1013 17:28:23.278836 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-xpdm2" event={"ID":"5b9cfa7f-e80a-42b8-b6f0-239165447812","Type":"ContainerDied","Data":"e9464d85a814f7037d9ce53de7cdb05e479434ca1fb41dec8620b72efc11a98b"} Oct 13 17:28:23 crc kubenswrapper[4720]: I1013 17:28:23.278896 4720 scope.go:117] "RemoveContainer" containerID="d903ac75d4bb20e2c8d8bdbf3c6afe828dd45ea657c0f7c6ef565f941a5e1515" Oct 13 17:28:23 crc kubenswrapper[4720]: I1013 17:28:23.278836 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-xpdm2" Oct 13 17:28:23 crc kubenswrapper[4720]: I1013 17:28:23.282150 4720 generic.go:334] "Generic (PLEG): container finished" podID="cca499e4-939e-4ba7-b98b-2965e14da5c3" containerID="acd6732fa35e11a229552026ed184ffe35dee5797908a4230bdc446c7bd0d5a0" exitCode=0 Oct 13 17:28:23 crc kubenswrapper[4720]: I1013 17:28:23.282232 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-skh6q" Oct 13 17:28:23 crc kubenswrapper[4720]: I1013 17:28:23.282253 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-skh6q" event={"ID":"cca499e4-939e-4ba7-b98b-2965e14da5c3","Type":"ContainerDied","Data":"acd6732fa35e11a229552026ed184ffe35dee5797908a4230bdc446c7bd0d5a0"} Oct 13 17:28:23 crc kubenswrapper[4720]: I1013 17:28:23.282338 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-skh6q" event={"ID":"cca499e4-939e-4ba7-b98b-2965e14da5c3","Type":"ContainerDied","Data":"67bfcb29092f7b4b0e1ae9b84aba87c0fd09ac37a432e5f402349c2e7ae402db"} Oct 13 17:28:23 crc kubenswrapper[4720]: I1013 17:28:23.285729 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-85tbj" event={"ID":"4ec46228-40b7-4dd0-b773-59e2b088ef17","Type":"ContainerDied","Data":"e2d34bce12acaa7aa7f392567b324b2e2eb04b0c482579607b97a61e7704a85e"} Oct 13 17:28:23 crc kubenswrapper[4720]: I1013 17:28:23.285773 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-85tbj" Oct 13 17:28:23 crc kubenswrapper[4720]: I1013 17:28:23.287465 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-wgslm" event={"ID":"8524b73b-8e30-4e35-bc36-1b3c9e911ad0","Type":"ContainerStarted","Data":"e0fc7f0d3a30634f047ffc541d7cf856466f398f07240546c1d8ea95a69db2a1"} Oct 13 17:28:23 crc kubenswrapper[4720]: I1013 17:28:23.287515 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-wgslm" event={"ID":"8524b73b-8e30-4e35-bc36-1b3c9e911ad0","Type":"ContainerStarted","Data":"f3253e473d62ea6ff8a211bff38d6f3020e71f594f04b727ad0c423f39a91bf3"} Oct 13 17:28:23 crc kubenswrapper[4720]: I1013 17:28:23.287864 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-wgslm" Oct 13 17:28:23 crc kubenswrapper[4720]: I1013 17:28:23.291028 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6n8tc" Oct 13 17:28:23 crc kubenswrapper[4720]: I1013 17:28:23.291072 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6n8tc" event={"ID":"45a4b9f0-8d20-4a69-bd03-2c54f1a66867","Type":"ContainerDied","Data":"5cac65b383ce4c5880558dd1f8f5cd81497aa8dc9624a3866e93157d58c3ce72"} Oct 13 17:28:23 crc kubenswrapper[4720]: I1013 17:28:23.293965 4720 scope.go:117] "RemoveContainer" containerID="acd6732fa35e11a229552026ed184ffe35dee5797908a4230bdc446c7bd0d5a0" Oct 13 17:28:23 crc kubenswrapper[4720]: I1013 17:28:23.294235 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dp5pr" event={"ID":"cba388bf-d5f4-4a4e-8add-d8e4d7489f14","Type":"ContainerDied","Data":"58338dcc5638e39a983be7b708d59d0394e88673f6992b54f3ef4a9254985c97"} Oct 13 17:28:23 crc kubenswrapper[4720]: I1013 17:28:23.294332 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dp5pr" Oct 13 17:28:23 crc kubenswrapper[4720]: I1013 17:28:23.294532 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-wgslm" Oct 13 17:28:23 crc kubenswrapper[4720]: I1013 17:28:23.303640 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xpdm2"] Oct 13 17:28:23 crc kubenswrapper[4720]: I1013 17:28:23.307555 4720 scope.go:117] "RemoveContainer" containerID="2ed0c853721c429187d42c4b8c88c42d897ed63f340cfc7558bdc44e908ad230" Oct 13 17:28:23 crc kubenswrapper[4720]: I1013 17:28:23.307596 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xpdm2"] Oct 13 17:28:23 crc kubenswrapper[4720]: I1013 17:28:23.330311 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-skh6q"] Oct 13 17:28:23 crc kubenswrapper[4720]: I1013 17:28:23.333717 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-skh6q"] Oct 13 17:28:23 crc kubenswrapper[4720]: I1013 17:28:23.344578 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-85tbj"] Oct 13 17:28:23 crc kubenswrapper[4720]: I1013 17:28:23.348488 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-85tbj"] Oct 13 17:28:23 crc kubenswrapper[4720]: I1013 17:28:23.357055 4720 scope.go:117] "RemoveContainer" containerID="427183d1775f8d339d18c7ad16a9c53f5af7053536a5f706cb51cb1255670629" Oct 13 17:28:23 crc kubenswrapper[4720]: I1013 17:28:23.359253 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dp5pr"] Oct 13 17:28:23 crc kubenswrapper[4720]: I1013 17:28:23.361453 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-dp5pr"] Oct 13 17:28:23 crc kubenswrapper[4720]: I1013 17:28:23.372788 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-wgslm" podStartSLOduration=1.372776331 podStartE2EDuration="1.372776331s" podCreationTimestamp="2025-10-13 17:28:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:28:23.37118015 +0000 UTC 
m=+248.828430312" watchObservedRunningTime="2025-10-13 17:28:23.372776331 +0000 UTC m=+248.830026463" Oct 13 17:28:23 crc kubenswrapper[4720]: I1013 17:28:23.383046 4720 scope.go:117] "RemoveContainer" containerID="acd6732fa35e11a229552026ed184ffe35dee5797908a4230bdc446c7bd0d5a0" Oct 13 17:28:23 crc kubenswrapper[4720]: E1013 17:28:23.383709 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"acd6732fa35e11a229552026ed184ffe35dee5797908a4230bdc446c7bd0d5a0\": container with ID starting with acd6732fa35e11a229552026ed184ffe35dee5797908a4230bdc446c7bd0d5a0 not found: ID does not exist" containerID="acd6732fa35e11a229552026ed184ffe35dee5797908a4230bdc446c7bd0d5a0" Oct 13 17:28:23 crc kubenswrapper[4720]: I1013 17:28:23.383754 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acd6732fa35e11a229552026ed184ffe35dee5797908a4230bdc446c7bd0d5a0"} err="failed to get container status \"acd6732fa35e11a229552026ed184ffe35dee5797908a4230bdc446c7bd0d5a0\": rpc error: code = NotFound desc = could not find container \"acd6732fa35e11a229552026ed184ffe35dee5797908a4230bdc446c7bd0d5a0\": container with ID starting with acd6732fa35e11a229552026ed184ffe35dee5797908a4230bdc446c7bd0d5a0 not found: ID does not exist" Oct 13 17:28:23 crc kubenswrapper[4720]: I1013 17:28:23.383780 4720 scope.go:117] "RemoveContainer" containerID="2ed0c853721c429187d42c4b8c88c42d897ed63f340cfc7558bdc44e908ad230" Oct 13 17:28:23 crc kubenswrapper[4720]: E1013 17:28:23.389170 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ed0c853721c429187d42c4b8c88c42d897ed63f340cfc7558bdc44e908ad230\": container with ID starting with 2ed0c853721c429187d42c4b8c88c42d897ed63f340cfc7558bdc44e908ad230 not found: ID does not exist" containerID="2ed0c853721c429187d42c4b8c88c42d897ed63f340cfc7558bdc44e908ad230" Oct 13 17:28:23 crc kubenswrapper[4720]: I1013 17:28:23.389227 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ed0c853721c429187d42c4b8c88c42d897ed63f340cfc7558bdc44e908ad230"} err="failed to get container status \"2ed0c853721c429187d42c4b8c88c42d897ed63f340cfc7558bdc44e908ad230\": rpc error: code = NotFound desc = could not find container \"2ed0c853721c429187d42c4b8c88c42d897ed63f340cfc7558bdc44e908ad230\": container with ID starting with 2ed0c853721c429187d42c4b8c88c42d897ed63f340cfc7558bdc44e908ad230 not found: ID does not exist" Oct 13 17:28:23 crc kubenswrapper[4720]: I1013 17:28:23.389244 4720 scope.go:117] "RemoveContainer" containerID="427183d1775f8d339d18c7ad16a9c53f5af7053536a5f706cb51cb1255670629" Oct 13 17:28:23 crc kubenswrapper[4720]: E1013 17:28:23.396280 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"427183d1775f8d339d18c7ad16a9c53f5af7053536a5f706cb51cb1255670629\": container with ID starting with 427183d1775f8d339d18c7ad16a9c53f5af7053536a5f706cb51cb1255670629 not found: ID does not exist" containerID="427183d1775f8d339d18c7ad16a9c53f5af7053536a5f706cb51cb1255670629" Oct 13 17:28:23 crc kubenswrapper[4720]: I1013 17:28:23.396403 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"427183d1775f8d339d18c7ad16a9c53f5af7053536a5f706cb51cb1255670629"} err="failed to get container status 
\"427183d1775f8d339d18c7ad16a9c53f5af7053536a5f706cb51cb1255670629\": rpc error: code = NotFound desc = could not find container \"427183d1775f8d339d18c7ad16a9c53f5af7053536a5f706cb51cb1255670629\": container with ID starting with 427183d1775f8d339d18c7ad16a9c53f5af7053536a5f706cb51cb1255670629 not found: ID does not exist" Oct 13 17:28:23 crc kubenswrapper[4720]: I1013 17:28:23.396472 4720 scope.go:117] "RemoveContainer" containerID="c98c132ce99eded322c8469d1679988dbcf6133d7958fe46a0a0873132c6d7f1" Oct 13 17:28:23 crc kubenswrapper[4720]: I1013 17:28:23.396594 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6n8tc"] Oct 13 17:28:23 crc kubenswrapper[4720]: I1013 17:28:23.399288 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6n8tc"] Oct 13 17:28:23 crc kubenswrapper[4720]: I1013 17:28:23.416638 4720 scope.go:117] "RemoveContainer" containerID="4bca547a8c401420848cad24c228e6afa1c703ff12323a5cf36959502b5072d1" Oct 13 17:28:23 crc kubenswrapper[4720]: I1013 17:28:23.431759 4720 scope.go:117] "RemoveContainer" containerID="3f7a4801a12c4bcad073fec39c0901777fad1bdf0a8c72ee565664766dacf083" Oct 13 17:28:23 crc kubenswrapper[4720]: I1013 17:28:23.444857 4720 scope.go:117] "RemoveContainer" containerID="cf65fe1c986ef26b2cbcd07d95fc4524019664b900ead71f49b7b97a7370ccac" Oct 13 17:28:23 crc kubenswrapper[4720]: I1013 17:28:23.456113 4720 scope.go:117] "RemoveContainer" containerID="89923e28ff7b52cae40ac9293ef7a90e6cb8e7616d954b82130dddad23d6ae55" Oct 13 17:28:23 crc kubenswrapper[4720]: I1013 17:28:23.468014 4720 scope.go:117] "RemoveContainer" containerID="392b3b63152f91c71d0d17a5bc672ed9d54fad598b7732a68363c0d87ceb00c0" Oct 13 17:28:23 crc kubenswrapper[4720]: I1013 17:28:23.479901 4720 scope.go:117] "RemoveContainer" containerID="b1a3a0262a0118f50aac13c998dc0a0112a4fd47dfd4aa7ef890f7df6b4556fd" Oct 13 17:28:23 crc kubenswrapper[4720]: I1013 17:28:23.489362 4720 scope.go:117] "RemoveContainer" containerID="16966728b322fb302e5cc8d1cbd3329c3123673d9d7834071a6b2a14ae159e56" Oct 13 17:28:23 crc kubenswrapper[4720]: I1013 17:28:23.499063 4720 scope.go:117] "RemoveContainer" containerID="41ce6f38cc9fe7460d5633aea179b70f95795f553d935a0122ac68ea30a9e46c" Oct 13 17:28:24 crc kubenswrapper[4720]: I1013 17:28:24.317068 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-69hwq"] Oct 13 17:28:24 crc kubenswrapper[4720]: E1013 17:28:24.317269 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45a4b9f0-8d20-4a69-bd03-2c54f1a66867" containerName="extract-utilities" Oct 13 17:28:24 crc kubenswrapper[4720]: I1013 17:28:24.317282 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="45a4b9f0-8d20-4a69-bd03-2c54f1a66867" containerName="extract-utilities" Oct 13 17:28:24 crc kubenswrapper[4720]: E1013 17:28:24.317292 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45a4b9f0-8d20-4a69-bd03-2c54f1a66867" containerName="extract-content" Oct 13 17:28:24 crc kubenswrapper[4720]: I1013 17:28:24.317299 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="45a4b9f0-8d20-4a69-bd03-2c54f1a66867" containerName="extract-content" Oct 13 17:28:24 crc kubenswrapper[4720]: E1013 17:28:24.317308 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cca499e4-939e-4ba7-b98b-2965e14da5c3" containerName="extract-utilities" Oct 13 17:28:24 crc kubenswrapper[4720]: I1013 17:28:24.317314 4720 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="cca499e4-939e-4ba7-b98b-2965e14da5c3" containerName="extract-utilities" Oct 13 17:28:24 crc kubenswrapper[4720]: E1013 17:28:24.317323 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45a4b9f0-8d20-4a69-bd03-2c54f1a66867" containerName="registry-server" Oct 13 17:28:24 crc kubenswrapper[4720]: I1013 17:28:24.317328 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="45a4b9f0-8d20-4a69-bd03-2c54f1a66867" containerName="registry-server" Oct 13 17:28:24 crc kubenswrapper[4720]: E1013 17:28:24.317335 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cba388bf-d5f4-4a4e-8add-d8e4d7489f14" containerName="extract-utilities" Oct 13 17:28:24 crc kubenswrapper[4720]: I1013 17:28:24.317341 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="cba388bf-d5f4-4a4e-8add-d8e4d7489f14" containerName="extract-utilities" Oct 13 17:28:24 crc kubenswrapper[4720]: E1013 17:28:24.317349 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b9cfa7f-e80a-42b8-b6f0-239165447812" containerName="marketplace-operator" Oct 13 17:28:24 crc kubenswrapper[4720]: I1013 17:28:24.317355 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b9cfa7f-e80a-42b8-b6f0-239165447812" containerName="marketplace-operator" Oct 13 17:28:24 crc kubenswrapper[4720]: E1013 17:28:24.317365 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ec46228-40b7-4dd0-b773-59e2b088ef17" containerName="extract-utilities" Oct 13 17:28:24 crc kubenswrapper[4720]: I1013 17:28:24.317371 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ec46228-40b7-4dd0-b773-59e2b088ef17" containerName="extract-utilities" Oct 13 17:28:24 crc kubenswrapper[4720]: E1013 17:28:24.317380 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cca499e4-939e-4ba7-b98b-2965e14da5c3" containerName="registry-server" Oct 13 17:28:24 crc kubenswrapper[4720]: I1013 17:28:24.317386 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="cca499e4-939e-4ba7-b98b-2965e14da5c3" containerName="registry-server" Oct 13 17:28:24 crc kubenswrapper[4720]: E1013 17:28:24.317395 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cba388bf-d5f4-4a4e-8add-d8e4d7489f14" containerName="extract-content" Oct 13 17:28:24 crc kubenswrapper[4720]: I1013 17:28:24.317400 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="cba388bf-d5f4-4a4e-8add-d8e4d7489f14" containerName="extract-content" Oct 13 17:28:24 crc kubenswrapper[4720]: E1013 17:28:24.317408 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ec46228-40b7-4dd0-b773-59e2b088ef17" containerName="registry-server" Oct 13 17:28:24 crc kubenswrapper[4720]: I1013 17:28:24.317414 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ec46228-40b7-4dd0-b773-59e2b088ef17" containerName="registry-server" Oct 13 17:28:24 crc kubenswrapper[4720]: E1013 17:28:24.317422 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ec46228-40b7-4dd0-b773-59e2b088ef17" containerName="extract-content" Oct 13 17:28:24 crc kubenswrapper[4720]: I1013 17:28:24.317427 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ec46228-40b7-4dd0-b773-59e2b088ef17" containerName="extract-content" Oct 13 17:28:24 crc kubenswrapper[4720]: E1013 17:28:24.317434 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cba388bf-d5f4-4a4e-8add-d8e4d7489f14" containerName="registry-server" Oct 13 17:28:24 crc kubenswrapper[4720]: I1013 
17:28:24.317440 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="cba388bf-d5f4-4a4e-8add-d8e4d7489f14" containerName="registry-server" Oct 13 17:28:24 crc kubenswrapper[4720]: E1013 17:28:24.317450 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cca499e4-939e-4ba7-b98b-2965e14da5c3" containerName="extract-content" Oct 13 17:28:24 crc kubenswrapper[4720]: I1013 17:28:24.317456 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="cca499e4-939e-4ba7-b98b-2965e14da5c3" containerName="extract-content" Oct 13 17:28:24 crc kubenswrapper[4720]: I1013 17:28:24.317535 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="45a4b9f0-8d20-4a69-bd03-2c54f1a66867" containerName="registry-server" Oct 13 17:28:24 crc kubenswrapper[4720]: I1013 17:28:24.317543 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b9cfa7f-e80a-42b8-b6f0-239165447812" containerName="marketplace-operator" Oct 13 17:28:24 crc kubenswrapper[4720]: I1013 17:28:24.317551 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ec46228-40b7-4dd0-b773-59e2b088ef17" containerName="registry-server" Oct 13 17:28:24 crc kubenswrapper[4720]: I1013 17:28:24.317558 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="cba388bf-d5f4-4a4e-8add-d8e4d7489f14" containerName="registry-server" Oct 13 17:28:24 crc kubenswrapper[4720]: I1013 17:28:24.317568 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="cca499e4-939e-4ba7-b98b-2965e14da5c3" containerName="registry-server" Oct 13 17:28:24 crc kubenswrapper[4720]: I1013 17:28:24.318891 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-69hwq" Oct 13 17:28:24 crc kubenswrapper[4720]: I1013 17:28:24.321984 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Oct 13 17:28:24 crc kubenswrapper[4720]: I1013 17:28:24.325643 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-69hwq"] Oct 13 17:28:24 crc kubenswrapper[4720]: I1013 17:28:24.477598 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dab7d1c1-75af-4ffa-a15b-2cc516acfabf-catalog-content\") pod \"redhat-marketplace-69hwq\" (UID: \"dab7d1c1-75af-4ffa-a15b-2cc516acfabf\") " pod="openshift-marketplace/redhat-marketplace-69hwq" Oct 13 17:28:24 crc kubenswrapper[4720]: I1013 17:28:24.477650 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4847x\" (UniqueName: \"kubernetes.io/projected/dab7d1c1-75af-4ffa-a15b-2cc516acfabf-kube-api-access-4847x\") pod \"redhat-marketplace-69hwq\" (UID: \"dab7d1c1-75af-4ffa-a15b-2cc516acfabf\") " pod="openshift-marketplace/redhat-marketplace-69hwq" Oct 13 17:28:24 crc kubenswrapper[4720]: I1013 17:28:24.477673 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dab7d1c1-75af-4ffa-a15b-2cc516acfabf-utilities\") pod \"redhat-marketplace-69hwq\" (UID: \"dab7d1c1-75af-4ffa-a15b-2cc516acfabf\") " pod="openshift-marketplace/redhat-marketplace-69hwq" Oct 13 17:28:24 crc kubenswrapper[4720]: I1013 17:28:24.514731 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mdzhd"] Oct 13 17:28:24 crc 
Oct 13 17:28:24 crc kubenswrapper[4720]: I1013 17:28:24.515608 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mdzhd"
Oct 13 17:28:24 crc kubenswrapper[4720]: I1013 17:28:24.517879 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Oct 13 17:28:24 crc kubenswrapper[4720]: I1013 17:28:24.527295 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mdzhd"]
Oct 13 17:28:24 crc kubenswrapper[4720]: I1013 17:28:24.579357 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dab7d1c1-75af-4ffa-a15b-2cc516acfabf-catalog-content\") pod \"redhat-marketplace-69hwq\" (UID: \"dab7d1c1-75af-4ffa-a15b-2cc516acfabf\") " pod="openshift-marketplace/redhat-marketplace-69hwq"
Oct 13 17:28:24 crc kubenswrapper[4720]: I1013 17:28:24.579440 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4847x\" (UniqueName: \"kubernetes.io/projected/dab7d1c1-75af-4ffa-a15b-2cc516acfabf-kube-api-access-4847x\") pod \"redhat-marketplace-69hwq\" (UID: \"dab7d1c1-75af-4ffa-a15b-2cc516acfabf\") " pod="openshift-marketplace/redhat-marketplace-69hwq"
Oct 13 17:28:24 crc kubenswrapper[4720]: I1013 17:28:24.579526 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dab7d1c1-75af-4ffa-a15b-2cc516acfabf-utilities\") pod \"redhat-marketplace-69hwq\" (UID: \"dab7d1c1-75af-4ffa-a15b-2cc516acfabf\") " pod="openshift-marketplace/redhat-marketplace-69hwq"
Oct 13 17:28:24 crc kubenswrapper[4720]: I1013 17:28:24.579832 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dab7d1c1-75af-4ffa-a15b-2cc516acfabf-catalog-content\") pod \"redhat-marketplace-69hwq\" (UID: \"dab7d1c1-75af-4ffa-a15b-2cc516acfabf\") " pod="openshift-marketplace/redhat-marketplace-69hwq"
Oct 13 17:28:24 crc kubenswrapper[4720]: I1013 17:28:24.579997 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dab7d1c1-75af-4ffa-a15b-2cc516acfabf-utilities\") pod \"redhat-marketplace-69hwq\" (UID: \"dab7d1c1-75af-4ffa-a15b-2cc516acfabf\") " pod="openshift-marketplace/redhat-marketplace-69hwq"
Oct 13 17:28:24 crc kubenswrapper[4720]: I1013 17:28:24.610038 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4847x\" (UniqueName: \"kubernetes.io/projected/dab7d1c1-75af-4ffa-a15b-2cc516acfabf-kube-api-access-4847x\") pod \"redhat-marketplace-69hwq\" (UID: \"dab7d1c1-75af-4ffa-a15b-2cc516acfabf\") " pod="openshift-marketplace/redhat-marketplace-69hwq"
Oct 13 17:28:24 crc kubenswrapper[4720]: I1013 17:28:24.636504 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-69hwq"
Oct 13 17:28:24 crc kubenswrapper[4720]: I1013 17:28:24.680857 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54894eb4-2aeb-4c93-b8d7-0e22213452f5-utilities\") pod \"redhat-operators-mdzhd\" (UID: \"54894eb4-2aeb-4c93-b8d7-0e22213452f5\") " pod="openshift-marketplace/redhat-operators-mdzhd"
Oct 13 17:28:24 crc kubenswrapper[4720]: I1013 17:28:24.681448 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ns6m\" (UniqueName: \"kubernetes.io/projected/54894eb4-2aeb-4c93-b8d7-0e22213452f5-kube-api-access-6ns6m\") pod \"redhat-operators-mdzhd\" (UID: \"54894eb4-2aeb-4c93-b8d7-0e22213452f5\") " pod="openshift-marketplace/redhat-operators-mdzhd"
Oct 13 17:28:24 crc kubenswrapper[4720]: I1013 17:28:24.681496 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54894eb4-2aeb-4c93-b8d7-0e22213452f5-catalog-content\") pod \"redhat-operators-mdzhd\" (UID: \"54894eb4-2aeb-4c93-b8d7-0e22213452f5\") " pod="openshift-marketplace/redhat-operators-mdzhd"
Oct 13 17:28:24 crc kubenswrapper[4720]: I1013 17:28:24.782439 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ns6m\" (UniqueName: \"kubernetes.io/projected/54894eb4-2aeb-4c93-b8d7-0e22213452f5-kube-api-access-6ns6m\") pod \"redhat-operators-mdzhd\" (UID: \"54894eb4-2aeb-4c93-b8d7-0e22213452f5\") " pod="openshift-marketplace/redhat-operators-mdzhd"
Oct 13 17:28:24 crc kubenswrapper[4720]: I1013 17:28:24.782515 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54894eb4-2aeb-4c93-b8d7-0e22213452f5-catalog-content\") pod \"redhat-operators-mdzhd\" (UID: \"54894eb4-2aeb-4c93-b8d7-0e22213452f5\") " pod="openshift-marketplace/redhat-operators-mdzhd"
Oct 13 17:28:24 crc kubenswrapper[4720]: I1013 17:28:24.782578 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54894eb4-2aeb-4c93-b8d7-0e22213452f5-utilities\") pod \"redhat-operators-mdzhd\" (UID: \"54894eb4-2aeb-4c93-b8d7-0e22213452f5\") " pod="openshift-marketplace/redhat-operators-mdzhd"
Oct 13 17:28:24 crc kubenswrapper[4720]: I1013 17:28:24.783150 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54894eb4-2aeb-4c93-b8d7-0e22213452f5-catalog-content\") pod \"redhat-operators-mdzhd\" (UID: \"54894eb4-2aeb-4c93-b8d7-0e22213452f5\") " pod="openshift-marketplace/redhat-operators-mdzhd"
Oct 13 17:28:24 crc kubenswrapper[4720]: I1013 17:28:24.783395 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54894eb4-2aeb-4c93-b8d7-0e22213452f5-utilities\") pod \"redhat-operators-mdzhd\" (UID: \"54894eb4-2aeb-4c93-b8d7-0e22213452f5\") " pod="openshift-marketplace/redhat-operators-mdzhd"
Oct 13 17:28:24 crc kubenswrapper[4720]: I1013 17:28:24.805906 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ns6m\" (UniqueName: \"kubernetes.io/projected/54894eb4-2aeb-4c93-b8d7-0e22213452f5-kube-api-access-6ns6m\") pod \"redhat-operators-mdzhd\" (UID: \"54894eb4-2aeb-4c93-b8d7-0e22213452f5\") " pod="openshift-marketplace/redhat-operators-mdzhd"
Oct 13 17:28:24 crc kubenswrapper[4720]: I1013 17:28:24.844064 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mdzhd"
Oct 13 17:28:24 crc kubenswrapper[4720]: I1013 17:28:24.848495 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-69hwq"]
Oct 13 17:28:25 crc kubenswrapper[4720]: I1013 17:28:25.021883 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mdzhd"]
Oct 13 17:28:25 crc kubenswrapper[4720]: W1013 17:28:25.101344 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod54894eb4_2aeb_4c93_b8d7_0e22213452f5.slice/crio-61086a2b53d4ee826d77f03644afab0aa70a9d57bc63c76e9a27855be394ce9c WatchSource:0}: Error finding container 61086a2b53d4ee826d77f03644afab0aa70a9d57bc63c76e9a27855be394ce9c: Status 404 returned error can't find the container with id 61086a2b53d4ee826d77f03644afab0aa70a9d57bc63c76e9a27855be394ce9c
Oct 13 17:28:25 crc kubenswrapper[4720]: I1013 17:28:25.182884 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45a4b9f0-8d20-4a69-bd03-2c54f1a66867" path="/var/lib/kubelet/pods/45a4b9f0-8d20-4a69-bd03-2c54f1a66867/volumes"
Oct 13 17:28:25 crc kubenswrapper[4720]: I1013 17:28:25.183496 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ec46228-40b7-4dd0-b773-59e2b088ef17" path="/var/lib/kubelet/pods/4ec46228-40b7-4dd0-b773-59e2b088ef17/volumes"
Oct 13 17:28:25 crc kubenswrapper[4720]: I1013 17:28:25.184088 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b9cfa7f-e80a-42b8-b6f0-239165447812" path="/var/lib/kubelet/pods/5b9cfa7f-e80a-42b8-b6f0-239165447812/volumes"
Oct 13 17:28:25 crc kubenswrapper[4720]: I1013 17:28:25.185107 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cba388bf-d5f4-4a4e-8add-d8e4d7489f14" path="/var/lib/kubelet/pods/cba388bf-d5f4-4a4e-8add-d8e4d7489f14/volumes"
Oct 13 17:28:25 crc kubenswrapper[4720]: I1013 17:28:25.186108 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cca499e4-939e-4ba7-b98b-2965e14da5c3" path="/var/lib/kubelet/pods/cca499e4-939e-4ba7-b98b-2965e14da5c3/volumes"
Oct 13 17:28:25 crc kubenswrapper[4720]: I1013 17:28:25.319090 4720 generic.go:334] "Generic (PLEG): container finished" podID="54894eb4-2aeb-4c93-b8d7-0e22213452f5" containerID="32c6d4ab64ce6bf48fa73ad1c2fd5813406188ad889cfbe389866d07615eb620" exitCode=0
Oct 13 17:28:25 crc kubenswrapper[4720]: I1013 17:28:25.319202 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mdzhd" event={"ID":"54894eb4-2aeb-4c93-b8d7-0e22213452f5","Type":"ContainerDied","Data":"32c6d4ab64ce6bf48fa73ad1c2fd5813406188ad889cfbe389866d07615eb620"}
Oct 13 17:28:25 crc kubenswrapper[4720]: I1013 17:28:25.319812 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mdzhd" event={"ID":"54894eb4-2aeb-4c93-b8d7-0e22213452f5","Type":"ContainerStarted","Data":"61086a2b53d4ee826d77f03644afab0aa70a9d57bc63c76e9a27855be394ce9c"}
Oct 13 17:28:25 crc kubenswrapper[4720]: I1013 17:28:25.322089 4720 generic.go:334] "Generic (PLEG): container finished" podID="dab7d1c1-75af-4ffa-a15b-2cc516acfabf" containerID="9c0302cb681c30b3478af98ad4ca2c6dea1907272fec5b9808620da5a424d0c8" exitCode=0
Oct 13 17:28:25 crc kubenswrapper[4720]: I1013 17:28:25.322202 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-69hwq" event={"ID":"dab7d1c1-75af-4ffa-a15b-2cc516acfabf","Type":"ContainerDied","Data":"9c0302cb681c30b3478af98ad4ca2c6dea1907272fec5b9808620da5a424d0c8"}
Oct 13 17:28:25 crc kubenswrapper[4720]: I1013 17:28:25.322238 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-69hwq" event={"ID":"dab7d1c1-75af-4ffa-a15b-2cc516acfabf","Type":"ContainerStarted","Data":"475cff449f2e9dd5ae4077be29bcece585214110927df325d0fe4b46f22ffb1c"}
Oct 13 17:28:26 crc kubenswrapper[4720]: I1013 17:28:26.713673 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-b6t2r"]
Oct 13 17:28:26 crc kubenswrapper[4720]: I1013 17:28:26.716494 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b6t2r"
Oct 13 17:28:26 crc kubenswrapper[4720]: I1013 17:28:26.722476 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b6t2r"]
Oct 13 17:28:26 crc kubenswrapper[4720]: I1013 17:28:26.722923 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Oct 13 17:28:26 crc kubenswrapper[4720]: I1013 17:28:26.908172 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c05230f9-fc44-4ffe-98bd-fcca7521d582-catalog-content\") pod \"certified-operators-b6t2r\" (UID: \"c05230f9-fc44-4ffe-98bd-fcca7521d582\") " pod="openshift-marketplace/certified-operators-b6t2r"
Oct 13 17:28:26 crc kubenswrapper[4720]: I1013 17:28:26.908894 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c05230f9-fc44-4ffe-98bd-fcca7521d582-utilities\") pod \"certified-operators-b6t2r\" (UID: \"c05230f9-fc44-4ffe-98bd-fcca7521d582\") " pod="openshift-marketplace/certified-operators-b6t2r"
Oct 13 17:28:26 crc kubenswrapper[4720]: I1013 17:28:26.909374 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jjrb\" (UniqueName: \"kubernetes.io/projected/c05230f9-fc44-4ffe-98bd-fcca7521d582-kube-api-access-5jjrb\") pod \"certified-operators-b6t2r\" (UID: \"c05230f9-fc44-4ffe-98bd-fcca7521d582\") " pod="openshift-marketplace/certified-operators-b6t2r"
Oct 13 17:28:26 crc kubenswrapper[4720]: I1013 17:28:26.934640 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rj2jd"]
Oct 13 17:28:26 crc kubenswrapper[4720]: I1013 17:28:26.940139 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rj2jd"]
Oct 13 17:28:26 crc kubenswrapper[4720]: I1013 17:28:26.940491 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rj2jd"
Oct 13 17:28:26 crc kubenswrapper[4720]: I1013 17:28:26.944634 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Oct 13 17:28:27 crc kubenswrapper[4720]: I1013 17:28:27.012450 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jjrb\" (UniqueName: \"kubernetes.io/projected/c05230f9-fc44-4ffe-98bd-fcca7521d582-kube-api-access-5jjrb\") pod \"certified-operators-b6t2r\" (UID: \"c05230f9-fc44-4ffe-98bd-fcca7521d582\") " pod="openshift-marketplace/certified-operators-b6t2r"
Oct 13 17:28:27 crc kubenswrapper[4720]: I1013 17:28:27.012522 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c05230f9-fc44-4ffe-98bd-fcca7521d582-catalog-content\") pod \"certified-operators-b6t2r\" (UID: \"c05230f9-fc44-4ffe-98bd-fcca7521d582\") " pod="openshift-marketplace/certified-operators-b6t2r"
Oct 13 17:28:27 crc kubenswrapper[4720]: I1013 17:28:27.012556 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c05230f9-fc44-4ffe-98bd-fcca7521d582-utilities\") pod \"certified-operators-b6t2r\" (UID: \"c05230f9-fc44-4ffe-98bd-fcca7521d582\") " pod="openshift-marketplace/certified-operators-b6t2r"
Oct 13 17:28:27 crc kubenswrapper[4720]: I1013 17:28:27.013123 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c05230f9-fc44-4ffe-98bd-fcca7521d582-utilities\") pod \"certified-operators-b6t2r\" (UID: \"c05230f9-fc44-4ffe-98bd-fcca7521d582\") " pod="openshift-marketplace/certified-operators-b6t2r"
Oct 13 17:28:27 crc kubenswrapper[4720]: I1013 17:28:27.013374 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c05230f9-fc44-4ffe-98bd-fcca7521d582-catalog-content\") pod \"certified-operators-b6t2r\" (UID: \"c05230f9-fc44-4ffe-98bd-fcca7521d582\") " pod="openshift-marketplace/certified-operators-b6t2r"
Oct 13 17:28:27 crc kubenswrapper[4720]: I1013 17:28:27.037300 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jjrb\" (UniqueName: \"kubernetes.io/projected/c05230f9-fc44-4ffe-98bd-fcca7521d582-kube-api-access-5jjrb\") pod \"certified-operators-b6t2r\" (UID: \"c05230f9-fc44-4ffe-98bd-fcca7521d582\") " pod="openshift-marketplace/certified-operators-b6t2r"
Oct 13 17:28:27 crc kubenswrapper[4720]: I1013 17:28:27.045055 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b6t2r"
Oct 13 17:28:27 crc kubenswrapper[4720]: I1013 17:28:27.113946 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7af8d26b-d10a-4be8-8773-03204f461fe3-utilities\") pod \"community-operators-rj2jd\" (UID: \"7af8d26b-d10a-4be8-8773-03204f461fe3\") " pod="openshift-marketplace/community-operators-rj2jd"
Oct 13 17:28:27 crc kubenswrapper[4720]: I1013 17:28:27.114431 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7af8d26b-d10a-4be8-8773-03204f461fe3-catalog-content\") pod \"community-operators-rj2jd\" (UID: \"7af8d26b-d10a-4be8-8773-03204f461fe3\") " pod="openshift-marketplace/community-operators-rj2jd"
Oct 13 17:28:27 crc kubenswrapper[4720]: I1013 17:28:27.114488 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8vbb\" (UniqueName: \"kubernetes.io/projected/7af8d26b-d10a-4be8-8773-03204f461fe3-kube-api-access-x8vbb\") pod \"community-operators-rj2jd\" (UID: \"7af8d26b-d10a-4be8-8773-03204f461fe3\") " pod="openshift-marketplace/community-operators-rj2jd"
Oct 13 17:28:27 crc kubenswrapper[4720]: I1013 17:28:27.215346 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7af8d26b-d10a-4be8-8773-03204f461fe3-utilities\") pod \"community-operators-rj2jd\" (UID: \"7af8d26b-d10a-4be8-8773-03204f461fe3\") " pod="openshift-marketplace/community-operators-rj2jd"
Oct 13 17:28:27 crc kubenswrapper[4720]: I1013 17:28:27.215413 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7af8d26b-d10a-4be8-8773-03204f461fe3-catalog-content\") pod \"community-operators-rj2jd\" (UID: \"7af8d26b-d10a-4be8-8773-03204f461fe3\") " pod="openshift-marketplace/community-operators-rj2jd"
Oct 13 17:28:27 crc kubenswrapper[4720]: I1013 17:28:27.215449 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8vbb\" (UniqueName: \"kubernetes.io/projected/7af8d26b-d10a-4be8-8773-03204f461fe3-kube-api-access-x8vbb\") pod \"community-operators-rj2jd\" (UID: \"7af8d26b-d10a-4be8-8773-03204f461fe3\") " pod="openshift-marketplace/community-operators-rj2jd"
Oct 13 17:28:27 crc kubenswrapper[4720]: I1013 17:28:27.216443 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7af8d26b-d10a-4be8-8773-03204f461fe3-utilities\") pod \"community-operators-rj2jd\" (UID: \"7af8d26b-d10a-4be8-8773-03204f461fe3\") " pod="openshift-marketplace/community-operators-rj2jd"
Oct 13 17:28:27 crc kubenswrapper[4720]: I1013 17:28:27.216965 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7af8d26b-d10a-4be8-8773-03204f461fe3-catalog-content\") pod \"community-operators-rj2jd\" (UID: \"7af8d26b-d10a-4be8-8773-03204f461fe3\") " pod="openshift-marketplace/community-operators-rj2jd"
Oct 13 17:28:27 crc kubenswrapper[4720]: I1013 17:28:27.236326 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8vbb\" (UniqueName: \"kubernetes.io/projected/7af8d26b-d10a-4be8-8773-03204f461fe3-kube-api-access-x8vbb\") pod \"community-operators-rj2jd\" (UID: \"7af8d26b-d10a-4be8-8773-03204f461fe3\") " pod="openshift-marketplace/community-operators-rj2jd"
Oct 13 17:28:27 crc kubenswrapper[4720]: I1013 17:28:27.272978 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rj2jd"
Oct 13 17:28:27 crc kubenswrapper[4720]: I1013 17:28:27.427410 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b6t2r"]
Oct 13 17:28:27 crc kubenswrapper[4720]: W1013 17:28:27.444425 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc05230f9_fc44_4ffe_98bd_fcca7521d582.slice/crio-4563fa0470400ee9719b1668789d51bd60d501d12f5d8a7ee9bba16ea2c1eff5 WatchSource:0}: Error finding container 4563fa0470400ee9719b1668789d51bd60d501d12f5d8a7ee9bba16ea2c1eff5: Status 404 returned error can't find the container with id 4563fa0470400ee9719b1668789d51bd60d501d12f5d8a7ee9bba16ea2c1eff5
Oct 13 17:28:27 crc kubenswrapper[4720]: I1013 17:28:27.453350 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rj2jd"]
Oct 13 17:28:27 crc kubenswrapper[4720]: W1013 17:28:27.467333 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7af8d26b_d10a_4be8_8773_03204f461fe3.slice/crio-03da046725b284c83c0c64e1f10de94d213c90fd26659b8dd68c1abbe93a65c0 WatchSource:0}: Error finding container 03da046725b284c83c0c64e1f10de94d213c90fd26659b8dd68c1abbe93a65c0: Status 404 returned error can't find the container with id 03da046725b284c83c0c64e1f10de94d213c90fd26659b8dd68c1abbe93a65c0
Oct 13 17:28:28 crc kubenswrapper[4720]: I1013 17:28:28.353799 4720 generic.go:334] "Generic (PLEG): container finished" podID="7af8d26b-d10a-4be8-8773-03204f461fe3" containerID="92ee777af00add1c32be7be0dec1a97a33aececce2eac8de6913cc47ecfda5ec" exitCode=0
Oct 13 17:28:28 crc kubenswrapper[4720]: I1013 17:28:28.354039 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rj2jd" event={"ID":"7af8d26b-d10a-4be8-8773-03204f461fe3","Type":"ContainerDied","Data":"92ee777af00add1c32be7be0dec1a97a33aececce2eac8de6913cc47ecfda5ec"}
Oct 13 17:28:28 crc kubenswrapper[4720]: I1013 17:28:28.354433 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rj2jd" event={"ID":"7af8d26b-d10a-4be8-8773-03204f461fe3","Type":"ContainerStarted","Data":"03da046725b284c83c0c64e1f10de94d213c90fd26659b8dd68c1abbe93a65c0"}
Oct 13 17:28:28 crc kubenswrapper[4720]: I1013 17:28:28.356553 4720 generic.go:334] "Generic (PLEG): container finished" podID="c05230f9-fc44-4ffe-98bd-fcca7521d582" containerID="8fb9eb8429009ae01343d9c2d5d08d89637b37a20633dc280105b142db2b0faf" exitCode=0
Oct 13 17:28:28 crc kubenswrapper[4720]: I1013 17:28:28.356646 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b6t2r" event={"ID":"c05230f9-fc44-4ffe-98bd-fcca7521d582","Type":"ContainerDied","Data":"8fb9eb8429009ae01343d9c2d5d08d89637b37a20633dc280105b142db2b0faf"}
Oct 13 17:28:28 crc kubenswrapper[4720]: I1013 17:28:28.356672 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b6t2r" event={"ID":"c05230f9-fc44-4ffe-98bd-fcca7521d582","Type":"ContainerStarted","Data":"4563fa0470400ee9719b1668789d51bd60d501d12f5d8a7ee9bba16ea2c1eff5"}
Oct 13 17:28:28 crc kubenswrapper[4720]: I1013 17:28:28.361408 4720 generic.go:334] "Generic (PLEG): container finished" podID="54894eb4-2aeb-4c93-b8d7-0e22213452f5" containerID="05c9fcde2a816d7e347160bfc6d1be6918cf8ea4902a171b09a88162ba650d66" exitCode=0
Oct 13 17:28:28 crc kubenswrapper[4720]: I1013 17:28:28.361484 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mdzhd" event={"ID":"54894eb4-2aeb-4c93-b8d7-0e22213452f5","Type":"ContainerDied","Data":"05c9fcde2a816d7e347160bfc6d1be6918cf8ea4902a171b09a88162ba650d66"}
Oct 13 17:28:28 crc kubenswrapper[4720]: I1013 17:28:28.364977 4720 generic.go:334] "Generic (PLEG): container finished" podID="dab7d1c1-75af-4ffa-a15b-2cc516acfabf" containerID="629a57f3f6224230c5e46e7bb00e46c48e9d07612684e6aabb349bb7aae5bc50" exitCode=0
Oct 13 17:28:28 crc kubenswrapper[4720]: I1013 17:28:28.365013 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-69hwq" event={"ID":"dab7d1c1-75af-4ffa-a15b-2cc516acfabf","Type":"ContainerDied","Data":"629a57f3f6224230c5e46e7bb00e46c48e9d07612684e6aabb349bb7aae5bc50"}
Oct 13 17:28:29 crc kubenswrapper[4720]: I1013 17:28:29.372979 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b6t2r" event={"ID":"c05230f9-fc44-4ffe-98bd-fcca7521d582","Type":"ContainerStarted","Data":"ce8a264a081c49fffbc9ed7c4af381961639023ac029bc3aadc7183546fb82b2"}
Oct 13 17:28:29 crc kubenswrapper[4720]: I1013 17:28:29.375716 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mdzhd" event={"ID":"54894eb4-2aeb-4c93-b8d7-0e22213452f5","Type":"ContainerStarted","Data":"fb85e86ee8d9bbf95c0d0b8cfe2189681dc3de20433eab9a3e08b9f2ceab2cc0"}
Oct 13 17:28:29 crc kubenswrapper[4720]: I1013 17:28:29.379804 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-69hwq" event={"ID":"dab7d1c1-75af-4ffa-a15b-2cc516acfabf","Type":"ContainerStarted","Data":"acbd2ba0d866a4140a3e97eda188154c1a340d14f5b96d70cdd87d4b67e6e765"}
Oct 13 17:28:29 crc kubenswrapper[4720]: I1013 17:28:29.388530 4720 generic.go:334] "Generic (PLEG): container finished" podID="7af8d26b-d10a-4be8-8773-03204f461fe3" containerID="21fac2bd36a28fb16c72178be0e0fe4ba2adf402030d73804e1e9b6edb4676cd" exitCode=0
Oct 13 17:28:29 crc kubenswrapper[4720]: I1013 17:28:29.388600 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rj2jd" event={"ID":"7af8d26b-d10a-4be8-8773-03204f461fe3","Type":"ContainerDied","Data":"21fac2bd36a28fb16c72178be0e0fe4ba2adf402030d73804e1e9b6edb4676cd"}
Oct 13 17:28:29 crc kubenswrapper[4720]: I1013 17:28:29.452909 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-69hwq" podStartSLOduration=1.9977298430000001 podStartE2EDuration="5.452890946s" podCreationTimestamp="2025-10-13 17:28:24 +0000 UTC" firstStartedPulling="2025-10-13 17:28:25.323283496 +0000 UTC m=+250.780533628" lastFinishedPulling="2025-10-13 17:28:28.778444609 +0000 UTC m=+254.235694731" observedRunningTime="2025-10-13 17:28:29.450291239 +0000 UTC m=+254.907541381" watchObservedRunningTime="2025-10-13 17:28:29.452890946 +0000 UTC m=+254.910141078"
Oct 13 17:28:29 crc kubenswrapper[4720]: I1013 17:28:29.494728 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mdzhd" podStartSLOduration=2.024776585 podStartE2EDuration="5.494710866s" podCreationTimestamp="2025-10-13 17:28:24 +0000 UTC" firstStartedPulling="2025-10-13 17:28:25.322855775 +0000 UTC m=+250.780105907" lastFinishedPulling="2025-10-13 17:28:28.792790056 +0000 UTC m=+254.250040188" observedRunningTime="2025-10-13 17:28:29.491449542 +0000 UTC m=+254.948699694" watchObservedRunningTime="2025-10-13 17:28:29.494710866 +0000 UTC m=+254.951960998"
Oct 13 17:28:30 crc kubenswrapper[4720]: I1013 17:28:30.395968 4720 generic.go:334] "Generic (PLEG): container finished" podID="c05230f9-fc44-4ffe-98bd-fcca7521d582" containerID="ce8a264a081c49fffbc9ed7c4af381961639023ac029bc3aadc7183546fb82b2" exitCode=0
Oct 13 17:28:30 crc kubenswrapper[4720]: I1013 17:28:30.396085 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b6t2r" event={"ID":"c05230f9-fc44-4ffe-98bd-fcca7521d582","Type":"ContainerDied","Data":"ce8a264a081c49fffbc9ed7c4af381961639023ac029bc3aadc7183546fb82b2"}
Oct 13 17:28:31 crc kubenswrapper[4720]: I1013 17:28:31.404240 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b6t2r" event={"ID":"c05230f9-fc44-4ffe-98bd-fcca7521d582","Type":"ContainerStarted","Data":"90988b96cd031c9966c7c9c333bac005b1c4adaa3d07879f2752515f38a1288f"}
Oct 13 17:28:31 crc kubenswrapper[4720]: I1013 17:28:31.406192 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rj2jd" event={"ID":"7af8d26b-d10a-4be8-8773-03204f461fe3","Type":"ContainerStarted","Data":"e1be13d904ea9779d290930513a3ea3dcaf22afeb200a54fce72190968a05f1c"}
Oct 13 17:28:31 crc kubenswrapper[4720]: I1013 17:28:31.423590 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-b6t2r" podStartSLOduration=2.976672522 podStartE2EDuration="5.423573418s" podCreationTimestamp="2025-10-13 17:28:26 +0000 UTC" firstStartedPulling="2025-10-13 17:28:28.364093798 +0000 UTC m=+253.821343970" lastFinishedPulling="2025-10-13 17:28:30.810994734 +0000 UTC m=+256.268244866" observedRunningTime="2025-10-13 17:28:31.422196463 +0000 UTC m=+256.879446595" watchObservedRunningTime="2025-10-13 17:28:31.423573418 +0000 UTC m=+256.880823550"
Oct 13 17:28:31 crc kubenswrapper[4720]: I1013 17:28:31.463514 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rj2jd" podStartSLOduration=4.011227142 podStartE2EDuration="5.463493249s" podCreationTimestamp="2025-10-13 17:28:26 +0000 UTC" firstStartedPulling="2025-10-13 17:28:28.355455507 +0000 UTC m=+253.812705639" lastFinishedPulling="2025-10-13 17:28:29.807721604 +0000 UTC m=+255.264971746" observedRunningTime="2025-10-13 17:28:31.463194532 +0000 UTC m=+256.920444664" watchObservedRunningTime="2025-10-13 17:28:31.463493249 +0000 UTC m=+256.920743391"
Oct 13 17:28:34 crc kubenswrapper[4720]: I1013 17:28:34.637020 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-69hwq"
Oct 13 17:28:34 crc kubenswrapper[4720]: I1013 17:28:34.638258 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-69hwq"
Oct 13 17:28:34 crc kubenswrapper[4720]: I1013 17:28:34.687088 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-69hwq"
Oct 13 17:28:34 crc kubenswrapper[4720]: I1013 17:28:34.845297 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mdzhd"
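The pod_startup_latency_tracker.go:104 entries above report both wall-clock timestamps and monotonic offsets ("m=+<seconds>"). Subtracting the monotonic offsets of firstStartedPulling and lastFinishedPulling gives the image-pull window without parsing the nine-digit fractional wall-clock times; for redhat-marketplace-69hwq that is 254.235694731 - 250.780533628 = 3.455161103 s, exactly the gap between its podStartE2EDuration (5.452890946s) and its podStartSLOduration. A sketch, with the no-pull case handled:

import re

# Both pull timestamps carry a monotonic offset, e.g. "... m=+250.780533628".
PULL = re.compile(r'(?P<key>firstStartedPulling|lastFinishedPulling)='
                  r'"[^"]*m=\+(?P<mono>[\d.]+)"')

def pull_seconds(line):
    # Image-pull window of one pod_startup_latency_tracker entry, or None
    # when no pull happened (zero-value timestamps carry no m=+ offset).
    t = {m.group("key"): float(m.group("mono")) for m in PULL.finditer(line)}
    if len(t) < 2:
        return None
    return t["lastFinishedPulling"] - t["firstStartedPulling"]

The image-registry entry later in this log, with firstStartedPulling="0001-01-01 00:00:00 +0000 UTC", is the no-pull case and yields None.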
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mdzhd" Oct 13 17:28:34 crc kubenswrapper[4720]: I1013 17:28:34.845377 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mdzhd" Oct 13 17:28:34 crc kubenswrapper[4720]: I1013 17:28:34.884333 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mdzhd" Oct 13 17:28:35 crc kubenswrapper[4720]: I1013 17:28:35.473311 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-69hwq" Oct 13 17:28:35 crc kubenswrapper[4720]: I1013 17:28:35.486800 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mdzhd" Oct 13 17:28:37 crc kubenswrapper[4720]: I1013 17:28:37.045442 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-b6t2r" Oct 13 17:28:37 crc kubenswrapper[4720]: I1013 17:28:37.045739 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-b6t2r" Oct 13 17:28:37 crc kubenswrapper[4720]: I1013 17:28:37.088926 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-b6t2r" Oct 13 17:28:37 crc kubenswrapper[4720]: I1013 17:28:37.273705 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rj2jd" Oct 13 17:28:37 crc kubenswrapper[4720]: I1013 17:28:37.273754 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rj2jd" Oct 13 17:28:37 crc kubenswrapper[4720]: I1013 17:28:37.331520 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rj2jd" Oct 13 17:28:37 crc kubenswrapper[4720]: I1013 17:28:37.476830 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-b6t2r" Oct 13 17:28:37 crc kubenswrapper[4720]: I1013 17:28:37.504610 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rj2jd" Oct 13 17:29:45 crc kubenswrapper[4720]: I1013 17:29:45.212835 4720 patch_prober.go:28] interesting pod/machine-config-daemon-htwnl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 17:29:45 crc kubenswrapper[4720]: I1013 17:29:45.213418 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 17:30:00 crc kubenswrapper[4720]: I1013 17:30:00.146776 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29339610-42824"] Oct 13 17:30:00 crc kubenswrapper[4720]: I1013 17:30:00.149426 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29339610-42824" Oct 13 17:30:00 crc kubenswrapper[4720]: I1013 17:30:00.152782 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 13 17:30:00 crc kubenswrapper[4720]: I1013 17:30:00.153060 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 13 17:30:00 crc kubenswrapper[4720]: I1013 17:30:00.159272 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29339610-42824"] Oct 13 17:30:00 crc kubenswrapper[4720]: I1013 17:30:00.273090 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cmjd\" (UniqueName: \"kubernetes.io/projected/e0c11d5a-58cf-4f1e-ada6-0d154e322e44-kube-api-access-5cmjd\") pod \"collect-profiles-29339610-42824\" (UID: \"e0c11d5a-58cf-4f1e-ada6-0d154e322e44\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339610-42824" Oct 13 17:30:00 crc kubenswrapper[4720]: I1013 17:30:00.273157 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e0c11d5a-58cf-4f1e-ada6-0d154e322e44-secret-volume\") pod \"collect-profiles-29339610-42824\" (UID: \"e0c11d5a-58cf-4f1e-ada6-0d154e322e44\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339610-42824" Oct 13 17:30:00 crc kubenswrapper[4720]: I1013 17:30:00.273238 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e0c11d5a-58cf-4f1e-ada6-0d154e322e44-config-volume\") pod \"collect-profiles-29339610-42824\" (UID: \"e0c11d5a-58cf-4f1e-ada6-0d154e322e44\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339610-42824" Oct 13 17:30:00 crc kubenswrapper[4720]: I1013 17:30:00.374586 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cmjd\" (UniqueName: \"kubernetes.io/projected/e0c11d5a-58cf-4f1e-ada6-0d154e322e44-kube-api-access-5cmjd\") pod \"collect-profiles-29339610-42824\" (UID: \"e0c11d5a-58cf-4f1e-ada6-0d154e322e44\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339610-42824" Oct 13 17:30:00 crc kubenswrapper[4720]: I1013 17:30:00.374668 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e0c11d5a-58cf-4f1e-ada6-0d154e322e44-secret-volume\") pod \"collect-profiles-29339610-42824\" (UID: \"e0c11d5a-58cf-4f1e-ada6-0d154e322e44\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339610-42824" Oct 13 17:30:00 crc kubenswrapper[4720]: I1013 17:30:00.374726 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e0c11d5a-58cf-4f1e-ada6-0d154e322e44-config-volume\") pod \"collect-profiles-29339610-42824\" (UID: \"e0c11d5a-58cf-4f1e-ada6-0d154e322e44\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339610-42824" Oct 13 17:30:00 crc kubenswrapper[4720]: I1013 17:30:00.376184 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e0c11d5a-58cf-4f1e-ada6-0d154e322e44-config-volume\") pod 
\"collect-profiles-29339610-42824\" (UID: \"e0c11d5a-58cf-4f1e-ada6-0d154e322e44\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339610-42824" Oct 13 17:30:00 crc kubenswrapper[4720]: I1013 17:30:00.384693 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e0c11d5a-58cf-4f1e-ada6-0d154e322e44-secret-volume\") pod \"collect-profiles-29339610-42824\" (UID: \"e0c11d5a-58cf-4f1e-ada6-0d154e322e44\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339610-42824" Oct 13 17:30:00 crc kubenswrapper[4720]: I1013 17:30:00.392820 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cmjd\" (UniqueName: \"kubernetes.io/projected/e0c11d5a-58cf-4f1e-ada6-0d154e322e44-kube-api-access-5cmjd\") pod \"collect-profiles-29339610-42824\" (UID: \"e0c11d5a-58cf-4f1e-ada6-0d154e322e44\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339610-42824" Oct 13 17:30:00 crc kubenswrapper[4720]: I1013 17:30:00.480022 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29339610-42824" Oct 13 17:30:00 crc kubenswrapper[4720]: I1013 17:30:00.676105 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29339610-42824"] Oct 13 17:30:01 crc kubenswrapper[4720]: I1013 17:30:01.022283 4720 generic.go:334] "Generic (PLEG): container finished" podID="e0c11d5a-58cf-4f1e-ada6-0d154e322e44" containerID="4a874c56c5fb6521e14e4ce7a7db8ea0b2de796fac747532e158fb5d0afc63dd" exitCode=0 Oct 13 17:30:01 crc kubenswrapper[4720]: I1013 17:30:01.022399 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29339610-42824" event={"ID":"e0c11d5a-58cf-4f1e-ada6-0d154e322e44","Type":"ContainerDied","Data":"4a874c56c5fb6521e14e4ce7a7db8ea0b2de796fac747532e158fb5d0afc63dd"} Oct 13 17:30:01 crc kubenswrapper[4720]: I1013 17:30:01.022732 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29339610-42824" event={"ID":"e0c11d5a-58cf-4f1e-ada6-0d154e322e44","Type":"ContainerStarted","Data":"4c826f74761a7c6ee0850b5d8d01212ac26a17c326dd81d2ea8c751350c744aa"} Oct 13 17:30:02 crc kubenswrapper[4720]: I1013 17:30:02.283544 4720 util.go:48] "No ready sandbox for pod can be found. 
Oct 13 17:30:02 crc kubenswrapper[4720]: I1013 17:30:02.407760 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5cmjd\" (UniqueName: \"kubernetes.io/projected/e0c11d5a-58cf-4f1e-ada6-0d154e322e44-kube-api-access-5cmjd\") pod \"e0c11d5a-58cf-4f1e-ada6-0d154e322e44\" (UID: \"e0c11d5a-58cf-4f1e-ada6-0d154e322e44\") "
Oct 13 17:30:02 crc kubenswrapper[4720]: I1013 17:30:02.407878 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e0c11d5a-58cf-4f1e-ada6-0d154e322e44-secret-volume\") pod \"e0c11d5a-58cf-4f1e-ada6-0d154e322e44\" (UID: \"e0c11d5a-58cf-4f1e-ada6-0d154e322e44\") "
Oct 13 17:30:02 crc kubenswrapper[4720]: I1013 17:30:02.407908 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e0c11d5a-58cf-4f1e-ada6-0d154e322e44-config-volume\") pod \"e0c11d5a-58cf-4f1e-ada6-0d154e322e44\" (UID: \"e0c11d5a-58cf-4f1e-ada6-0d154e322e44\") "
Oct 13 17:30:02 crc kubenswrapper[4720]: I1013 17:30:02.408720 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0c11d5a-58cf-4f1e-ada6-0d154e322e44-config-volume" (OuterVolumeSpecName: "config-volume") pod "e0c11d5a-58cf-4f1e-ada6-0d154e322e44" (UID: "e0c11d5a-58cf-4f1e-ada6-0d154e322e44"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 13 17:30:02 crc kubenswrapper[4720]: I1013 17:30:02.413520 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0c11d5a-58cf-4f1e-ada6-0d154e322e44-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e0c11d5a-58cf-4f1e-ada6-0d154e322e44" (UID: "e0c11d5a-58cf-4f1e-ada6-0d154e322e44"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 17:30:02 crc kubenswrapper[4720]: I1013 17:30:02.414175 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0c11d5a-58cf-4f1e-ada6-0d154e322e44-kube-api-access-5cmjd" (OuterVolumeSpecName: "kube-api-access-5cmjd") pod "e0c11d5a-58cf-4f1e-ada6-0d154e322e44" (UID: "e0c11d5a-58cf-4f1e-ada6-0d154e322e44"). InnerVolumeSpecName "kube-api-access-5cmjd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 13 17:30:02 crc kubenswrapper[4720]: I1013 17:30:02.509470 4720 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e0c11d5a-58cf-4f1e-ada6-0d154e322e44-config-volume\") on node \"crc\" DevicePath \"\""
Oct 13 17:30:02 crc kubenswrapper[4720]: I1013 17:30:02.509782 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5cmjd\" (UniqueName: \"kubernetes.io/projected/e0c11d5a-58cf-4f1e-ada6-0d154e322e44-kube-api-access-5cmjd\") on node \"crc\" DevicePath \"\""
Oct 13 17:30:02 crc kubenswrapper[4720]: I1013 17:30:02.509806 4720 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e0c11d5a-58cf-4f1e-ada6-0d154e322e44-secret-volume\") on node \"crc\" DevicePath \"\""
Oct 13 17:30:03 crc kubenswrapper[4720]: I1013 17:30:03.037722 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29339610-42824" event={"ID":"e0c11d5a-58cf-4f1e-ada6-0d154e322e44","Type":"ContainerDied","Data":"4c826f74761a7c6ee0850b5d8d01212ac26a17c326dd81d2ea8c751350c744aa"}
Oct 13 17:30:03 crc kubenswrapper[4720]: I1013 17:30:03.037784 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c826f74761a7c6ee0850b5d8d01212ac26a17c326dd81d2ea8c751350c744aa"
Oct 13 17:30:03 crc kubenswrapper[4720]: I1013 17:30:03.037781 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29339610-42824"
Oct 13 17:30:15 crc kubenswrapper[4720]: I1013 17:30:15.212855 4720 patch_prober.go:28] interesting pod/machine-config-daemon-htwnl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 13 17:30:15 crc kubenswrapper[4720]: I1013 17:30:15.213266 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 13 17:30:45 crc kubenswrapper[4720]: I1013 17:30:45.213063 4720 patch_prober.go:28] interesting pod/machine-config-daemon-htwnl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 13 17:30:45 crc kubenswrapper[4720]: I1013 17:30:45.213755 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 13 17:30:45 crc kubenswrapper[4720]: I1013 17:30:45.213827 4720 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-htwnl"
Oct 13 17:30:45 crc kubenswrapper[4720]: I1013 17:30:45.214663 4720 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3fec166dfba0a192adca998429f2650ea80001802f6d04f7ec8b13f450a085ff"} pod="openshift-machine-config-operator/machine-config-daemon-htwnl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Oct 13 17:30:45 crc kubenswrapper[4720]: I1013 17:30:45.214763 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" containerName="machine-config-daemon" containerID="cri-o://3fec166dfba0a192adca998429f2650ea80001802f6d04f7ec8b13f450a085ff" gracePeriod=600
Oct 13 17:30:46 crc kubenswrapper[4720]: I1013 17:30:46.322368 4720 generic.go:334] "Generic (PLEG): container finished" podID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" containerID="3fec166dfba0a192adca998429f2650ea80001802f6d04f7ec8b13f450a085ff" exitCode=0
Oct 13 17:30:46 crc kubenswrapper[4720]: I1013 17:30:46.322473 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" event={"ID":"ce442c80-fcde-4b79-b6f9-f8f25771dfd4","Type":"ContainerDied","Data":"3fec166dfba0a192adca998429f2650ea80001802f6d04f7ec8b13f450a085ff"}
Oct 13 17:30:46 crc kubenswrapper[4720]: I1013 17:30:46.322815 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" event={"ID":"ce442c80-fcde-4b79-b6f9-f8f25771dfd4","Type":"ContainerStarted","Data":"0d9e0d78254b1372630a7f56a5e019b7d8881a622a4a4292a4394c0f2b9be45a"}
Oct 13 17:30:46 crc kubenswrapper[4720]: I1013 17:30:46.322847 4720 scope.go:117] "RemoveContainer" containerID="864e330267c8cec8487e1dcf4a50bbb2ba37d81d80361f0873e6bf69de1ed721"
Oct 13 17:31:06 crc kubenswrapper[4720]: I1013 17:31:06.975367 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-v7jj4"]
Oct 13 17:31:06 crc kubenswrapper[4720]: E1013 17:31:06.976179 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0c11d5a-58cf-4f1e-ada6-0d154e322e44" containerName="collect-profiles"
Oct 13 17:31:06 crc kubenswrapper[4720]: I1013 17:31:06.976259 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0c11d5a-58cf-4f1e-ada6-0d154e322e44" containerName="collect-profiles"
Oct 13 17:31:06 crc kubenswrapper[4720]: I1013 17:31:06.976390 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0c11d5a-58cf-4f1e-ada6-0d154e322e44" containerName="collect-profiles"
Oct 13 17:31:06 crc kubenswrapper[4720]: I1013 17:31:06.976957 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-v7jj4"
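The machine-config-daemon sequence above shows the liveness machinery end to end: "Probe failed" entries 30 s apart (17:29:45, 17:30:15, 17:30:45), then "SyncLoop (probe)" liveness/unhealthy, the kuberuntime "will be restarted" message, the kill of container 3fec166d... with gracePeriod=600, and a ContainerStarted for its replacement 0d9e0d78.... Three consecutive failures before the restart is consistent with the Kubernetes default failureThreshold of 3, though the configured threshold itself is not printed here. A sketch for counting such failures per pod:

import re
from collections import Counter

FAILED = re.compile(r'"Probe failed" probeType="Liveness" pod="(?P<pod>[^"]+)"')

def failing_liveness(lines, threshold=3):
    # Pods whose liveness probe failed at least `threshold` times
    # (3 matches the assumed default failureThreshold).
    counts = Counter(m.group("pod")
                     for line in lines
                     for m in FAILED.finditer(line))
    return {pod: n for pod, n in counts.items() if n >= threshold}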
Oct 13 17:31:06 crc kubenswrapper[4720]: I1013 17:31:06.997967 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-v7jj4"]
Oct 13 17:31:07 crc kubenswrapper[4720]: I1013 17:31:07.086994 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/3a42c3a4-601a-4529-a7d4-1c4f861696d1-ca-trust-extracted\") pod \"image-registry-66df7c8f76-v7jj4\" (UID: \"3a42c3a4-601a-4529-a7d4-1c4f861696d1\") " pod="openshift-image-registry/image-registry-66df7c8f76-v7jj4"
Oct 13 17:31:07 crc kubenswrapper[4720]: I1013 17:31:07.087166 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/3a42c3a4-601a-4529-a7d4-1c4f861696d1-registry-tls\") pod \"image-registry-66df7c8f76-v7jj4\" (UID: \"3a42c3a4-601a-4529-a7d4-1c4f861696d1\") " pod="openshift-image-registry/image-registry-66df7c8f76-v7jj4"
Oct 13 17:31:07 crc kubenswrapper[4720]: I1013 17:31:07.087268 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3a42c3a4-601a-4529-a7d4-1c4f861696d1-trusted-ca\") pod \"image-registry-66df7c8f76-v7jj4\" (UID: \"3a42c3a4-601a-4529-a7d4-1c4f861696d1\") " pod="openshift-image-registry/image-registry-66df7c8f76-v7jj4"
Oct 13 17:31:07 crc kubenswrapper[4720]: I1013 17:31:07.087406 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3a42c3a4-601a-4529-a7d4-1c4f861696d1-bound-sa-token\") pod \"image-registry-66df7c8f76-v7jj4\" (UID: \"3a42c3a4-601a-4529-a7d4-1c4f861696d1\") " pod="openshift-image-registry/image-registry-66df7c8f76-v7jj4"
Oct 13 17:31:07 crc kubenswrapper[4720]: I1013 17:31:07.087495 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfh8m\" (UniqueName: \"kubernetes.io/projected/3a42c3a4-601a-4529-a7d4-1c4f861696d1-kube-api-access-rfh8m\") pod \"image-registry-66df7c8f76-v7jj4\" (UID: \"3a42c3a4-601a-4529-a7d4-1c4f861696d1\") " pod="openshift-image-registry/image-registry-66df7c8f76-v7jj4"
Oct 13 17:31:07 crc kubenswrapper[4720]: I1013 17:31:07.087552 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/3a42c3a4-601a-4529-a7d4-1c4f861696d1-installation-pull-secrets\") pod \"image-registry-66df7c8f76-v7jj4\" (UID: \"3a42c3a4-601a-4529-a7d4-1c4f861696d1\") " pod="openshift-image-registry/image-registry-66df7c8f76-v7jj4"
Oct 13 17:31:07 crc kubenswrapper[4720]: I1013 17:31:07.087614 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-v7jj4\" (UID: \"3a42c3a4-601a-4529-a7d4-1c4f861696d1\") " pod="openshift-image-registry/image-registry-66df7c8f76-v7jj4"
Oct 13 17:31:07 crc kubenswrapper[4720]: I1013 17:31:07.087656 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/3a42c3a4-601a-4529-a7d4-1c4f861696d1-registry-certificates\") pod \"image-registry-66df7c8f76-v7jj4\" (UID: \"3a42c3a4-601a-4529-a7d4-1c4f861696d1\") " pod="openshift-image-registry/image-registry-66df7c8f76-v7jj4"
Oct 13 17:31:07 crc kubenswrapper[4720]: I1013 17:31:07.114514 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-v7jj4\" (UID: \"3a42c3a4-601a-4529-a7d4-1c4f861696d1\") " pod="openshift-image-registry/image-registry-66df7c8f76-v7jj4"
Oct 13 17:31:07 crc kubenswrapper[4720]: I1013 17:31:07.188831 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/3a42c3a4-601a-4529-a7d4-1c4f861696d1-registry-tls\") pod \"image-registry-66df7c8f76-v7jj4\" (UID: \"3a42c3a4-601a-4529-a7d4-1c4f861696d1\") " pod="openshift-image-registry/image-registry-66df7c8f76-v7jj4"
Oct 13 17:31:07 crc kubenswrapper[4720]: I1013 17:31:07.188928 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3a42c3a4-601a-4529-a7d4-1c4f861696d1-trusted-ca\") pod \"image-registry-66df7c8f76-v7jj4\" (UID: \"3a42c3a4-601a-4529-a7d4-1c4f861696d1\") " pod="openshift-image-registry/image-registry-66df7c8f76-v7jj4"
Oct 13 17:31:07 crc kubenswrapper[4720]: I1013 17:31:07.188966 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3a42c3a4-601a-4529-a7d4-1c4f861696d1-bound-sa-token\") pod \"image-registry-66df7c8f76-v7jj4\" (UID: \"3a42c3a4-601a-4529-a7d4-1c4f861696d1\") " pod="openshift-image-registry/image-registry-66df7c8f76-v7jj4"
Oct 13 17:31:07 crc kubenswrapper[4720]: I1013 17:31:07.189007 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfh8m\" (UniqueName: \"kubernetes.io/projected/3a42c3a4-601a-4529-a7d4-1c4f861696d1-kube-api-access-rfh8m\") pod \"image-registry-66df7c8f76-v7jj4\" (UID: \"3a42c3a4-601a-4529-a7d4-1c4f861696d1\") " pod="openshift-image-registry/image-registry-66df7c8f76-v7jj4"
Oct 13 17:31:07 crc kubenswrapper[4720]: I1013 17:31:07.189050 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/3a42c3a4-601a-4529-a7d4-1c4f861696d1-installation-pull-secrets\") pod \"image-registry-66df7c8f76-v7jj4\" (UID: \"3a42c3a4-601a-4529-a7d4-1c4f861696d1\") " pod="openshift-image-registry/image-registry-66df7c8f76-v7jj4"
Oct 13 17:31:07 crc kubenswrapper[4720]: I1013 17:31:07.189086 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/3a42c3a4-601a-4529-a7d4-1c4f861696d1-registry-certificates\") pod \"image-registry-66df7c8f76-v7jj4\" (UID: \"3a42c3a4-601a-4529-a7d4-1c4f861696d1\") " pod="openshift-image-registry/image-registry-66df7c8f76-v7jj4"
Oct 13 17:31:07 crc kubenswrapper[4720]: I1013 17:31:07.189133 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/3a42c3a4-601a-4529-a7d4-1c4f861696d1-ca-trust-extracted\") pod \"image-registry-66df7c8f76-v7jj4\" (UID: \"3a42c3a4-601a-4529-a7d4-1c4f861696d1\") " pod="openshift-image-registry/image-registry-66df7c8f76-v7jj4"
Oct 13 17:31:07 crc kubenswrapper[4720]: I1013 17:31:07.189833 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/3a42c3a4-601a-4529-a7d4-1c4f861696d1-ca-trust-extracted\") pod \"image-registry-66df7c8f76-v7jj4\" (UID: \"3a42c3a4-601a-4529-a7d4-1c4f861696d1\") " pod="openshift-image-registry/image-registry-66df7c8f76-v7jj4"
Oct 13 17:31:07 crc kubenswrapper[4720]: I1013 17:31:07.191220 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3a42c3a4-601a-4529-a7d4-1c4f861696d1-trusted-ca\") pod \"image-registry-66df7c8f76-v7jj4\" (UID: \"3a42c3a4-601a-4529-a7d4-1c4f861696d1\") " pod="openshift-image-registry/image-registry-66df7c8f76-v7jj4"
Oct 13 17:31:07 crc kubenswrapper[4720]: I1013 17:31:07.192568 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/3a42c3a4-601a-4529-a7d4-1c4f861696d1-registry-certificates\") pod \"image-registry-66df7c8f76-v7jj4\" (UID: \"3a42c3a4-601a-4529-a7d4-1c4f861696d1\") " pod="openshift-image-registry/image-registry-66df7c8f76-v7jj4"
Oct 13 17:31:07 crc kubenswrapper[4720]: I1013 17:31:07.197180 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/3a42c3a4-601a-4529-a7d4-1c4f861696d1-installation-pull-secrets\") pod \"image-registry-66df7c8f76-v7jj4\" (UID: \"3a42c3a4-601a-4529-a7d4-1c4f861696d1\") " pod="openshift-image-registry/image-registry-66df7c8f76-v7jj4"
Oct 13 17:31:07 crc kubenswrapper[4720]: I1013 17:31:07.198240 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/3a42c3a4-601a-4529-a7d4-1c4f861696d1-registry-tls\") pod \"image-registry-66df7c8f76-v7jj4\" (UID: \"3a42c3a4-601a-4529-a7d4-1c4f861696d1\") " pod="openshift-image-registry/image-registry-66df7c8f76-v7jj4"
Oct 13 17:31:07 crc kubenswrapper[4720]: I1013 17:31:07.210718 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3a42c3a4-601a-4529-a7d4-1c4f861696d1-bound-sa-token\") pod \"image-registry-66df7c8f76-v7jj4\" (UID: \"3a42c3a4-601a-4529-a7d4-1c4f861696d1\") " pod="openshift-image-registry/image-registry-66df7c8f76-v7jj4"
Oct 13 17:31:07 crc kubenswrapper[4720]: I1013 17:31:07.218310 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfh8m\" (UniqueName: \"kubernetes.io/projected/3a42c3a4-601a-4529-a7d4-1c4f861696d1-kube-api-access-rfh8m\") pod \"image-registry-66df7c8f76-v7jj4\" (UID: \"3a42c3a4-601a-4529-a7d4-1c4f861696d1\") " pod="openshift-image-registry/image-registry-66df7c8f76-v7jj4"
Oct 13 17:31:07 crc kubenswrapper[4720]: I1013 17:31:07.298718 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-v7jj4"
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-v7jj4" Oct 13 17:31:07 crc kubenswrapper[4720]: I1013 17:31:07.558215 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-v7jj4"] Oct 13 17:31:07 crc kubenswrapper[4720]: W1013 17:31:07.561747 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a42c3a4_601a_4529_a7d4_1c4f861696d1.slice/crio-3465092340dad2970cc15f7a6a10ab1572d5093541b640c5c2f0e6f3249f9624 WatchSource:0}: Error finding container 3465092340dad2970cc15f7a6a10ab1572d5093541b640c5c2f0e6f3249f9624: Status 404 returned error can't find the container with id 3465092340dad2970cc15f7a6a10ab1572d5093541b640c5c2f0e6f3249f9624 Oct 13 17:31:08 crc kubenswrapper[4720]: I1013 17:31:08.486373 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-v7jj4" event={"ID":"3a42c3a4-601a-4529-a7d4-1c4f861696d1","Type":"ContainerStarted","Data":"ad3b3c1208b8e0b11efb4307fc88c233146a6fff03639920c0dd066b4dad4b99"} Oct 13 17:31:08 crc kubenswrapper[4720]: I1013 17:31:08.486668 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-v7jj4" event={"ID":"3a42c3a4-601a-4529-a7d4-1c4f861696d1","Type":"ContainerStarted","Data":"3465092340dad2970cc15f7a6a10ab1572d5093541b640c5c2f0e6f3249f9624"} Oct 13 17:31:08 crc kubenswrapper[4720]: I1013 17:31:08.486754 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-v7jj4" Oct 13 17:31:08 crc kubenswrapper[4720]: I1013 17:31:08.520934 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-v7jj4" podStartSLOduration=2.520908606 podStartE2EDuration="2.520908606s" podCreationTimestamp="2025-10-13 17:31:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:31:08.517743715 +0000 UTC m=+413.974993897" watchObservedRunningTime="2025-10-13 17:31:08.520908606 +0000 UTC m=+413.978158768" Oct 13 17:31:27 crc kubenswrapper[4720]: I1013 17:31:27.310005 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-v7jj4" Oct 13 17:31:27 crc kubenswrapper[4720]: I1013 17:31:27.387788 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-w5xfl"] Oct 13 17:31:52 crc kubenswrapper[4720]: I1013 17:31:52.436021 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-w5xfl" podUID="2c0351ce-4eed-4d0e-b9a4-eb2746bf396a" containerName="registry" containerID="cri-o://4f98436f4e49cbf14734d9595bc80f9b873a4c59276150e787c4ddaebeeb3b41" gracePeriod=30 Oct 13 17:31:52 crc kubenswrapper[4720]: I1013 17:31:52.804681 4720 generic.go:334] "Generic (PLEG): container finished" podID="2c0351ce-4eed-4d0e-b9a4-eb2746bf396a" containerID="4f98436f4e49cbf14734d9595bc80f9b873a4c59276150e787c4ddaebeeb3b41" exitCode=0 Oct 13 17:31:52 crc kubenswrapper[4720]: I1013 17:31:52.804736 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-w5xfl" 
event={"ID":"2c0351ce-4eed-4d0e-b9a4-eb2746bf396a","Type":"ContainerDied","Data":"4f98436f4e49cbf14734d9595bc80f9b873a4c59276150e787c4ddaebeeb3b41"} Oct 13 17:31:52 crc kubenswrapper[4720]: I1013 17:31:52.804769 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-w5xfl" event={"ID":"2c0351ce-4eed-4d0e-b9a4-eb2746bf396a","Type":"ContainerDied","Data":"ea16400664aefae5b0f63ea77546e2bdd2ac92f3fafee8b8691ffcbdee777424"} Oct 13 17:31:52 crc kubenswrapper[4720]: I1013 17:31:52.804783 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea16400664aefae5b0f63ea77546e2bdd2ac92f3fafee8b8691ffcbdee777424" Oct 13 17:31:52 crc kubenswrapper[4720]: I1013 17:31:52.826114 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-w5xfl" Oct 13 17:31:52 crc kubenswrapper[4720]: I1013 17:31:52.889821 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/2c0351ce-4eed-4d0e-b9a4-eb2746bf396a-ca-trust-extracted\") pod \"2c0351ce-4eed-4d0e-b9a4-eb2746bf396a\" (UID: \"2c0351ce-4eed-4d0e-b9a4-eb2746bf396a\") " Oct 13 17:31:52 crc kubenswrapper[4720]: I1013 17:31:52.889864 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nnxnn\" (UniqueName: \"kubernetes.io/projected/2c0351ce-4eed-4d0e-b9a4-eb2746bf396a-kube-api-access-nnxnn\") pod \"2c0351ce-4eed-4d0e-b9a4-eb2746bf396a\" (UID: \"2c0351ce-4eed-4d0e-b9a4-eb2746bf396a\") " Oct 13 17:31:52 crc kubenswrapper[4720]: I1013 17:31:52.889891 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/2c0351ce-4eed-4d0e-b9a4-eb2746bf396a-registry-certificates\") pod \"2c0351ce-4eed-4d0e-b9a4-eb2746bf396a\" (UID: \"2c0351ce-4eed-4d0e-b9a4-eb2746bf396a\") " Oct 13 17:31:52 crc kubenswrapper[4720]: I1013 17:31:52.889918 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2c0351ce-4eed-4d0e-b9a4-eb2746bf396a-trusted-ca\") pod \"2c0351ce-4eed-4d0e-b9a4-eb2746bf396a\" (UID: \"2c0351ce-4eed-4d0e-b9a4-eb2746bf396a\") " Oct 13 17:31:52 crc kubenswrapper[4720]: I1013 17:31:52.890093 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"2c0351ce-4eed-4d0e-b9a4-eb2746bf396a\" (UID: \"2c0351ce-4eed-4d0e-b9a4-eb2746bf396a\") " Oct 13 17:31:52 crc kubenswrapper[4720]: I1013 17:31:52.890114 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/2c0351ce-4eed-4d0e-b9a4-eb2746bf396a-registry-tls\") pod \"2c0351ce-4eed-4d0e-b9a4-eb2746bf396a\" (UID: \"2c0351ce-4eed-4d0e-b9a4-eb2746bf396a\") " Oct 13 17:31:52 crc kubenswrapper[4720]: I1013 17:31:52.890149 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2c0351ce-4eed-4d0e-b9a4-eb2746bf396a-bound-sa-token\") pod \"2c0351ce-4eed-4d0e-b9a4-eb2746bf396a\" (UID: \"2c0351ce-4eed-4d0e-b9a4-eb2746bf396a\") " Oct 13 17:31:52 crc kubenswrapper[4720]: I1013 17:31:52.890225 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/2c0351ce-4eed-4d0e-b9a4-eb2746bf396a-installation-pull-secrets\") pod \"2c0351ce-4eed-4d0e-b9a4-eb2746bf396a\" (UID: \"2c0351ce-4eed-4d0e-b9a4-eb2746bf396a\") " Oct 13 17:31:52 crc kubenswrapper[4720]: I1013 17:31:52.891376 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c0351ce-4eed-4d0e-b9a4-eb2746bf396a-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "2c0351ce-4eed-4d0e-b9a4-eb2746bf396a" (UID: "2c0351ce-4eed-4d0e-b9a4-eb2746bf396a"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:31:52 crc kubenswrapper[4720]: I1013 17:31:52.891752 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c0351ce-4eed-4d0e-b9a4-eb2746bf396a-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "2c0351ce-4eed-4d0e-b9a4-eb2746bf396a" (UID: "2c0351ce-4eed-4d0e-b9a4-eb2746bf396a"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:31:52 crc kubenswrapper[4720]: I1013 17:31:52.900564 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c0351ce-4eed-4d0e-b9a4-eb2746bf396a-kube-api-access-nnxnn" (OuterVolumeSpecName: "kube-api-access-nnxnn") pod "2c0351ce-4eed-4d0e-b9a4-eb2746bf396a" (UID: "2c0351ce-4eed-4d0e-b9a4-eb2746bf396a"). InnerVolumeSpecName "kube-api-access-nnxnn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:31:52 crc kubenswrapper[4720]: I1013 17:31:52.901775 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c0351ce-4eed-4d0e-b9a4-eb2746bf396a-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "2c0351ce-4eed-4d0e-b9a4-eb2746bf396a" (UID: "2c0351ce-4eed-4d0e-b9a4-eb2746bf396a"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:31:52 crc kubenswrapper[4720]: I1013 17:31:52.903432 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c0351ce-4eed-4d0e-b9a4-eb2746bf396a-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "2c0351ce-4eed-4d0e-b9a4-eb2746bf396a" (UID: "2c0351ce-4eed-4d0e-b9a4-eb2746bf396a"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:31:52 crc kubenswrapper[4720]: I1013 17:31:52.907256 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c0351ce-4eed-4d0e-b9a4-eb2746bf396a-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "2c0351ce-4eed-4d0e-b9a4-eb2746bf396a" (UID: "2c0351ce-4eed-4d0e-b9a4-eb2746bf396a"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:31:52 crc kubenswrapper[4720]: I1013 17:31:52.911385 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c0351ce-4eed-4d0e-b9a4-eb2746bf396a-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "2c0351ce-4eed-4d0e-b9a4-eb2746bf396a" (UID: "2c0351ce-4eed-4d0e-b9a4-eb2746bf396a"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 17:31:52 crc kubenswrapper[4720]: I1013 17:31:52.914538 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "2c0351ce-4eed-4d0e-b9a4-eb2746bf396a" (UID: "2c0351ce-4eed-4d0e-b9a4-eb2746bf396a"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Oct 13 17:31:52 crc kubenswrapper[4720]: I1013 17:31:52.991601 4720 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/2c0351ce-4eed-4d0e-b9a4-eb2746bf396a-registry-tls\") on node \"crc\" DevicePath \"\"" Oct 13 17:31:52 crc kubenswrapper[4720]: I1013 17:31:52.991656 4720 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2c0351ce-4eed-4d0e-b9a4-eb2746bf396a-bound-sa-token\") on node \"crc\" DevicePath \"\"" Oct 13 17:31:52 crc kubenswrapper[4720]: I1013 17:31:52.991676 4720 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/2c0351ce-4eed-4d0e-b9a4-eb2746bf396a-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Oct 13 17:31:52 crc kubenswrapper[4720]: I1013 17:31:52.991697 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nnxnn\" (UniqueName: \"kubernetes.io/projected/2c0351ce-4eed-4d0e-b9a4-eb2746bf396a-kube-api-access-nnxnn\") on node \"crc\" DevicePath \"\"" Oct 13 17:31:52 crc kubenswrapper[4720]: I1013 17:31:52.991717 4720 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/2c0351ce-4eed-4d0e-b9a4-eb2746bf396a-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Oct 13 17:31:52 crc kubenswrapper[4720]: I1013 17:31:52.991734 4720 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/2c0351ce-4eed-4d0e-b9a4-eb2746bf396a-registry-certificates\") on node \"crc\" DevicePath \"\"" Oct 13 17:31:52 crc kubenswrapper[4720]: I1013 17:31:52.991751 4720 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2c0351ce-4eed-4d0e-b9a4-eb2746bf396a-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 13 17:31:53 crc kubenswrapper[4720]: I1013 17:31:53.810401 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-w5xfl" Oct 13 17:31:53 crc kubenswrapper[4720]: I1013 17:31:53.830631 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-w5xfl"] Oct 13 17:31:53 crc kubenswrapper[4720]: I1013 17:31:53.834064 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-w5xfl"] Oct 13 17:31:55 crc kubenswrapper[4720]: I1013 17:31:55.174401 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c0351ce-4eed-4d0e-b9a4-eb2746bf396a" path="/var/lib/kubelet/pods/2c0351ce-4eed-4d0e-b9a4-eb2746bf396a/volumes" Oct 13 17:32:45 crc kubenswrapper[4720]: I1013 17:32:45.212399 4720 patch_prober.go:28] interesting pod/machine-config-daemon-htwnl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 17:32:45 crc kubenswrapper[4720]: I1013 17:32:45.212967 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 17:33:15 crc kubenswrapper[4720]: I1013 17:33:15.212973 4720 patch_prober.go:28] interesting pod/machine-config-daemon-htwnl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 17:33:15 crc kubenswrapper[4720]: I1013 17:33:15.214286 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 17:33:15 crc kubenswrapper[4720]: I1013 17:33:15.307002 4720 scope.go:117] "RemoveContainer" containerID="99604ef1ec0076fd63a03010dc24da695520c71f931a1a737353e42a5a594750" Oct 13 17:33:15 crc kubenswrapper[4720]: I1013 17:33:15.344684 4720 scope.go:117] "RemoveContainer" containerID="c6f37dabd8d97e52205949791e387bd853be569cb24db811983503a3abc556ce" Oct 13 17:33:15 crc kubenswrapper[4720]: I1013 17:33:15.371403 4720 scope.go:117] "RemoveContainer" containerID="790f14a9ca3f07f0d025a75e72f0d6ae03ffbd4e2f10dce66c68bc3edca0cb2f" Oct 13 17:33:15 crc kubenswrapper[4720]: I1013 17:33:15.410011 4720 scope.go:117] "RemoveContainer" containerID="4f98436f4e49cbf14734d9595bc80f9b873a4c59276150e787c4ddaebeeb3b41" Oct 13 17:33:20 crc kubenswrapper[4720]: I1013 17:33:20.794386 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-x8zf7"] Oct 13 17:33:20 crc kubenswrapper[4720]: E1013 17:33:20.795309 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c0351ce-4eed-4d0e-b9a4-eb2746bf396a" containerName="registry" Oct 13 17:33:20 crc kubenswrapper[4720]: I1013 17:33:20.795328 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c0351ce-4eed-4d0e-b9a4-eb2746bf396a" containerName="registry" Oct 13 17:33:20 crc kubenswrapper[4720]: I1013 17:33:20.795453 4720 
memory_manager.go:354] "RemoveStaleState removing state" podUID="2c0351ce-4eed-4d0e-b9a4-eb2746bf396a" containerName="registry" Oct 13 17:33:20 crc kubenswrapper[4720]: I1013 17:33:20.795939 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-x8zf7" Oct 13 17:33:20 crc kubenswrapper[4720]: I1013 17:33:20.799078 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Oct 13 17:33:20 crc kubenswrapper[4720]: I1013 17:33:20.799378 4720 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-pntck" Oct 13 17:33:20 crc kubenswrapper[4720]: I1013 17:33:20.800853 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Oct 13 17:33:20 crc kubenswrapper[4720]: I1013 17:33:20.808704 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-hcgbx"] Oct 13 17:33:20 crc kubenswrapper[4720]: I1013 17:33:20.809547 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-hcgbx" Oct 13 17:33:20 crc kubenswrapper[4720]: I1013 17:33:20.811549 4720 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-l4r77" Oct 13 17:33:20 crc kubenswrapper[4720]: I1013 17:33:20.817154 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-x8zf7"] Oct 13 17:33:20 crc kubenswrapper[4720]: I1013 17:33:20.820701 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-hcgbx"] Oct 13 17:33:20 crc kubenswrapper[4720]: I1013 17:33:20.833070 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-mrmx6"] Oct 13 17:33:20 crc kubenswrapper[4720]: I1013 17:33:20.834046 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-mrmx6" Oct 13 17:33:20 crc kubenswrapper[4720]: I1013 17:33:20.838724 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-mrmx6"] Oct 13 17:33:20 crc kubenswrapper[4720]: I1013 17:33:20.843660 4720 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-bphzd" Oct 13 17:33:20 crc kubenswrapper[4720]: I1013 17:33:20.951467 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnn2w\" (UniqueName: \"kubernetes.io/projected/7142c613-395a-40ef-bef1-20ed0b6cdad3-kube-api-access-wnn2w\") pod \"cert-manager-webhook-5655c58dd6-mrmx6\" (UID: \"7142c613-395a-40ef-bef1-20ed0b6cdad3\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-mrmx6" Oct 13 17:33:20 crc kubenswrapper[4720]: I1013 17:33:20.951515 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dg2hx\" (UniqueName: \"kubernetes.io/projected/c0e61a1b-8c01-4c98-a6bd-cff432642c53-kube-api-access-dg2hx\") pod \"cert-manager-cainjector-7f985d654d-x8zf7\" (UID: \"c0e61a1b-8c01-4c98-a6bd-cff432642c53\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-x8zf7" Oct 13 17:33:20 crc kubenswrapper[4720]: I1013 17:33:20.951559 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5rzq\" (UniqueName: \"kubernetes.io/projected/468376ef-c1ab-4db7-9006-0ded29f5c690-kube-api-access-g5rzq\") pod \"cert-manager-5b446d88c5-hcgbx\" (UID: \"468376ef-c1ab-4db7-9006-0ded29f5c690\") " pod="cert-manager/cert-manager-5b446d88c5-hcgbx" Oct 13 17:33:21 crc kubenswrapper[4720]: I1013 17:33:21.052636 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5rzq\" (UniqueName: \"kubernetes.io/projected/468376ef-c1ab-4db7-9006-0ded29f5c690-kube-api-access-g5rzq\") pod \"cert-manager-5b446d88c5-hcgbx\" (UID: \"468376ef-c1ab-4db7-9006-0ded29f5c690\") " pod="cert-manager/cert-manager-5b446d88c5-hcgbx" Oct 13 17:33:21 crc kubenswrapper[4720]: I1013 17:33:21.052904 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnn2w\" (UniqueName: \"kubernetes.io/projected/7142c613-395a-40ef-bef1-20ed0b6cdad3-kube-api-access-wnn2w\") pod \"cert-manager-webhook-5655c58dd6-mrmx6\" (UID: \"7142c613-395a-40ef-bef1-20ed0b6cdad3\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-mrmx6" Oct 13 17:33:21 crc kubenswrapper[4720]: I1013 17:33:21.052993 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dg2hx\" (UniqueName: \"kubernetes.io/projected/c0e61a1b-8c01-4c98-a6bd-cff432642c53-kube-api-access-dg2hx\") pod \"cert-manager-cainjector-7f985d654d-x8zf7\" (UID: \"c0e61a1b-8c01-4c98-a6bd-cff432642c53\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-x8zf7" Oct 13 17:33:21 crc kubenswrapper[4720]: I1013 17:33:21.070394 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnn2w\" (UniqueName: \"kubernetes.io/projected/7142c613-395a-40ef-bef1-20ed0b6cdad3-kube-api-access-wnn2w\") pod \"cert-manager-webhook-5655c58dd6-mrmx6\" (UID: \"7142c613-395a-40ef-bef1-20ed0b6cdad3\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-mrmx6" Oct 13 17:33:21 crc kubenswrapper[4720]: I1013 17:33:21.079601 4720 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-g5rzq\" (UniqueName: \"kubernetes.io/projected/468376ef-c1ab-4db7-9006-0ded29f5c690-kube-api-access-g5rzq\") pod \"cert-manager-5b446d88c5-hcgbx\" (UID: \"468376ef-c1ab-4db7-9006-0ded29f5c690\") " pod="cert-manager/cert-manager-5b446d88c5-hcgbx" Oct 13 17:33:21 crc kubenswrapper[4720]: I1013 17:33:21.084554 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dg2hx\" (UniqueName: \"kubernetes.io/projected/c0e61a1b-8c01-4c98-a6bd-cff432642c53-kube-api-access-dg2hx\") pod \"cert-manager-cainjector-7f985d654d-x8zf7\" (UID: \"c0e61a1b-8c01-4c98-a6bd-cff432642c53\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-x8zf7" Oct 13 17:33:21 crc kubenswrapper[4720]: I1013 17:33:21.128037 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-x8zf7" Oct 13 17:33:21 crc kubenswrapper[4720]: I1013 17:33:21.138131 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-hcgbx" Oct 13 17:33:21 crc kubenswrapper[4720]: I1013 17:33:21.152462 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-mrmx6" Oct 13 17:33:21 crc kubenswrapper[4720]: I1013 17:33:21.358752 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-x8zf7"] Oct 13 17:33:21 crc kubenswrapper[4720]: I1013 17:33:21.370840 4720 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 13 17:33:21 crc kubenswrapper[4720]: I1013 17:33:21.412136 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-x8zf7" event={"ID":"c0e61a1b-8c01-4c98-a6bd-cff432642c53","Type":"ContainerStarted","Data":"4b08f175ecec9d78a46e344e3e1275a4aba5846ff1f36877a432d49c007787dc"} Oct 13 17:33:21 crc kubenswrapper[4720]: I1013 17:33:21.597317 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-mrmx6"] Oct 13 17:33:21 crc kubenswrapper[4720]: W1013 17:33:21.608940 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7142c613_395a_40ef_bef1_20ed0b6cdad3.slice/crio-8f19c0e3d6fc3129d0b0204c3899e3bf25c73aa5c84e083ac80abebfd615723c WatchSource:0}: Error finding container 8f19c0e3d6fc3129d0b0204c3899e3bf25c73aa5c84e083ac80abebfd615723c: Status 404 returned error can't find the container with id 8f19c0e3d6fc3129d0b0204c3899e3bf25c73aa5c84e083ac80abebfd615723c Oct 13 17:33:21 crc kubenswrapper[4720]: I1013 17:33:21.613507 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-hcgbx"] Oct 13 17:33:22 crc kubenswrapper[4720]: I1013 17:33:22.423171 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-mrmx6" event={"ID":"7142c613-395a-40ef-bef1-20ed0b6cdad3","Type":"ContainerStarted","Data":"8f19c0e3d6fc3129d0b0204c3899e3bf25c73aa5c84e083ac80abebfd615723c"} Oct 13 17:33:22 crc kubenswrapper[4720]: I1013 17:33:22.425283 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-hcgbx" event={"ID":"468376ef-c1ab-4db7-9006-0ded29f5c690","Type":"ContainerStarted","Data":"cbb5abfd8958a2abae88a86c51655f8b279ba86ac198edf19700398f6a07288f"} Oct 13 17:33:25 crc kubenswrapper[4720]: I1013 
17:33:25.446408 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-hcgbx" event={"ID":"468376ef-c1ab-4db7-9006-0ded29f5c690","Type":"ContainerStarted","Data":"69eff4d09be3d66700ffdd396e3299f30b11ee6188da10dd3fd926b6b63db182"} Oct 13 17:33:25 crc kubenswrapper[4720]: I1013 17:33:25.453122 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-x8zf7" event={"ID":"c0e61a1b-8c01-4c98-a6bd-cff432642c53","Type":"ContainerStarted","Data":"6fded4f252beb0ecc2b87e3c9e64597ba95f6fd9cf643e4d6acb59a86ac0bc42"} Oct 13 17:33:25 crc kubenswrapper[4720]: I1013 17:33:25.455421 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-mrmx6" event={"ID":"7142c613-395a-40ef-bef1-20ed0b6cdad3","Type":"ContainerStarted","Data":"ca6f9a81db4d6b23c41ee08a605eef73a4e16837f1a54b77cea0585dcccded4a"} Oct 13 17:33:25 crc kubenswrapper[4720]: I1013 17:33:25.455625 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-mrmx6" Oct 13 17:33:25 crc kubenswrapper[4720]: I1013 17:33:25.472077 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-hcgbx" podStartSLOduration=2.645807756 podStartE2EDuration="5.472050033s" podCreationTimestamp="2025-10-13 17:33:20 +0000 UTC" firstStartedPulling="2025-10-13 17:33:21.620887563 +0000 UTC m=+547.078137695" lastFinishedPulling="2025-10-13 17:33:24.4471298 +0000 UTC m=+549.904379972" observedRunningTime="2025-10-13 17:33:25.465768373 +0000 UTC m=+550.923018545" watchObservedRunningTime="2025-10-13 17:33:25.472050033 +0000 UTC m=+550.929300205" Oct 13 17:33:25 crc kubenswrapper[4720]: I1013 17:33:25.500102 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-mrmx6" podStartSLOduration=2.574375849 podStartE2EDuration="5.500073476s" podCreationTimestamp="2025-10-13 17:33:20 +0000 UTC" firstStartedPulling="2025-10-13 17:33:21.612589422 +0000 UTC m=+547.069839554" lastFinishedPulling="2025-10-13 17:33:24.538287009 +0000 UTC m=+549.995537181" observedRunningTime="2025-10-13 17:33:25.490213905 +0000 UTC m=+550.947464047" watchObservedRunningTime="2025-10-13 17:33:25.500073476 +0000 UTC m=+550.957323638" Oct 13 17:33:25 crc kubenswrapper[4720]: I1013 17:33:25.510895 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-x8zf7" podStartSLOduration=2.437754465 podStartE2EDuration="5.51087557s" podCreationTimestamp="2025-10-13 17:33:20 +0000 UTC" firstStartedPulling="2025-10-13 17:33:21.370596389 +0000 UTC m=+546.827846521" lastFinishedPulling="2025-10-13 17:33:24.443717484 +0000 UTC m=+549.900967626" observedRunningTime="2025-10-13 17:33:25.509593658 +0000 UTC m=+550.966843820" watchObservedRunningTime="2025-10-13 17:33:25.51087557 +0000 UTC m=+550.968125742" Oct 13 17:33:31 crc kubenswrapper[4720]: I1013 17:33:31.156077 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-mrmx6" Oct 13 17:33:31 crc kubenswrapper[4720]: I1013 17:33:31.564297 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-pn6lz"] Oct 13 17:33:31 crc kubenswrapper[4720]: I1013 17:33:31.565160 4720 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" podUID="8064812e-b6aa-4f56-81c9-16154c00abad" containerName="ovn-controller" containerID="cri-o://1aefa46ee989416f0e4fd05f6f12031da637a354fadd7f6caef75ddc2c74ddf3" gracePeriod=30 Oct 13 17:33:31 crc kubenswrapper[4720]: I1013 17:33:31.565298 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" podUID="8064812e-b6aa-4f56-81c9-16154c00abad" containerName="kube-rbac-proxy-node" containerID="cri-o://5c28fde07efa17d20ddecd1c8a0da09a6fa09e98a90b14718565c00aef9c0d25" gracePeriod=30 Oct 13 17:33:31 crc kubenswrapper[4720]: I1013 17:33:31.565367 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" podUID="8064812e-b6aa-4f56-81c9-16154c00abad" containerName="ovn-acl-logging" containerID="cri-o://69d511a66a84fb0fa90f426bd4609c2b3ad915571802c6c024e0370d6e1683a3" gracePeriod=30 Oct 13 17:33:31 crc kubenswrapper[4720]: I1013 17:33:31.565365 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" podUID="8064812e-b6aa-4f56-81c9-16154c00abad" containerName="sbdb" containerID="cri-o://34b1fffbebfbd3599bed22fe3fe80129f5b704560c54d09e9ab5e9ea8bba4513" gracePeriod=30 Oct 13 17:33:31 crc kubenswrapper[4720]: I1013 17:33:31.565306 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" podUID="8064812e-b6aa-4f56-81c9-16154c00abad" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://7bbd41d9b0518c8b060b1d01c335b6c9923e7196624fbd49326133faaa2e36ba" gracePeriod=30 Oct 13 17:33:31 crc kubenswrapper[4720]: I1013 17:33:31.565324 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" podUID="8064812e-b6aa-4f56-81c9-16154c00abad" containerName="northd" containerID="cri-o://013dfade47eccae4339fca0d379d914f63502613f87f1bc621113f7008634475" gracePeriod=30 Oct 13 17:33:31 crc kubenswrapper[4720]: I1013 17:33:31.565253 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" podUID="8064812e-b6aa-4f56-81c9-16154c00abad" containerName="nbdb" containerID="cri-o://fd8a4dfb390a1643edb4eb4bd01970eaffd8dab0f748effac9e7c691ef4a3ef8" gracePeriod=30 Oct 13 17:33:31 crc kubenswrapper[4720]: I1013 17:33:31.625959 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" podUID="8064812e-b6aa-4f56-81c9-16154c00abad" containerName="ovnkube-controller" containerID="cri-o://2cba8cf1a0fc25e74ef53b5ad2f9b7078a9a2aada0f97b16b1dc4a11e0104571" gracePeriod=30 Oct 13 17:33:31 crc kubenswrapper[4720]: E1013 17:33:31.818122 4720 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8064812e_b6aa_4f56_81c9_16154c00abad.slice/crio-69d511a66a84fb0fa90f426bd4609c2b3ad915571802c6c024e0370d6e1683a3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8064812e_b6aa_4f56_81c9_16154c00abad.slice/crio-2cba8cf1a0fc25e74ef53b5ad2f9b7078a9a2aada0f97b16b1dc4a11e0104571.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8064812e_b6aa_4f56_81c9_16154c00abad.slice/crio-conmon-34b1fffbebfbd3599bed22fe3fe80129f5b704560c54d09e9ab5e9ea8bba4513.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8064812e_b6aa_4f56_81c9_16154c00abad.slice/crio-conmon-2cba8cf1a0fc25e74ef53b5ad2f9b7078a9a2aada0f97b16b1dc4a11e0104571.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8064812e_b6aa_4f56_81c9_16154c00abad.slice/crio-fd8a4dfb390a1643edb4eb4bd01970eaffd8dab0f748effac9e7c691ef4a3ef8.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8064812e_b6aa_4f56_81c9_16154c00abad.slice/crio-34b1fffbebfbd3599bed22fe3fe80129f5b704560c54d09e9ab5e9ea8bba4513.scope\": RecentStats: unable to find data in memory cache]" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.327696 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pn6lz_8064812e-b6aa-4f56-81c9-16154c00abad/ovnkube-controller/3.log" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.332184 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pn6lz_8064812e-b6aa-4f56-81c9-16154c00abad/ovn-acl-logging/0.log" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.333337 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pn6lz_8064812e-b6aa-4f56-81c9-16154c00abad/ovn-controller/0.log" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.334302 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.420297 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-ln4xf"] Oct 13 17:33:32 crc kubenswrapper[4720]: E1013 17:33:32.420754 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8064812e-b6aa-4f56-81c9-16154c00abad" containerName="ovnkube-controller" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.420784 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="8064812e-b6aa-4f56-81c9-16154c00abad" containerName="ovnkube-controller" Oct 13 17:33:32 crc kubenswrapper[4720]: E1013 17:33:32.420806 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8064812e-b6aa-4f56-81c9-16154c00abad" containerName="kube-rbac-proxy-ovn-metrics" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.420823 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="8064812e-b6aa-4f56-81c9-16154c00abad" containerName="kube-rbac-proxy-ovn-metrics" Oct 13 17:33:32 crc kubenswrapper[4720]: E1013 17:33:32.420844 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8064812e-b6aa-4f56-81c9-16154c00abad" containerName="ovnkube-controller" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.420862 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="8064812e-b6aa-4f56-81c9-16154c00abad" containerName="ovnkube-controller" Oct 13 17:33:32 crc kubenswrapper[4720]: E1013 17:33:32.420888 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8064812e-b6aa-4f56-81c9-16154c00abad" containerName="kubecfg-setup" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.420905 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="8064812e-b6aa-4f56-81c9-16154c00abad" containerName="kubecfg-setup" Oct 13 17:33:32 crc kubenswrapper[4720]: E1013 17:33:32.420927 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8064812e-b6aa-4f56-81c9-16154c00abad" containerName="ovnkube-controller" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.420942 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="8064812e-b6aa-4f56-81c9-16154c00abad" containerName="ovnkube-controller" Oct 13 17:33:32 crc kubenswrapper[4720]: E1013 17:33:32.420962 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8064812e-b6aa-4f56-81c9-16154c00abad" containerName="ovn-controller" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.420979 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="8064812e-b6aa-4f56-81c9-16154c00abad" containerName="ovn-controller" Oct 13 17:33:32 crc kubenswrapper[4720]: E1013 17:33:32.421007 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8064812e-b6aa-4f56-81c9-16154c00abad" containerName="kube-rbac-proxy-node" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.421025 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="8064812e-b6aa-4f56-81c9-16154c00abad" containerName="kube-rbac-proxy-node" Oct 13 17:33:32 crc kubenswrapper[4720]: E1013 17:33:32.421043 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8064812e-b6aa-4f56-81c9-16154c00abad" containerName="sbdb" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.421058 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="8064812e-b6aa-4f56-81c9-16154c00abad" containerName="sbdb" Oct 13 17:33:32 crc kubenswrapper[4720]: E1013 17:33:32.421079 4720 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="8064812e-b6aa-4f56-81c9-16154c00abad" containerName="ovn-acl-logging" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.421094 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="8064812e-b6aa-4f56-81c9-16154c00abad" containerName="ovn-acl-logging" Oct 13 17:33:32 crc kubenswrapper[4720]: E1013 17:33:32.421111 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8064812e-b6aa-4f56-81c9-16154c00abad" containerName="nbdb" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.421126 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="8064812e-b6aa-4f56-81c9-16154c00abad" containerName="nbdb" Oct 13 17:33:32 crc kubenswrapper[4720]: E1013 17:33:32.421149 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8064812e-b6aa-4f56-81c9-16154c00abad" containerName="northd" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.421165 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="8064812e-b6aa-4f56-81c9-16154c00abad" containerName="northd" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.421430 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="8064812e-b6aa-4f56-81c9-16154c00abad" containerName="ovnkube-controller" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.421456 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="8064812e-b6aa-4f56-81c9-16154c00abad" containerName="northd" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.421479 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="8064812e-b6aa-4f56-81c9-16154c00abad" containerName="kube-rbac-proxy-ovn-metrics" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.421503 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="8064812e-b6aa-4f56-81c9-16154c00abad" containerName="ovnkube-controller" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.421528 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="8064812e-b6aa-4f56-81c9-16154c00abad" containerName="ovn-acl-logging" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.421581 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="8064812e-b6aa-4f56-81c9-16154c00abad" containerName="nbdb" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.421601 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="8064812e-b6aa-4f56-81c9-16154c00abad" containerName="kube-rbac-proxy-node" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.421626 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="8064812e-b6aa-4f56-81c9-16154c00abad" containerName="sbdb" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.421649 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="8064812e-b6aa-4f56-81c9-16154c00abad" containerName="ovn-controller" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.421672 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="8064812e-b6aa-4f56-81c9-16154c00abad" containerName="ovnkube-controller" Oct 13 17:33:32 crc kubenswrapper[4720]: E1013 17:33:32.421878 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8064812e-b6aa-4f56-81c9-16154c00abad" containerName="ovnkube-controller" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.421898 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="8064812e-b6aa-4f56-81c9-16154c00abad" containerName="ovnkube-controller" Oct 13 17:33:32 crc kubenswrapper[4720]: E1013 17:33:32.421936 4720 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8064812e-b6aa-4f56-81c9-16154c00abad" containerName="ovnkube-controller" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.421952 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="8064812e-b6aa-4f56-81c9-16154c00abad" containerName="ovnkube-controller" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.422217 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="8064812e-b6aa-4f56-81c9-16154c00abad" containerName="ovnkube-controller" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.422654 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="8064812e-b6aa-4f56-81c9-16154c00abad" containerName="ovnkube-controller" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.426056 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ln4xf" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.509018 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lxmjt_7b45ec2d-5bea-4007-a49f-224a866f93eb/kube-multus/2.log" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.510016 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lxmjt_7b45ec2d-5bea-4007-a49f-224a866f93eb/kube-multus/1.log" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.510064 4720 generic.go:334] "Generic (PLEG): container finished" podID="7b45ec2d-5bea-4007-a49f-224a866f93eb" containerID="8f158c84b32e0700f7fbee0860433b8a5b48b6d18ab5d0d8013224b5fa78ff3a" exitCode=2 Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.510128 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lxmjt" event={"ID":"7b45ec2d-5bea-4007-a49f-224a866f93eb","Type":"ContainerDied","Data":"8f158c84b32e0700f7fbee0860433b8a5b48b6d18ab5d0d8013224b5fa78ff3a"} Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.510166 4720 scope.go:117] "RemoveContainer" containerID="835c6ec8dc7ae3785a23ae45e5c9dc4b3bcc24428ca4c3865a6dd790d5956e74" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.511003 4720 scope.go:117] "RemoveContainer" containerID="8f158c84b32e0700f7fbee0860433b8a5b48b6d18ab5d0d8013224b5fa78ff3a" Oct 13 17:33:32 crc kubenswrapper[4720]: E1013 17:33:32.511621 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-lxmjt_openshift-multus(7b45ec2d-5bea-4007-a49f-224a866f93eb)\"" pod="openshift-multus/multus-lxmjt" podUID="7b45ec2d-5bea-4007-a49f-224a866f93eb" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.513442 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pn6lz_8064812e-b6aa-4f56-81c9-16154c00abad/ovnkube-controller/3.log" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.514035 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8064812e-b6aa-4f56-81c9-16154c00abad-host-cni-netd\") pod \"8064812e-b6aa-4f56-81c9-16154c00abad\" (UID: \"8064812e-b6aa-4f56-81c9-16154c00abad\") " Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.514117 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8064812e-b6aa-4f56-81c9-16154c00abad-host-run-ovn-kubernetes\") pod 
\"8064812e-b6aa-4f56-81c9-16154c00abad\" (UID: \"8064812e-b6aa-4f56-81c9-16154c00abad\") " Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.514180 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8064812e-b6aa-4f56-81c9-16154c00abad-ovnkube-script-lib\") pod \"8064812e-b6aa-4f56-81c9-16154c00abad\" (UID: \"8064812e-b6aa-4f56-81c9-16154c00abad\") " Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.514241 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8064812e-b6aa-4f56-81c9-16154c00abad-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "8064812e-b6aa-4f56-81c9-16154c00abad" (UID: "8064812e-b6aa-4f56-81c9-16154c00abad"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.514255 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8064812e-b6aa-4f56-81c9-16154c00abad-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "8064812e-b6aa-4f56-81c9-16154c00abad" (UID: "8064812e-b6aa-4f56-81c9-16154c00abad"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.514273 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8064812e-b6aa-4f56-81c9-16154c00abad-host-slash\") pod \"8064812e-b6aa-4f56-81c9-16154c00abad\" (UID: \"8064812e-b6aa-4f56-81c9-16154c00abad\") " Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.514327 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8064812e-b6aa-4f56-81c9-16154c00abad-node-log\") pod \"8064812e-b6aa-4f56-81c9-16154c00abad\" (UID: \"8064812e-b6aa-4f56-81c9-16154c00abad\") " Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.514369 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8064812e-b6aa-4f56-81c9-16154c00abad-host-var-lib-cni-networks-ovn-kubernetes\") pod \"8064812e-b6aa-4f56-81c9-16154c00abad\" (UID: \"8064812e-b6aa-4f56-81c9-16154c00abad\") " Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.514400 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8064812e-b6aa-4f56-81c9-16154c00abad-host-slash" (OuterVolumeSpecName: "host-slash") pod "8064812e-b6aa-4f56-81c9-16154c00abad" (UID: "8064812e-b6aa-4f56-81c9-16154c00abad"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.514413 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8064812e-b6aa-4f56-81c9-16154c00abad-node-log" (OuterVolumeSpecName: "node-log") pod "8064812e-b6aa-4f56-81c9-16154c00abad" (UID: "8064812e-b6aa-4f56-81c9-16154c00abad"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.514409 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8064812e-b6aa-4f56-81c9-16154c00abad-log-socket\") pod \"8064812e-b6aa-4f56-81c9-16154c00abad\" (UID: \"8064812e-b6aa-4f56-81c9-16154c00abad\") " Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.514452 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8064812e-b6aa-4f56-81c9-16154c00abad-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "8064812e-b6aa-4f56-81c9-16154c00abad" (UID: "8064812e-b6aa-4f56-81c9-16154c00abad"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.514475 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8064812e-b6aa-4f56-81c9-16154c00abad-log-socket" (OuterVolumeSpecName: "log-socket") pod "8064812e-b6aa-4f56-81c9-16154c00abad" (UID: "8064812e-b6aa-4f56-81c9-16154c00abad"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.514515 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8064812e-b6aa-4f56-81c9-16154c00abad-run-ovn\") pod \"8064812e-b6aa-4f56-81c9-16154c00abad\" (UID: \"8064812e-b6aa-4f56-81c9-16154c00abad\") " Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.514591 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rm5x2\" (UniqueName: \"kubernetes.io/projected/8064812e-b6aa-4f56-81c9-16154c00abad-kube-api-access-rm5x2\") pod \"8064812e-b6aa-4f56-81c9-16154c00abad\" (UID: \"8064812e-b6aa-4f56-81c9-16154c00abad\") " Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.514626 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8064812e-b6aa-4f56-81c9-16154c00abad-var-lib-openvswitch\") pod \"8064812e-b6aa-4f56-81c9-16154c00abad\" (UID: \"8064812e-b6aa-4f56-81c9-16154c00abad\") " Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.514645 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8064812e-b6aa-4f56-81c9-16154c00abad-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "8064812e-b6aa-4f56-81c9-16154c00abad" (UID: "8064812e-b6aa-4f56-81c9-16154c00abad"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.514655 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8064812e-b6aa-4f56-81c9-16154c00abad-host-cni-bin\") pod \"8064812e-b6aa-4f56-81c9-16154c00abad\" (UID: \"8064812e-b6aa-4f56-81c9-16154c00abad\") " Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.514690 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8064812e-b6aa-4f56-81c9-16154c00abad-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "8064812e-b6aa-4f56-81c9-16154c00abad" (UID: "8064812e-b6aa-4f56-81c9-16154c00abad"). 
InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.514691 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8064812e-b6aa-4f56-81c9-16154c00abad-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "8064812e-b6aa-4f56-81c9-16154c00abad" (UID: "8064812e-b6aa-4f56-81c9-16154c00abad"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.514740 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8064812e-b6aa-4f56-81c9-16154c00abad-run-systemd\") pod \"8064812e-b6aa-4f56-81c9-16154c00abad\" (UID: \"8064812e-b6aa-4f56-81c9-16154c00abad\") " Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.514752 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8064812e-b6aa-4f56-81c9-16154c00abad-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "8064812e-b6aa-4f56-81c9-16154c00abad" (UID: "8064812e-b6aa-4f56-81c9-16154c00abad"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.514937 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8064812e-b6aa-4f56-81c9-16154c00abad-ovn-node-metrics-cert\") pod \"8064812e-b6aa-4f56-81c9-16154c00abad\" (UID: \"8064812e-b6aa-4f56-81c9-16154c00abad\") " Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.514989 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8064812e-b6aa-4f56-81c9-16154c00abad-env-overrides\") pod \"8064812e-b6aa-4f56-81c9-16154c00abad\" (UID: \"8064812e-b6aa-4f56-81c9-16154c00abad\") " Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.515023 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8064812e-b6aa-4f56-81c9-16154c00abad-etc-openvswitch\") pod \"8064812e-b6aa-4f56-81c9-16154c00abad\" (UID: \"8064812e-b6aa-4f56-81c9-16154c00abad\") " Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.515054 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8064812e-b6aa-4f56-81c9-16154c00abad-run-openvswitch\") pod \"8064812e-b6aa-4f56-81c9-16154c00abad\" (UID: \"8064812e-b6aa-4f56-81c9-16154c00abad\") " Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.515096 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8064812e-b6aa-4f56-81c9-16154c00abad-host-kubelet\") pod \"8064812e-b6aa-4f56-81c9-16154c00abad\" (UID: \"8064812e-b6aa-4f56-81c9-16154c00abad\") " Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.515170 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8064812e-b6aa-4f56-81c9-16154c00abad-ovnkube-config\") pod \"8064812e-b6aa-4f56-81c9-16154c00abad\" (UID: \"8064812e-b6aa-4f56-81c9-16154c00abad\") " Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.515244 4720 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8064812e-b6aa-4f56-81c9-16154c00abad-systemd-units\") pod \"8064812e-b6aa-4f56-81c9-16154c00abad\" (UID: \"8064812e-b6aa-4f56-81c9-16154c00abad\") " Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.515284 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8064812e-b6aa-4f56-81c9-16154c00abad-host-run-netns\") pod \"8064812e-b6aa-4f56-81c9-16154c00abad\" (UID: \"8064812e-b6aa-4f56-81c9-16154c00abad\") " Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.515600 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8064812e-b6aa-4f56-81c9-16154c00abad-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "8064812e-b6aa-4f56-81c9-16154c00abad" (UID: "8064812e-b6aa-4f56-81c9-16154c00abad"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.515700 4720 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8064812e-b6aa-4f56-81c9-16154c00abad-host-cni-netd\") on node \"crc\" DevicePath \"\"" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.515737 4720 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8064812e-b6aa-4f56-81c9-16154c00abad-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.515764 4720 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8064812e-b6aa-4f56-81c9-16154c00abad-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.515789 4720 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8064812e-b6aa-4f56-81c9-16154c00abad-host-slash\") on node \"crc\" DevicePath \"\"" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.515814 4720 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8064812e-b6aa-4f56-81c9-16154c00abad-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.515838 4720 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8064812e-b6aa-4f56-81c9-16154c00abad-log-socket\") on node \"crc\" DevicePath \"\"" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.515861 4720 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8064812e-b6aa-4f56-81c9-16154c00abad-node-log\") on node \"crc\" DevicePath \"\"" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.515882 4720 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8064812e-b6aa-4f56-81c9-16154c00abad-run-ovn\") on node \"crc\" DevicePath \"\"" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.515905 4720 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8064812e-b6aa-4f56-81c9-16154c00abad-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Oct 13 17:33:32 
crc kubenswrapper[4720]: I1013 17:33:32.515931 4720 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8064812e-b6aa-4f56-81c9-16154c00abad-host-cni-bin\") on node \"crc\" DevicePath \"\"" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.515866 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8064812e-b6aa-4f56-81c9-16154c00abad-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "8064812e-b6aa-4f56-81c9-16154c00abad" (UID: "8064812e-b6aa-4f56-81c9-16154c00abad"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.515943 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8064812e-b6aa-4f56-81c9-16154c00abad-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "8064812e-b6aa-4f56-81c9-16154c00abad" (UID: "8064812e-b6aa-4f56-81c9-16154c00abad"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.515960 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8064812e-b6aa-4f56-81c9-16154c00abad-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "8064812e-b6aa-4f56-81c9-16154c00abad" (UID: "8064812e-b6aa-4f56-81c9-16154c00abad"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.516010 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8064812e-b6aa-4f56-81c9-16154c00abad-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "8064812e-b6aa-4f56-81c9-16154c00abad" (UID: "8064812e-b6aa-4f56-81c9-16154c00abad"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.516600 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8064812e-b6aa-4f56-81c9-16154c00abad-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "8064812e-b6aa-4f56-81c9-16154c00abad" (UID: "8064812e-b6aa-4f56-81c9-16154c00abad"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.516649 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8064812e-b6aa-4f56-81c9-16154c00abad-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "8064812e-b6aa-4f56-81c9-16154c00abad" (UID: "8064812e-b6aa-4f56-81c9-16154c00abad"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.519971 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pn6lz_8064812e-b6aa-4f56-81c9-16154c00abad/ovn-acl-logging/0.log" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.521301 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pn6lz_8064812e-b6aa-4f56-81c9-16154c00abad/ovn-controller/0.log" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.521977 4720 generic.go:334] "Generic (PLEG): container finished" podID="8064812e-b6aa-4f56-81c9-16154c00abad" containerID="2cba8cf1a0fc25e74ef53b5ad2f9b7078a9a2aada0f97b16b1dc4a11e0104571" exitCode=0 Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.522031 4720 generic.go:334] "Generic (PLEG): container finished" podID="8064812e-b6aa-4f56-81c9-16154c00abad" containerID="34b1fffbebfbd3599bed22fe3fe80129f5b704560c54d09e9ab5e9ea8bba4513" exitCode=0 Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.522054 4720 generic.go:334] "Generic (PLEG): container finished" podID="8064812e-b6aa-4f56-81c9-16154c00abad" containerID="fd8a4dfb390a1643edb4eb4bd01970eaffd8dab0f748effac9e7c691ef4a3ef8" exitCode=0 Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.522047 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" event={"ID":"8064812e-b6aa-4f56-81c9-16154c00abad","Type":"ContainerDied","Data":"2cba8cf1a0fc25e74ef53b5ad2f9b7078a9a2aada0f97b16b1dc4a11e0104571"} Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.522128 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" event={"ID":"8064812e-b6aa-4f56-81c9-16154c00abad","Type":"ContainerDied","Data":"34b1fffbebfbd3599bed22fe3fe80129f5b704560c54d09e9ab5e9ea8bba4513"} Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.522075 4720 generic.go:334] "Generic (PLEG): container finished" podID="8064812e-b6aa-4f56-81c9-16154c00abad" containerID="013dfade47eccae4339fca0d379d914f63502613f87f1bc621113f7008634475" exitCode=0 Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.522154 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" event={"ID":"8064812e-b6aa-4f56-81c9-16154c00abad","Type":"ContainerDied","Data":"fd8a4dfb390a1643edb4eb4bd01970eaffd8dab0f748effac9e7c691ef4a3ef8"} Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.522176 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" event={"ID":"8064812e-b6aa-4f56-81c9-16154c00abad","Type":"ContainerDied","Data":"013dfade47eccae4339fca0d379d914f63502613f87f1bc621113f7008634475"} Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.522019 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8064812e-b6aa-4f56-81c9-16154c00abad-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "8064812e-b6aa-4f56-81c9-16154c00abad" (UID: "8064812e-b6aa-4f56-81c9-16154c00abad"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.522181 4720 generic.go:334] "Generic (PLEG): container finished" podID="8064812e-b6aa-4f56-81c9-16154c00abad" containerID="7bbd41d9b0518c8b060b1d01c335b6c9923e7196624fbd49326133faaa2e36ba" exitCode=0 Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.522249 4720 generic.go:334] "Generic (PLEG): container finished" podID="8064812e-b6aa-4f56-81c9-16154c00abad" containerID="5c28fde07efa17d20ddecd1c8a0da09a6fa09e98a90b14718565c00aef9c0d25" exitCode=0 Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.522267 4720 generic.go:334] "Generic (PLEG): container finished" podID="8064812e-b6aa-4f56-81c9-16154c00abad" containerID="69d511a66a84fb0fa90f426bd4609c2b3ad915571802c6c024e0370d6e1683a3" exitCode=143 Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.522281 4720 generic.go:334] "Generic (PLEG): container finished" podID="8064812e-b6aa-4f56-81c9-16154c00abad" containerID="1aefa46ee989416f0e4fd05f6f12031da637a354fadd7f6caef75ddc2c74ddf3" exitCode=143 Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.522139 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.522416 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" event={"ID":"8064812e-b6aa-4f56-81c9-16154c00abad","Type":"ContainerDied","Data":"7bbd41d9b0518c8b060b1d01c335b6c9923e7196624fbd49326133faaa2e36ba"} Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.522467 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" event={"ID":"8064812e-b6aa-4f56-81c9-16154c00abad","Type":"ContainerDied","Data":"5c28fde07efa17d20ddecd1c8a0da09a6fa09e98a90b14718565c00aef9c0d25"} Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.522498 4720 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2cba8cf1a0fc25e74ef53b5ad2f9b7078a9a2aada0f97b16b1dc4a11e0104571"} Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.522522 4720 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c62dd5ae9eb692f394556f2d0b70bf14a7c99fe714ad7084a5d08b324a4435de"} Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.522538 4720 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"34b1fffbebfbd3599bed22fe3fe80129f5b704560c54d09e9ab5e9ea8bba4513"} Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.522554 4720 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fd8a4dfb390a1643edb4eb4bd01970eaffd8dab0f748effac9e7c691ef4a3ef8"} Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.522569 4720 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"013dfade47eccae4339fca0d379d914f63502613f87f1bc621113f7008634475"} Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.522584 4720 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7bbd41d9b0518c8b060b1d01c335b6c9923e7196624fbd49326133faaa2e36ba"} Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.522599 4720 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"5c28fde07efa17d20ddecd1c8a0da09a6fa09e98a90b14718565c00aef9c0d25"} Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.522614 4720 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"69d511a66a84fb0fa90f426bd4609c2b3ad915571802c6c024e0370d6e1683a3"} Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.522629 4720 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1aefa46ee989416f0e4fd05f6f12031da637a354fadd7f6caef75ddc2c74ddf3"} Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.522644 4720 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a"} Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.522666 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" event={"ID":"8064812e-b6aa-4f56-81c9-16154c00abad","Type":"ContainerDied","Data":"69d511a66a84fb0fa90f426bd4609c2b3ad915571802c6c024e0370d6e1683a3"} Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.522688 4720 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2cba8cf1a0fc25e74ef53b5ad2f9b7078a9a2aada0f97b16b1dc4a11e0104571"} Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.522705 4720 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c62dd5ae9eb692f394556f2d0b70bf14a7c99fe714ad7084a5d08b324a4435de"} Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.522720 4720 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"34b1fffbebfbd3599bed22fe3fe80129f5b704560c54d09e9ab5e9ea8bba4513"} Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.522734 4720 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fd8a4dfb390a1643edb4eb4bd01970eaffd8dab0f748effac9e7c691ef4a3ef8"} Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.522749 4720 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"013dfade47eccae4339fca0d379d914f63502613f87f1bc621113f7008634475"} Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.522763 4720 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7bbd41d9b0518c8b060b1d01c335b6c9923e7196624fbd49326133faaa2e36ba"} Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.522781 4720 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5c28fde07efa17d20ddecd1c8a0da09a6fa09e98a90b14718565c00aef9c0d25"} Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.522796 4720 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"69d511a66a84fb0fa90f426bd4609c2b3ad915571802c6c024e0370d6e1683a3"} Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.522810 4720 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1aefa46ee989416f0e4fd05f6f12031da637a354fadd7f6caef75ddc2c74ddf3"} Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.522827 4720 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a"} Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.522846 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" event={"ID":"8064812e-b6aa-4f56-81c9-16154c00abad","Type":"ContainerDied","Data":"1aefa46ee989416f0e4fd05f6f12031da637a354fadd7f6caef75ddc2c74ddf3"} Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.522869 4720 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2cba8cf1a0fc25e74ef53b5ad2f9b7078a9a2aada0f97b16b1dc4a11e0104571"} Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.522886 4720 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c62dd5ae9eb692f394556f2d0b70bf14a7c99fe714ad7084a5d08b324a4435de"} Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.522900 4720 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"34b1fffbebfbd3599bed22fe3fe80129f5b704560c54d09e9ab5e9ea8bba4513"} Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.522916 4720 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fd8a4dfb390a1643edb4eb4bd01970eaffd8dab0f748effac9e7c691ef4a3ef8"} Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.522932 4720 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"013dfade47eccae4339fca0d379d914f63502613f87f1bc621113f7008634475"} Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.522947 4720 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7bbd41d9b0518c8b060b1d01c335b6c9923e7196624fbd49326133faaa2e36ba"} Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.522962 4720 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5c28fde07efa17d20ddecd1c8a0da09a6fa09e98a90b14718565c00aef9c0d25"} Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.522976 4720 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"69d511a66a84fb0fa90f426bd4609c2b3ad915571802c6c024e0370d6e1683a3"} Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.522991 4720 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1aefa46ee989416f0e4fd05f6f12031da637a354fadd7f6caef75ddc2c74ddf3"} Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.523005 4720 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a"} Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.523029 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pn6lz" event={"ID":"8064812e-b6aa-4f56-81c9-16154c00abad","Type":"ContainerDied","Data":"54e24a19ce5ce94b646652618294cdf42d51c0f94f5972c9d9f4274575ff8d6d"} Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.523051 4720 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2cba8cf1a0fc25e74ef53b5ad2f9b7078a9a2aada0f97b16b1dc4a11e0104571"} Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.523067 4720 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c62dd5ae9eb692f394556f2d0b70bf14a7c99fe714ad7084a5d08b324a4435de"} Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.523082 4720 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"34b1fffbebfbd3599bed22fe3fe80129f5b704560c54d09e9ab5e9ea8bba4513"} Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.523096 4720 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fd8a4dfb390a1643edb4eb4bd01970eaffd8dab0f748effac9e7c691ef4a3ef8"} Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.523110 4720 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"013dfade47eccae4339fca0d379d914f63502613f87f1bc621113f7008634475"} Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.523125 4720 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7bbd41d9b0518c8b060b1d01c335b6c9923e7196624fbd49326133faaa2e36ba"} Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.523139 4720 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5c28fde07efa17d20ddecd1c8a0da09a6fa09e98a90b14718565c00aef9c0d25"} Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.523153 4720 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"69d511a66a84fb0fa90f426bd4609c2b3ad915571802c6c024e0370d6e1683a3"} Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.523168 4720 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1aefa46ee989416f0e4fd05f6f12031da637a354fadd7f6caef75ddc2c74ddf3"} Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.523182 4720 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a"} Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.526235 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8064812e-b6aa-4f56-81c9-16154c00abad-kube-api-access-rm5x2" (OuterVolumeSpecName: "kube-api-access-rm5x2") pod "8064812e-b6aa-4f56-81c9-16154c00abad" (UID: "8064812e-b6aa-4f56-81c9-16154c00abad"). InnerVolumeSpecName "kube-api-access-rm5x2". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.540610 4720 scope.go:117] "RemoveContainer" containerID="2cba8cf1a0fc25e74ef53b5ad2f9b7078a9a2aada0f97b16b1dc4a11e0104571" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.555431 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8064812e-b6aa-4f56-81c9-16154c00abad-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "8064812e-b6aa-4f56-81c9-16154c00abad" (UID: "8064812e-b6aa-4f56-81c9-16154c00abad"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.576651 4720 scope.go:117] "RemoveContainer" containerID="c62dd5ae9eb692f394556f2d0b70bf14a7c99fe714ad7084a5d08b324a4435de" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.598387 4720 scope.go:117] "RemoveContainer" containerID="34b1fffbebfbd3599bed22fe3fe80129f5b704560c54d09e9ab5e9ea8bba4513" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.608913 4720 scope.go:117] "RemoveContainer" containerID="fd8a4dfb390a1643edb4eb4bd01970eaffd8dab0f748effac9e7c691ef4a3ef8" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.616802 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/18b8d3b1-b090-4386-b2de-30893f390c15-log-socket\") pod \"ovnkube-node-ln4xf\" (UID: \"18b8d3b1-b090-4386-b2de-30893f390c15\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln4xf" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.616851 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/18b8d3b1-b090-4386-b2de-30893f390c15-run-openvswitch\") pod \"ovnkube-node-ln4xf\" (UID: \"18b8d3b1-b090-4386-b2de-30893f390c15\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln4xf" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.616882 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/18b8d3b1-b090-4386-b2de-30893f390c15-ovnkube-script-lib\") pod \"ovnkube-node-ln4xf\" (UID: \"18b8d3b1-b090-4386-b2de-30893f390c15\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln4xf" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.616900 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/18b8d3b1-b090-4386-b2de-30893f390c15-var-lib-openvswitch\") pod \"ovnkube-node-ln4xf\" (UID: \"18b8d3b1-b090-4386-b2de-30893f390c15\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln4xf" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.616918 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/18b8d3b1-b090-4386-b2de-30893f390c15-run-ovn\") pod \"ovnkube-node-ln4xf\" (UID: \"18b8d3b1-b090-4386-b2de-30893f390c15\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln4xf" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.616932 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/18b8d3b1-b090-4386-b2de-30893f390c15-host-run-netns\") pod \"ovnkube-node-ln4xf\" (UID: \"18b8d3b1-b090-4386-b2de-30893f390c15\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln4xf" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.616945 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/18b8d3b1-b090-4386-b2de-30893f390c15-host-cni-netd\") pod \"ovnkube-node-ln4xf\" (UID: \"18b8d3b1-b090-4386-b2de-30893f390c15\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln4xf" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.616963 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/18b8d3b1-b090-4386-b2de-30893f390c15-ovnkube-config\") pod \"ovnkube-node-ln4xf\" (UID: \"18b8d3b1-b090-4386-b2de-30893f390c15\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln4xf" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.616979 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/18b8d3b1-b090-4386-b2de-30893f390c15-env-overrides\") pod \"ovnkube-node-ln4xf\" (UID: \"18b8d3b1-b090-4386-b2de-30893f390c15\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln4xf" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.617008 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/18b8d3b1-b090-4386-b2de-30893f390c15-systemd-units\") pod \"ovnkube-node-ln4xf\" (UID: \"18b8d3b1-b090-4386-b2de-30893f390c15\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln4xf" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.617023 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/18b8d3b1-b090-4386-b2de-30893f390c15-etc-openvswitch\") pod \"ovnkube-node-ln4xf\" (UID: \"18b8d3b1-b090-4386-b2de-30893f390c15\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln4xf" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.617037 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/18b8d3b1-b090-4386-b2de-30893f390c15-host-kubelet\") pod \"ovnkube-node-ln4xf\" (UID: \"18b8d3b1-b090-4386-b2de-30893f390c15\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln4xf" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.617056 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ppnf\" (UniqueName: \"kubernetes.io/projected/18b8d3b1-b090-4386-b2de-30893f390c15-kube-api-access-6ppnf\") pod \"ovnkube-node-ln4xf\" (UID: \"18b8d3b1-b090-4386-b2de-30893f390c15\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln4xf" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.617072 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/18b8d3b1-b090-4386-b2de-30893f390c15-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ln4xf\" (UID: \"18b8d3b1-b090-4386-b2de-30893f390c15\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln4xf" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.617089 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/18b8d3b1-b090-4386-b2de-30893f390c15-host-cni-bin\") pod \"ovnkube-node-ln4xf\" (UID: \"18b8d3b1-b090-4386-b2de-30893f390c15\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln4xf" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.617103 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/18b8d3b1-b090-4386-b2de-30893f390c15-host-run-ovn-kubernetes\") pod \"ovnkube-node-ln4xf\" (UID: \"18b8d3b1-b090-4386-b2de-30893f390c15\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-ln4xf" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.617117 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/18b8d3b1-b090-4386-b2de-30893f390c15-ovn-node-metrics-cert\") pod \"ovnkube-node-ln4xf\" (UID: \"18b8d3b1-b090-4386-b2de-30893f390c15\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln4xf" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.617135 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/18b8d3b1-b090-4386-b2de-30893f390c15-host-slash\") pod \"ovnkube-node-ln4xf\" (UID: \"18b8d3b1-b090-4386-b2de-30893f390c15\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln4xf" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.617163 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/18b8d3b1-b090-4386-b2de-30893f390c15-node-log\") pod \"ovnkube-node-ln4xf\" (UID: \"18b8d3b1-b090-4386-b2de-30893f390c15\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln4xf" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.617202 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/18b8d3b1-b090-4386-b2de-30893f390c15-run-systemd\") pod \"ovnkube-node-ln4xf\" (UID: \"18b8d3b1-b090-4386-b2de-30893f390c15\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln4xf" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.617237 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rm5x2\" (UniqueName: \"kubernetes.io/projected/8064812e-b6aa-4f56-81c9-16154c00abad-kube-api-access-rm5x2\") on node \"crc\" DevicePath \"\"" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.617247 4720 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8064812e-b6aa-4f56-81c9-16154c00abad-run-systemd\") on node \"crc\" DevicePath \"\"" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.617256 4720 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8064812e-b6aa-4f56-81c9-16154c00abad-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.617265 4720 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8064812e-b6aa-4f56-81c9-16154c00abad-env-overrides\") on node \"crc\" DevicePath \"\"" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.617273 4720 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8064812e-b6aa-4f56-81c9-16154c00abad-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.617281 4720 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8064812e-b6aa-4f56-81c9-16154c00abad-run-openvswitch\") on node \"crc\" DevicePath \"\"" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.617290 4720 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8064812e-b6aa-4f56-81c9-16154c00abad-host-kubelet\") on node \"crc\" DevicePath \"\"" Oct 13 
17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.617298 4720 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8064812e-b6aa-4f56-81c9-16154c00abad-ovnkube-config\") on node \"crc\" DevicePath \"\"" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.617306 4720 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8064812e-b6aa-4f56-81c9-16154c00abad-systemd-units\") on node \"crc\" DevicePath \"\"" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.617314 4720 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8064812e-b6aa-4f56-81c9-16154c00abad-host-run-netns\") on node \"crc\" DevicePath \"\"" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.622336 4720 scope.go:117] "RemoveContainer" containerID="013dfade47eccae4339fca0d379d914f63502613f87f1bc621113f7008634475" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.633928 4720 scope.go:117] "RemoveContainer" containerID="7bbd41d9b0518c8b060b1d01c335b6c9923e7196624fbd49326133faaa2e36ba" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.647681 4720 scope.go:117] "RemoveContainer" containerID="5c28fde07efa17d20ddecd1c8a0da09a6fa09e98a90b14718565c00aef9c0d25" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.666365 4720 scope.go:117] "RemoveContainer" containerID="69d511a66a84fb0fa90f426bd4609c2b3ad915571802c6c024e0370d6e1683a3" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.684494 4720 scope.go:117] "RemoveContainer" containerID="1aefa46ee989416f0e4fd05f6f12031da637a354fadd7f6caef75ddc2c74ddf3" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.713533 4720 scope.go:117] "RemoveContainer" containerID="be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.718801 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/18b8d3b1-b090-4386-b2de-30893f390c15-host-slash\") pod \"ovnkube-node-ln4xf\" (UID: \"18b8d3b1-b090-4386-b2de-30893f390c15\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln4xf" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.718885 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/18b8d3b1-b090-4386-b2de-30893f390c15-node-log\") pod \"ovnkube-node-ln4xf\" (UID: \"18b8d3b1-b090-4386-b2de-30893f390c15\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln4xf" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.718944 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/18b8d3b1-b090-4386-b2de-30893f390c15-run-systemd\") pod \"ovnkube-node-ln4xf\" (UID: \"18b8d3b1-b090-4386-b2de-30893f390c15\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln4xf" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.718974 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/18b8d3b1-b090-4386-b2de-30893f390c15-host-slash\") pod \"ovnkube-node-ln4xf\" (UID: \"18b8d3b1-b090-4386-b2de-30893f390c15\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln4xf" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.718992 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/18b8d3b1-b090-4386-b2de-30893f390c15-log-socket\") pod \"ovnkube-node-ln4xf\" (UID: \"18b8d3b1-b090-4386-b2de-30893f390c15\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln4xf" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.719046 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/18b8d3b1-b090-4386-b2de-30893f390c15-log-socket\") pod \"ovnkube-node-ln4xf\" (UID: \"18b8d3b1-b090-4386-b2de-30893f390c15\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln4xf" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.719079 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/18b8d3b1-b090-4386-b2de-30893f390c15-run-systemd\") pod \"ovnkube-node-ln4xf\" (UID: \"18b8d3b1-b090-4386-b2de-30893f390c15\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln4xf" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.719136 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/18b8d3b1-b090-4386-b2de-30893f390c15-node-log\") pod \"ovnkube-node-ln4xf\" (UID: \"18b8d3b1-b090-4386-b2de-30893f390c15\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln4xf" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.720001 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/18b8d3b1-b090-4386-b2de-30893f390c15-run-openvswitch\") pod \"ovnkube-node-ln4xf\" (UID: \"18b8d3b1-b090-4386-b2de-30893f390c15\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln4xf" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.720045 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/18b8d3b1-b090-4386-b2de-30893f390c15-ovnkube-script-lib\") pod \"ovnkube-node-ln4xf\" (UID: \"18b8d3b1-b090-4386-b2de-30893f390c15\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln4xf" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.720073 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/18b8d3b1-b090-4386-b2de-30893f390c15-var-lib-openvswitch\") pod \"ovnkube-node-ln4xf\" (UID: \"18b8d3b1-b090-4386-b2de-30893f390c15\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln4xf" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.720093 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/18b8d3b1-b090-4386-b2de-30893f390c15-host-run-netns\") pod \"ovnkube-node-ln4xf\" (UID: \"18b8d3b1-b090-4386-b2de-30893f390c15\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln4xf" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.720115 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/18b8d3b1-b090-4386-b2de-30893f390c15-run-ovn\") pod \"ovnkube-node-ln4xf\" (UID: \"18b8d3b1-b090-4386-b2de-30893f390c15\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln4xf" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.720141 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/18b8d3b1-b090-4386-b2de-30893f390c15-host-cni-netd\") pod \"ovnkube-node-ln4xf\" (UID: 
\"18b8d3b1-b090-4386-b2de-30893f390c15\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln4xf" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.720181 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/18b8d3b1-b090-4386-b2de-30893f390c15-ovnkube-config\") pod \"ovnkube-node-ln4xf\" (UID: \"18b8d3b1-b090-4386-b2de-30893f390c15\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln4xf" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.720226 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/18b8d3b1-b090-4386-b2de-30893f390c15-env-overrides\") pod \"ovnkube-node-ln4xf\" (UID: \"18b8d3b1-b090-4386-b2de-30893f390c15\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln4xf" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.720277 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/18b8d3b1-b090-4386-b2de-30893f390c15-host-run-netns\") pod \"ovnkube-node-ln4xf\" (UID: \"18b8d3b1-b090-4386-b2de-30893f390c15\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln4xf" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.720288 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/18b8d3b1-b090-4386-b2de-30893f390c15-systemd-units\") pod \"ovnkube-node-ln4xf\" (UID: \"18b8d3b1-b090-4386-b2de-30893f390c15\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln4xf" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.720325 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/18b8d3b1-b090-4386-b2de-30893f390c15-systemd-units\") pod \"ovnkube-node-ln4xf\" (UID: \"18b8d3b1-b090-4386-b2de-30893f390c15\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln4xf" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.720360 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/18b8d3b1-b090-4386-b2de-30893f390c15-run-ovn\") pod \"ovnkube-node-ln4xf\" (UID: \"18b8d3b1-b090-4386-b2de-30893f390c15\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln4xf" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.720377 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/18b8d3b1-b090-4386-b2de-30893f390c15-etc-openvswitch\") pod \"ovnkube-node-ln4xf\" (UID: \"18b8d3b1-b090-4386-b2de-30893f390c15\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln4xf" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.720391 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/18b8d3b1-b090-4386-b2de-30893f390c15-host-cni-netd\") pod \"ovnkube-node-ln4xf\" (UID: \"18b8d3b1-b090-4386-b2de-30893f390c15\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln4xf" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.720410 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/18b8d3b1-b090-4386-b2de-30893f390c15-host-kubelet\") pod \"ovnkube-node-ln4xf\" (UID: \"18b8d3b1-b090-4386-b2de-30893f390c15\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln4xf" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 
17:33:32.720463 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ppnf\" (UniqueName: \"kubernetes.io/projected/18b8d3b1-b090-4386-b2de-30893f390c15-kube-api-access-6ppnf\") pod \"ovnkube-node-ln4xf\" (UID: \"18b8d3b1-b090-4386-b2de-30893f390c15\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln4xf" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.720496 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/18b8d3b1-b090-4386-b2de-30893f390c15-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ln4xf\" (UID: \"18b8d3b1-b090-4386-b2de-30893f390c15\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln4xf" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.720522 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/18b8d3b1-b090-4386-b2de-30893f390c15-host-cni-bin\") pod \"ovnkube-node-ln4xf\" (UID: \"18b8d3b1-b090-4386-b2de-30893f390c15\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln4xf" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.720553 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/18b8d3b1-b090-4386-b2de-30893f390c15-host-run-ovn-kubernetes\") pod \"ovnkube-node-ln4xf\" (UID: \"18b8d3b1-b090-4386-b2de-30893f390c15\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln4xf" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.720578 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/18b8d3b1-b090-4386-b2de-30893f390c15-ovn-node-metrics-cert\") pod \"ovnkube-node-ln4xf\" (UID: \"18b8d3b1-b090-4386-b2de-30893f390c15\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln4xf" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.720671 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/18b8d3b1-b090-4386-b2de-30893f390c15-etc-openvswitch\") pod \"ovnkube-node-ln4xf\" (UID: \"18b8d3b1-b090-4386-b2de-30893f390c15\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln4xf" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.720801 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/18b8d3b1-b090-4386-b2de-30893f390c15-host-cni-bin\") pod \"ovnkube-node-ln4xf\" (UID: \"18b8d3b1-b090-4386-b2de-30893f390c15\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln4xf" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.720823 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/18b8d3b1-b090-4386-b2de-30893f390c15-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ln4xf\" (UID: \"18b8d3b1-b090-4386-b2de-30893f390c15\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln4xf" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.720866 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/18b8d3b1-b090-4386-b2de-30893f390c15-host-kubelet\") pod \"ovnkube-node-ln4xf\" (UID: \"18b8d3b1-b090-4386-b2de-30893f390c15\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln4xf" Oct 13 17:33:32 crc kubenswrapper[4720]: 
I1013 17:33:32.720897 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/18b8d3b1-b090-4386-b2de-30893f390c15-host-run-ovn-kubernetes\") pod \"ovnkube-node-ln4xf\" (UID: \"18b8d3b1-b090-4386-b2de-30893f390c15\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln4xf" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.720934 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/18b8d3b1-b090-4386-b2de-30893f390c15-var-lib-openvswitch\") pod \"ovnkube-node-ln4xf\" (UID: \"18b8d3b1-b090-4386-b2de-30893f390c15\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln4xf" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.720913 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/18b8d3b1-b090-4386-b2de-30893f390c15-run-openvswitch\") pod \"ovnkube-node-ln4xf\" (UID: \"18b8d3b1-b090-4386-b2de-30893f390c15\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln4xf" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.721736 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/18b8d3b1-b090-4386-b2de-30893f390c15-ovnkube-script-lib\") pod \"ovnkube-node-ln4xf\" (UID: \"18b8d3b1-b090-4386-b2de-30893f390c15\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln4xf" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.722135 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/18b8d3b1-b090-4386-b2de-30893f390c15-ovnkube-config\") pod \"ovnkube-node-ln4xf\" (UID: \"18b8d3b1-b090-4386-b2de-30893f390c15\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln4xf" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.722236 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/18b8d3b1-b090-4386-b2de-30893f390c15-env-overrides\") pod \"ovnkube-node-ln4xf\" (UID: \"18b8d3b1-b090-4386-b2de-30893f390c15\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln4xf" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.728601 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/18b8d3b1-b090-4386-b2de-30893f390c15-ovn-node-metrics-cert\") pod \"ovnkube-node-ln4xf\" (UID: \"18b8d3b1-b090-4386-b2de-30893f390c15\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln4xf" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.732560 4720 scope.go:117] "RemoveContainer" containerID="2cba8cf1a0fc25e74ef53b5ad2f9b7078a9a2aada0f97b16b1dc4a11e0104571" Oct 13 17:33:32 crc kubenswrapper[4720]: E1013 17:33:32.733040 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2cba8cf1a0fc25e74ef53b5ad2f9b7078a9a2aada0f97b16b1dc4a11e0104571\": container with ID starting with 2cba8cf1a0fc25e74ef53b5ad2f9b7078a9a2aada0f97b16b1dc4a11e0104571 not found: ID does not exist" containerID="2cba8cf1a0fc25e74ef53b5ad2f9b7078a9a2aada0f97b16b1dc4a11e0104571" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.734329 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cba8cf1a0fc25e74ef53b5ad2f9b7078a9a2aada0f97b16b1dc4a11e0104571"} err="failed to get container status 
\"2cba8cf1a0fc25e74ef53b5ad2f9b7078a9a2aada0f97b16b1dc4a11e0104571\": rpc error: code = NotFound desc = could not find container \"2cba8cf1a0fc25e74ef53b5ad2f9b7078a9a2aada0f97b16b1dc4a11e0104571\": container with ID starting with 2cba8cf1a0fc25e74ef53b5ad2f9b7078a9a2aada0f97b16b1dc4a11e0104571 not found: ID does not exist" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.734495 4720 scope.go:117] "RemoveContainer" containerID="c62dd5ae9eb692f394556f2d0b70bf14a7c99fe714ad7084a5d08b324a4435de" Oct 13 17:33:32 crc kubenswrapper[4720]: E1013 17:33:32.735063 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c62dd5ae9eb692f394556f2d0b70bf14a7c99fe714ad7084a5d08b324a4435de\": container with ID starting with c62dd5ae9eb692f394556f2d0b70bf14a7c99fe714ad7084a5d08b324a4435de not found: ID does not exist" containerID="c62dd5ae9eb692f394556f2d0b70bf14a7c99fe714ad7084a5d08b324a4435de" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.735224 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c62dd5ae9eb692f394556f2d0b70bf14a7c99fe714ad7084a5d08b324a4435de"} err="failed to get container status \"c62dd5ae9eb692f394556f2d0b70bf14a7c99fe714ad7084a5d08b324a4435de\": rpc error: code = NotFound desc = could not find container \"c62dd5ae9eb692f394556f2d0b70bf14a7c99fe714ad7084a5d08b324a4435de\": container with ID starting with c62dd5ae9eb692f394556f2d0b70bf14a7c99fe714ad7084a5d08b324a4435de not found: ID does not exist" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.735324 4720 scope.go:117] "RemoveContainer" containerID="34b1fffbebfbd3599bed22fe3fe80129f5b704560c54d09e9ab5e9ea8bba4513" Oct 13 17:33:32 crc kubenswrapper[4720]: E1013 17:33:32.735809 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34b1fffbebfbd3599bed22fe3fe80129f5b704560c54d09e9ab5e9ea8bba4513\": container with ID starting with 34b1fffbebfbd3599bed22fe3fe80129f5b704560c54d09e9ab5e9ea8bba4513 not found: ID does not exist" containerID="34b1fffbebfbd3599bed22fe3fe80129f5b704560c54d09e9ab5e9ea8bba4513" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.735922 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34b1fffbebfbd3599bed22fe3fe80129f5b704560c54d09e9ab5e9ea8bba4513"} err="failed to get container status \"34b1fffbebfbd3599bed22fe3fe80129f5b704560c54d09e9ab5e9ea8bba4513\": rpc error: code = NotFound desc = could not find container \"34b1fffbebfbd3599bed22fe3fe80129f5b704560c54d09e9ab5e9ea8bba4513\": container with ID starting with 34b1fffbebfbd3599bed22fe3fe80129f5b704560c54d09e9ab5e9ea8bba4513 not found: ID does not exist" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.736007 4720 scope.go:117] "RemoveContainer" containerID="fd8a4dfb390a1643edb4eb4bd01970eaffd8dab0f748effac9e7c691ef4a3ef8" Oct 13 17:33:32 crc kubenswrapper[4720]: E1013 17:33:32.737100 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd8a4dfb390a1643edb4eb4bd01970eaffd8dab0f748effac9e7c691ef4a3ef8\": container with ID starting with fd8a4dfb390a1643edb4eb4bd01970eaffd8dab0f748effac9e7c691ef4a3ef8 not found: ID does not exist" containerID="fd8a4dfb390a1643edb4eb4bd01970eaffd8dab0f748effac9e7c691ef4a3ef8" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.737220 4720 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd8a4dfb390a1643edb4eb4bd01970eaffd8dab0f748effac9e7c691ef4a3ef8"} err="failed to get container status \"fd8a4dfb390a1643edb4eb4bd01970eaffd8dab0f748effac9e7c691ef4a3ef8\": rpc error: code = NotFound desc = could not find container \"fd8a4dfb390a1643edb4eb4bd01970eaffd8dab0f748effac9e7c691ef4a3ef8\": container with ID starting with fd8a4dfb390a1643edb4eb4bd01970eaffd8dab0f748effac9e7c691ef4a3ef8 not found: ID does not exist" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.737321 4720 scope.go:117] "RemoveContainer" containerID="013dfade47eccae4339fca0d379d914f63502613f87f1bc621113f7008634475" Oct 13 17:33:32 crc kubenswrapper[4720]: E1013 17:33:32.738509 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"013dfade47eccae4339fca0d379d914f63502613f87f1bc621113f7008634475\": container with ID starting with 013dfade47eccae4339fca0d379d914f63502613f87f1bc621113f7008634475 not found: ID does not exist" containerID="013dfade47eccae4339fca0d379d914f63502613f87f1bc621113f7008634475" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.738576 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"013dfade47eccae4339fca0d379d914f63502613f87f1bc621113f7008634475"} err="failed to get container status \"013dfade47eccae4339fca0d379d914f63502613f87f1bc621113f7008634475\": rpc error: code = NotFound desc = could not find container \"013dfade47eccae4339fca0d379d914f63502613f87f1bc621113f7008634475\": container with ID starting with 013dfade47eccae4339fca0d379d914f63502613f87f1bc621113f7008634475 not found: ID does not exist" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.738615 4720 scope.go:117] "RemoveContainer" containerID="7bbd41d9b0518c8b060b1d01c335b6c9923e7196624fbd49326133faaa2e36ba" Oct 13 17:33:32 crc kubenswrapper[4720]: E1013 17:33:32.738961 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7bbd41d9b0518c8b060b1d01c335b6c9923e7196624fbd49326133faaa2e36ba\": container with ID starting with 7bbd41d9b0518c8b060b1d01c335b6c9923e7196624fbd49326133faaa2e36ba not found: ID does not exist" containerID="7bbd41d9b0518c8b060b1d01c335b6c9923e7196624fbd49326133faaa2e36ba" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.739013 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bbd41d9b0518c8b060b1d01c335b6c9923e7196624fbd49326133faaa2e36ba"} err="failed to get container status \"7bbd41d9b0518c8b060b1d01c335b6c9923e7196624fbd49326133faaa2e36ba\": rpc error: code = NotFound desc = could not find container \"7bbd41d9b0518c8b060b1d01c335b6c9923e7196624fbd49326133faaa2e36ba\": container with ID starting with 7bbd41d9b0518c8b060b1d01c335b6c9923e7196624fbd49326133faaa2e36ba not found: ID does not exist" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.739052 4720 scope.go:117] "RemoveContainer" containerID="5c28fde07efa17d20ddecd1c8a0da09a6fa09e98a90b14718565c00aef9c0d25" Oct 13 17:33:32 crc kubenswrapper[4720]: E1013 17:33:32.739309 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c28fde07efa17d20ddecd1c8a0da09a6fa09e98a90b14718565c00aef9c0d25\": container with ID starting with 5c28fde07efa17d20ddecd1c8a0da09a6fa09e98a90b14718565c00aef9c0d25 not found: ID does not exist" 
containerID="5c28fde07efa17d20ddecd1c8a0da09a6fa09e98a90b14718565c00aef9c0d25" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.739359 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c28fde07efa17d20ddecd1c8a0da09a6fa09e98a90b14718565c00aef9c0d25"} err="failed to get container status \"5c28fde07efa17d20ddecd1c8a0da09a6fa09e98a90b14718565c00aef9c0d25\": rpc error: code = NotFound desc = could not find container \"5c28fde07efa17d20ddecd1c8a0da09a6fa09e98a90b14718565c00aef9c0d25\": container with ID starting with 5c28fde07efa17d20ddecd1c8a0da09a6fa09e98a90b14718565c00aef9c0d25 not found: ID does not exist" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.739391 4720 scope.go:117] "RemoveContainer" containerID="69d511a66a84fb0fa90f426bd4609c2b3ad915571802c6c024e0370d6e1683a3" Oct 13 17:33:32 crc kubenswrapper[4720]: E1013 17:33:32.739801 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69d511a66a84fb0fa90f426bd4609c2b3ad915571802c6c024e0370d6e1683a3\": container with ID starting with 69d511a66a84fb0fa90f426bd4609c2b3ad915571802c6c024e0370d6e1683a3 not found: ID does not exist" containerID="69d511a66a84fb0fa90f426bd4609c2b3ad915571802c6c024e0370d6e1683a3" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.739848 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69d511a66a84fb0fa90f426bd4609c2b3ad915571802c6c024e0370d6e1683a3"} err="failed to get container status \"69d511a66a84fb0fa90f426bd4609c2b3ad915571802c6c024e0370d6e1683a3\": rpc error: code = NotFound desc = could not find container \"69d511a66a84fb0fa90f426bd4609c2b3ad915571802c6c024e0370d6e1683a3\": container with ID starting with 69d511a66a84fb0fa90f426bd4609c2b3ad915571802c6c024e0370d6e1683a3 not found: ID does not exist" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.739886 4720 scope.go:117] "RemoveContainer" containerID="1aefa46ee989416f0e4fd05f6f12031da637a354fadd7f6caef75ddc2c74ddf3" Oct 13 17:33:32 crc kubenswrapper[4720]: E1013 17:33:32.740218 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1aefa46ee989416f0e4fd05f6f12031da637a354fadd7f6caef75ddc2c74ddf3\": container with ID starting with 1aefa46ee989416f0e4fd05f6f12031da637a354fadd7f6caef75ddc2c74ddf3 not found: ID does not exist" containerID="1aefa46ee989416f0e4fd05f6f12031da637a354fadd7f6caef75ddc2c74ddf3" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.740264 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1aefa46ee989416f0e4fd05f6f12031da637a354fadd7f6caef75ddc2c74ddf3"} err="failed to get container status \"1aefa46ee989416f0e4fd05f6f12031da637a354fadd7f6caef75ddc2c74ddf3\": rpc error: code = NotFound desc = could not find container \"1aefa46ee989416f0e4fd05f6f12031da637a354fadd7f6caef75ddc2c74ddf3\": container with ID starting with 1aefa46ee989416f0e4fd05f6f12031da637a354fadd7f6caef75ddc2c74ddf3 not found: ID does not exist" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.740298 4720 scope.go:117] "RemoveContainer" containerID="be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.740402 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ppnf\" (UniqueName: 
\"kubernetes.io/projected/18b8d3b1-b090-4386-b2de-30893f390c15-kube-api-access-6ppnf\") pod \"ovnkube-node-ln4xf\" (UID: \"18b8d3b1-b090-4386-b2de-30893f390c15\") " pod="openshift-ovn-kubernetes/ovnkube-node-ln4xf" Oct 13 17:33:32 crc kubenswrapper[4720]: E1013 17:33:32.740655 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a\": container with ID starting with be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a not found: ID does not exist" containerID="be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.740709 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a"} err="failed to get container status \"be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a\": rpc error: code = NotFound desc = could not find container \"be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a\": container with ID starting with be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a not found: ID does not exist" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.740744 4720 scope.go:117] "RemoveContainer" containerID="2cba8cf1a0fc25e74ef53b5ad2f9b7078a9a2aada0f97b16b1dc4a11e0104571" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.741319 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cba8cf1a0fc25e74ef53b5ad2f9b7078a9a2aada0f97b16b1dc4a11e0104571"} err="failed to get container status \"2cba8cf1a0fc25e74ef53b5ad2f9b7078a9a2aada0f97b16b1dc4a11e0104571\": rpc error: code = NotFound desc = could not find container \"2cba8cf1a0fc25e74ef53b5ad2f9b7078a9a2aada0f97b16b1dc4a11e0104571\": container with ID starting with 2cba8cf1a0fc25e74ef53b5ad2f9b7078a9a2aada0f97b16b1dc4a11e0104571 not found: ID does not exist" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.741367 4720 scope.go:117] "RemoveContainer" containerID="c62dd5ae9eb692f394556f2d0b70bf14a7c99fe714ad7084a5d08b324a4435de" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.741644 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c62dd5ae9eb692f394556f2d0b70bf14a7c99fe714ad7084a5d08b324a4435de"} err="failed to get container status \"c62dd5ae9eb692f394556f2d0b70bf14a7c99fe714ad7084a5d08b324a4435de\": rpc error: code = NotFound desc = could not find container \"c62dd5ae9eb692f394556f2d0b70bf14a7c99fe714ad7084a5d08b324a4435de\": container with ID starting with c62dd5ae9eb692f394556f2d0b70bf14a7c99fe714ad7084a5d08b324a4435de not found: ID does not exist" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.741690 4720 scope.go:117] "RemoveContainer" containerID="34b1fffbebfbd3599bed22fe3fe80129f5b704560c54d09e9ab5e9ea8bba4513" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.741993 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34b1fffbebfbd3599bed22fe3fe80129f5b704560c54d09e9ab5e9ea8bba4513"} err="failed to get container status \"34b1fffbebfbd3599bed22fe3fe80129f5b704560c54d09e9ab5e9ea8bba4513\": rpc error: code = NotFound desc = could not find container \"34b1fffbebfbd3599bed22fe3fe80129f5b704560c54d09e9ab5e9ea8bba4513\": container with ID starting with 
34b1fffbebfbd3599bed22fe3fe80129f5b704560c54d09e9ab5e9ea8bba4513 not found: ID does not exist" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.742099 4720 scope.go:117] "RemoveContainer" containerID="fd8a4dfb390a1643edb4eb4bd01970eaffd8dab0f748effac9e7c691ef4a3ef8" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.742475 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd8a4dfb390a1643edb4eb4bd01970eaffd8dab0f748effac9e7c691ef4a3ef8"} err="failed to get container status \"fd8a4dfb390a1643edb4eb4bd01970eaffd8dab0f748effac9e7c691ef4a3ef8\": rpc error: code = NotFound desc = could not find container \"fd8a4dfb390a1643edb4eb4bd01970eaffd8dab0f748effac9e7c691ef4a3ef8\": container with ID starting with fd8a4dfb390a1643edb4eb4bd01970eaffd8dab0f748effac9e7c691ef4a3ef8 not found: ID does not exist" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.742513 4720 scope.go:117] "RemoveContainer" containerID="013dfade47eccae4339fca0d379d914f63502613f87f1bc621113f7008634475" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.742947 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"013dfade47eccae4339fca0d379d914f63502613f87f1bc621113f7008634475"} err="failed to get container status \"013dfade47eccae4339fca0d379d914f63502613f87f1bc621113f7008634475\": rpc error: code = NotFound desc = could not find container \"013dfade47eccae4339fca0d379d914f63502613f87f1bc621113f7008634475\": container with ID starting with 013dfade47eccae4339fca0d379d914f63502613f87f1bc621113f7008634475 not found: ID does not exist" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.742990 4720 scope.go:117] "RemoveContainer" containerID="7bbd41d9b0518c8b060b1d01c335b6c9923e7196624fbd49326133faaa2e36ba" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.743692 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bbd41d9b0518c8b060b1d01c335b6c9923e7196624fbd49326133faaa2e36ba"} err="failed to get container status \"7bbd41d9b0518c8b060b1d01c335b6c9923e7196624fbd49326133faaa2e36ba\": rpc error: code = NotFound desc = could not find container \"7bbd41d9b0518c8b060b1d01c335b6c9923e7196624fbd49326133faaa2e36ba\": container with ID starting with 7bbd41d9b0518c8b060b1d01c335b6c9923e7196624fbd49326133faaa2e36ba not found: ID does not exist" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.743726 4720 scope.go:117] "RemoveContainer" containerID="5c28fde07efa17d20ddecd1c8a0da09a6fa09e98a90b14718565c00aef9c0d25" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.743973 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c28fde07efa17d20ddecd1c8a0da09a6fa09e98a90b14718565c00aef9c0d25"} err="failed to get container status \"5c28fde07efa17d20ddecd1c8a0da09a6fa09e98a90b14718565c00aef9c0d25\": rpc error: code = NotFound desc = could not find container \"5c28fde07efa17d20ddecd1c8a0da09a6fa09e98a90b14718565c00aef9c0d25\": container with ID starting with 5c28fde07efa17d20ddecd1c8a0da09a6fa09e98a90b14718565c00aef9c0d25 not found: ID does not exist" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.744000 4720 scope.go:117] "RemoveContainer" containerID="69d511a66a84fb0fa90f426bd4609c2b3ad915571802c6c024e0370d6e1683a3" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.744267 4720 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"69d511a66a84fb0fa90f426bd4609c2b3ad915571802c6c024e0370d6e1683a3"} err="failed to get container status \"69d511a66a84fb0fa90f426bd4609c2b3ad915571802c6c024e0370d6e1683a3\": rpc error: code = NotFound desc = could not find container \"69d511a66a84fb0fa90f426bd4609c2b3ad915571802c6c024e0370d6e1683a3\": container with ID starting with 69d511a66a84fb0fa90f426bd4609c2b3ad915571802c6c024e0370d6e1683a3 not found: ID does not exist" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.744343 4720 scope.go:117] "RemoveContainer" containerID="1aefa46ee989416f0e4fd05f6f12031da637a354fadd7f6caef75ddc2c74ddf3" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.744702 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1aefa46ee989416f0e4fd05f6f12031da637a354fadd7f6caef75ddc2c74ddf3"} err="failed to get container status \"1aefa46ee989416f0e4fd05f6f12031da637a354fadd7f6caef75ddc2c74ddf3\": rpc error: code = NotFound desc = could not find container \"1aefa46ee989416f0e4fd05f6f12031da637a354fadd7f6caef75ddc2c74ddf3\": container with ID starting with 1aefa46ee989416f0e4fd05f6f12031da637a354fadd7f6caef75ddc2c74ddf3 not found: ID does not exist" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.744771 4720 scope.go:117] "RemoveContainer" containerID="be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.745131 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a"} err="failed to get container status \"be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a\": rpc error: code = NotFound desc = could not find container \"be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a\": container with ID starting with be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a not found: ID does not exist" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.745165 4720 scope.go:117] "RemoveContainer" containerID="2cba8cf1a0fc25e74ef53b5ad2f9b7078a9a2aada0f97b16b1dc4a11e0104571" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.745609 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cba8cf1a0fc25e74ef53b5ad2f9b7078a9a2aada0f97b16b1dc4a11e0104571"} err="failed to get container status \"2cba8cf1a0fc25e74ef53b5ad2f9b7078a9a2aada0f97b16b1dc4a11e0104571\": rpc error: code = NotFound desc = could not find container \"2cba8cf1a0fc25e74ef53b5ad2f9b7078a9a2aada0f97b16b1dc4a11e0104571\": container with ID starting with 2cba8cf1a0fc25e74ef53b5ad2f9b7078a9a2aada0f97b16b1dc4a11e0104571 not found: ID does not exist" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.745643 4720 scope.go:117] "RemoveContainer" containerID="c62dd5ae9eb692f394556f2d0b70bf14a7c99fe714ad7084a5d08b324a4435de" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.746072 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c62dd5ae9eb692f394556f2d0b70bf14a7c99fe714ad7084a5d08b324a4435de"} err="failed to get container status \"c62dd5ae9eb692f394556f2d0b70bf14a7c99fe714ad7084a5d08b324a4435de\": rpc error: code = NotFound desc = could not find container \"c62dd5ae9eb692f394556f2d0b70bf14a7c99fe714ad7084a5d08b324a4435de\": container with ID starting with c62dd5ae9eb692f394556f2d0b70bf14a7c99fe714ad7084a5d08b324a4435de not found: ID does not exist" Oct 
13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.746110 4720 scope.go:117] "RemoveContainer" containerID="34b1fffbebfbd3599bed22fe3fe80129f5b704560c54d09e9ab5e9ea8bba4513" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.746555 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34b1fffbebfbd3599bed22fe3fe80129f5b704560c54d09e9ab5e9ea8bba4513"} err="failed to get container status \"34b1fffbebfbd3599bed22fe3fe80129f5b704560c54d09e9ab5e9ea8bba4513\": rpc error: code = NotFound desc = could not find container \"34b1fffbebfbd3599bed22fe3fe80129f5b704560c54d09e9ab5e9ea8bba4513\": container with ID starting with 34b1fffbebfbd3599bed22fe3fe80129f5b704560c54d09e9ab5e9ea8bba4513 not found: ID does not exist" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.746616 4720 scope.go:117] "RemoveContainer" containerID="fd8a4dfb390a1643edb4eb4bd01970eaffd8dab0f748effac9e7c691ef4a3ef8" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.747039 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd8a4dfb390a1643edb4eb4bd01970eaffd8dab0f748effac9e7c691ef4a3ef8"} err="failed to get container status \"fd8a4dfb390a1643edb4eb4bd01970eaffd8dab0f748effac9e7c691ef4a3ef8\": rpc error: code = NotFound desc = could not find container \"fd8a4dfb390a1643edb4eb4bd01970eaffd8dab0f748effac9e7c691ef4a3ef8\": container with ID starting with fd8a4dfb390a1643edb4eb4bd01970eaffd8dab0f748effac9e7c691ef4a3ef8 not found: ID does not exist" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.747089 4720 scope.go:117] "RemoveContainer" containerID="013dfade47eccae4339fca0d379d914f63502613f87f1bc621113f7008634475" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.747489 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"013dfade47eccae4339fca0d379d914f63502613f87f1bc621113f7008634475"} err="failed to get container status \"013dfade47eccae4339fca0d379d914f63502613f87f1bc621113f7008634475\": rpc error: code = NotFound desc = could not find container \"013dfade47eccae4339fca0d379d914f63502613f87f1bc621113f7008634475\": container with ID starting with 013dfade47eccae4339fca0d379d914f63502613f87f1bc621113f7008634475 not found: ID does not exist" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.747797 4720 scope.go:117] "RemoveContainer" containerID="7bbd41d9b0518c8b060b1d01c335b6c9923e7196624fbd49326133faaa2e36ba" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.748153 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bbd41d9b0518c8b060b1d01c335b6c9923e7196624fbd49326133faaa2e36ba"} err="failed to get container status \"7bbd41d9b0518c8b060b1d01c335b6c9923e7196624fbd49326133faaa2e36ba\": rpc error: code = NotFound desc = could not find container \"7bbd41d9b0518c8b060b1d01c335b6c9923e7196624fbd49326133faaa2e36ba\": container with ID starting with 7bbd41d9b0518c8b060b1d01c335b6c9923e7196624fbd49326133faaa2e36ba not found: ID does not exist" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.748228 4720 scope.go:117] "RemoveContainer" containerID="5c28fde07efa17d20ddecd1c8a0da09a6fa09e98a90b14718565c00aef9c0d25" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.748494 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c28fde07efa17d20ddecd1c8a0da09a6fa09e98a90b14718565c00aef9c0d25"} err="failed to get container status 
\"5c28fde07efa17d20ddecd1c8a0da09a6fa09e98a90b14718565c00aef9c0d25\": rpc error: code = NotFound desc = could not find container \"5c28fde07efa17d20ddecd1c8a0da09a6fa09e98a90b14718565c00aef9c0d25\": container with ID starting with 5c28fde07efa17d20ddecd1c8a0da09a6fa09e98a90b14718565c00aef9c0d25 not found: ID does not exist" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.748531 4720 scope.go:117] "RemoveContainer" containerID="69d511a66a84fb0fa90f426bd4609c2b3ad915571802c6c024e0370d6e1683a3" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.748748 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69d511a66a84fb0fa90f426bd4609c2b3ad915571802c6c024e0370d6e1683a3"} err="failed to get container status \"69d511a66a84fb0fa90f426bd4609c2b3ad915571802c6c024e0370d6e1683a3\": rpc error: code = NotFound desc = could not find container \"69d511a66a84fb0fa90f426bd4609c2b3ad915571802c6c024e0370d6e1683a3\": container with ID starting with 69d511a66a84fb0fa90f426bd4609c2b3ad915571802c6c024e0370d6e1683a3 not found: ID does not exist" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.748782 4720 scope.go:117] "RemoveContainer" containerID="1aefa46ee989416f0e4fd05f6f12031da637a354fadd7f6caef75ddc2c74ddf3" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.748995 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1aefa46ee989416f0e4fd05f6f12031da637a354fadd7f6caef75ddc2c74ddf3"} err="failed to get container status \"1aefa46ee989416f0e4fd05f6f12031da637a354fadd7f6caef75ddc2c74ddf3\": rpc error: code = NotFound desc = could not find container \"1aefa46ee989416f0e4fd05f6f12031da637a354fadd7f6caef75ddc2c74ddf3\": container with ID starting with 1aefa46ee989416f0e4fd05f6f12031da637a354fadd7f6caef75ddc2c74ddf3 not found: ID does not exist" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.749027 4720 scope.go:117] "RemoveContainer" containerID="be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.749293 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a"} err="failed to get container status \"be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a\": rpc error: code = NotFound desc = could not find container \"be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a\": container with ID starting with be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a not found: ID does not exist" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.749329 4720 scope.go:117] "RemoveContainer" containerID="2cba8cf1a0fc25e74ef53b5ad2f9b7078a9a2aada0f97b16b1dc4a11e0104571" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.749558 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cba8cf1a0fc25e74ef53b5ad2f9b7078a9a2aada0f97b16b1dc4a11e0104571"} err="failed to get container status \"2cba8cf1a0fc25e74ef53b5ad2f9b7078a9a2aada0f97b16b1dc4a11e0104571\": rpc error: code = NotFound desc = could not find container \"2cba8cf1a0fc25e74ef53b5ad2f9b7078a9a2aada0f97b16b1dc4a11e0104571\": container with ID starting with 2cba8cf1a0fc25e74ef53b5ad2f9b7078a9a2aada0f97b16b1dc4a11e0104571 not found: ID does not exist" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.749589 4720 scope.go:117] "RemoveContainer" 
containerID="c62dd5ae9eb692f394556f2d0b70bf14a7c99fe714ad7084a5d08b324a4435de" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.749973 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c62dd5ae9eb692f394556f2d0b70bf14a7c99fe714ad7084a5d08b324a4435de"} err="failed to get container status \"c62dd5ae9eb692f394556f2d0b70bf14a7c99fe714ad7084a5d08b324a4435de\": rpc error: code = NotFound desc = could not find container \"c62dd5ae9eb692f394556f2d0b70bf14a7c99fe714ad7084a5d08b324a4435de\": container with ID starting with c62dd5ae9eb692f394556f2d0b70bf14a7c99fe714ad7084a5d08b324a4435de not found: ID does not exist" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.750005 4720 scope.go:117] "RemoveContainer" containerID="34b1fffbebfbd3599bed22fe3fe80129f5b704560c54d09e9ab5e9ea8bba4513" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.750252 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34b1fffbebfbd3599bed22fe3fe80129f5b704560c54d09e9ab5e9ea8bba4513"} err="failed to get container status \"34b1fffbebfbd3599bed22fe3fe80129f5b704560c54d09e9ab5e9ea8bba4513\": rpc error: code = NotFound desc = could not find container \"34b1fffbebfbd3599bed22fe3fe80129f5b704560c54d09e9ab5e9ea8bba4513\": container with ID starting with 34b1fffbebfbd3599bed22fe3fe80129f5b704560c54d09e9ab5e9ea8bba4513 not found: ID does not exist" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.750283 4720 scope.go:117] "RemoveContainer" containerID="fd8a4dfb390a1643edb4eb4bd01970eaffd8dab0f748effac9e7c691ef4a3ef8" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.750557 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd8a4dfb390a1643edb4eb4bd01970eaffd8dab0f748effac9e7c691ef4a3ef8"} err="failed to get container status \"fd8a4dfb390a1643edb4eb4bd01970eaffd8dab0f748effac9e7c691ef4a3ef8\": rpc error: code = NotFound desc = could not find container \"fd8a4dfb390a1643edb4eb4bd01970eaffd8dab0f748effac9e7c691ef4a3ef8\": container with ID starting with fd8a4dfb390a1643edb4eb4bd01970eaffd8dab0f748effac9e7c691ef4a3ef8 not found: ID does not exist" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.750592 4720 scope.go:117] "RemoveContainer" containerID="013dfade47eccae4339fca0d379d914f63502613f87f1bc621113f7008634475" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.750927 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ln4xf" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.751069 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"013dfade47eccae4339fca0d379d914f63502613f87f1bc621113f7008634475"} err="failed to get container status \"013dfade47eccae4339fca0d379d914f63502613f87f1bc621113f7008634475\": rpc error: code = NotFound desc = could not find container \"013dfade47eccae4339fca0d379d914f63502613f87f1bc621113f7008634475\": container with ID starting with 013dfade47eccae4339fca0d379d914f63502613f87f1bc621113f7008634475 not found: ID does not exist" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.751098 4720 scope.go:117] "RemoveContainer" containerID="7bbd41d9b0518c8b060b1d01c335b6c9923e7196624fbd49326133faaa2e36ba" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.751411 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bbd41d9b0518c8b060b1d01c335b6c9923e7196624fbd49326133faaa2e36ba"} err="failed to get container status \"7bbd41d9b0518c8b060b1d01c335b6c9923e7196624fbd49326133faaa2e36ba\": rpc error: code = NotFound desc = could not find container \"7bbd41d9b0518c8b060b1d01c335b6c9923e7196624fbd49326133faaa2e36ba\": container with ID starting with 7bbd41d9b0518c8b060b1d01c335b6c9923e7196624fbd49326133faaa2e36ba not found: ID does not exist" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.751457 4720 scope.go:117] "RemoveContainer" containerID="5c28fde07efa17d20ddecd1c8a0da09a6fa09e98a90b14718565c00aef9c0d25" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.751762 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c28fde07efa17d20ddecd1c8a0da09a6fa09e98a90b14718565c00aef9c0d25"} err="failed to get container status \"5c28fde07efa17d20ddecd1c8a0da09a6fa09e98a90b14718565c00aef9c0d25\": rpc error: code = NotFound desc = could not find container \"5c28fde07efa17d20ddecd1c8a0da09a6fa09e98a90b14718565c00aef9c0d25\": container with ID starting with 5c28fde07efa17d20ddecd1c8a0da09a6fa09e98a90b14718565c00aef9c0d25 not found: ID does not exist" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.751790 4720 scope.go:117] "RemoveContainer" containerID="69d511a66a84fb0fa90f426bd4609c2b3ad915571802c6c024e0370d6e1683a3" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.751997 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69d511a66a84fb0fa90f426bd4609c2b3ad915571802c6c024e0370d6e1683a3"} err="failed to get container status \"69d511a66a84fb0fa90f426bd4609c2b3ad915571802c6c024e0370d6e1683a3\": rpc error: code = NotFound desc = could not find container \"69d511a66a84fb0fa90f426bd4609c2b3ad915571802c6c024e0370d6e1683a3\": container with ID starting with 69d511a66a84fb0fa90f426bd4609c2b3ad915571802c6c024e0370d6e1683a3 not found: ID does not exist" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.752022 4720 scope.go:117] "RemoveContainer" containerID="1aefa46ee989416f0e4fd05f6f12031da637a354fadd7f6caef75ddc2c74ddf3" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.752294 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1aefa46ee989416f0e4fd05f6f12031da637a354fadd7f6caef75ddc2c74ddf3"} err="failed to get container status \"1aefa46ee989416f0e4fd05f6f12031da637a354fadd7f6caef75ddc2c74ddf3\": rpc error: code = NotFound desc = could not find 
container \"1aefa46ee989416f0e4fd05f6f12031da637a354fadd7f6caef75ddc2c74ddf3\": container with ID starting with 1aefa46ee989416f0e4fd05f6f12031da637a354fadd7f6caef75ddc2c74ddf3 not found: ID does not exist" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.752320 4720 scope.go:117] "RemoveContainer" containerID="be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.752773 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a"} err="failed to get container status \"be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a\": rpc error: code = NotFound desc = could not find container \"be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a\": container with ID starting with be10f2a5ac2f1a287605b186b5154bd49437dbacace4b9f64767a134d0501a4a not found: ID does not exist" Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.873301 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-pn6lz"] Oct 13 17:33:32 crc kubenswrapper[4720]: I1013 17:33:32.879997 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-pn6lz"] Oct 13 17:33:33 crc kubenswrapper[4720]: I1013 17:33:33.179669 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8064812e-b6aa-4f56-81c9-16154c00abad" path="/var/lib/kubelet/pods/8064812e-b6aa-4f56-81c9-16154c00abad/volumes" Oct 13 17:33:33 crc kubenswrapper[4720]: I1013 17:33:33.534924 4720 generic.go:334] "Generic (PLEG): container finished" podID="18b8d3b1-b090-4386-b2de-30893f390c15" containerID="e4bfd327f067f37ae7ebe992d5635d02bccc29a6576865b0f6a3c822b0900661" exitCode=0 Oct 13 17:33:33 crc kubenswrapper[4720]: I1013 17:33:33.534982 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ln4xf" event={"ID":"18b8d3b1-b090-4386-b2de-30893f390c15","Type":"ContainerDied","Data":"e4bfd327f067f37ae7ebe992d5635d02bccc29a6576865b0f6a3c822b0900661"} Oct 13 17:33:33 crc kubenswrapper[4720]: I1013 17:33:33.535056 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ln4xf" event={"ID":"18b8d3b1-b090-4386-b2de-30893f390c15","Type":"ContainerStarted","Data":"345d1843ef92e78dffbe93a514f5ca822eddbcface0a8ae38ceabf1205d4cde2"} Oct 13 17:33:33 crc kubenswrapper[4720]: I1013 17:33:33.538783 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lxmjt_7b45ec2d-5bea-4007-a49f-224a866f93eb/kube-multus/2.log" Oct 13 17:33:34 crc kubenswrapper[4720]: I1013 17:33:34.548702 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ln4xf" event={"ID":"18b8d3b1-b090-4386-b2de-30893f390c15","Type":"ContainerStarted","Data":"370b3bffc4d2b681b0520fec9a502e4efaa17e11be0ce7ae75344bf7a35d6e21"} Oct 13 17:33:34 crc kubenswrapper[4720]: I1013 17:33:34.549029 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ln4xf" event={"ID":"18b8d3b1-b090-4386-b2de-30893f390c15","Type":"ContainerStarted","Data":"6df69c227c7936fe5ff2a9768e36def3143666b8718d47969dc6e62e3f723102"} Oct 13 17:33:34 crc kubenswrapper[4720]: I1013 17:33:34.549041 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ln4xf" 
event={"ID":"18b8d3b1-b090-4386-b2de-30893f390c15","Type":"ContainerStarted","Data":"c993af27aa3c0f48274d2c2c87ff647f39ac4f317ccbaf04ad79aa8286e18d55"} Oct 13 17:33:34 crc kubenswrapper[4720]: I1013 17:33:34.549048 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ln4xf" event={"ID":"18b8d3b1-b090-4386-b2de-30893f390c15","Type":"ContainerStarted","Data":"cace0b581bf33a54cd0a5b2d0b39f37728c757282960f7229b24f9cc8407e1ae"} Oct 13 17:33:34 crc kubenswrapper[4720]: I1013 17:33:34.549057 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ln4xf" event={"ID":"18b8d3b1-b090-4386-b2de-30893f390c15","Type":"ContainerStarted","Data":"3804d0a8fbcb08bb92bdad70ada00c5f49c15e7261ef6ed540a4e9a3eb6b8653"} Oct 13 17:33:34 crc kubenswrapper[4720]: I1013 17:33:34.549065 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ln4xf" event={"ID":"18b8d3b1-b090-4386-b2de-30893f390c15","Type":"ContainerStarted","Data":"1c3a551e6880ed95cae34fe186de42e6c65186e599ad012e76b893ccd2fb8cca"} Oct 13 17:33:37 crc kubenswrapper[4720]: I1013 17:33:37.577108 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ln4xf" event={"ID":"18b8d3b1-b090-4386-b2de-30893f390c15","Type":"ContainerStarted","Data":"6c54b8553102792afd29911d76b23af56d4a1bbeb84f26bc8899aad0df3fa1af"} Oct 13 17:33:39 crc kubenswrapper[4720]: I1013 17:33:39.591860 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ln4xf" event={"ID":"18b8d3b1-b090-4386-b2de-30893f390c15","Type":"ContainerStarted","Data":"b0dfb1f48806f34ff0249d5f8c4278aab7cff0e1ffad65af94b9468c478f747b"} Oct 13 17:33:39 crc kubenswrapper[4720]: I1013 17:33:39.592359 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ln4xf" Oct 13 17:33:39 crc kubenswrapper[4720]: I1013 17:33:39.592398 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ln4xf" Oct 13 17:33:39 crc kubenswrapper[4720]: I1013 17:33:39.592412 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ln4xf" Oct 13 17:33:39 crc kubenswrapper[4720]: I1013 17:33:39.660266 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-ln4xf" podStartSLOduration=7.660246843 podStartE2EDuration="7.660246843s" podCreationTimestamp="2025-10-13 17:33:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:33:39.655342648 +0000 UTC m=+565.112592830" watchObservedRunningTime="2025-10-13 17:33:39.660246843 +0000 UTC m=+565.117496975" Oct 13 17:33:39 crc kubenswrapper[4720]: I1013 17:33:39.661613 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ln4xf" Oct 13 17:33:39 crc kubenswrapper[4720]: I1013 17:33:39.664986 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ln4xf" Oct 13 17:33:45 crc kubenswrapper[4720]: I1013 17:33:45.213015 4720 patch_prober.go:28] interesting pod/machine-config-daemon-htwnl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" start-of-body= Oct 13 17:33:45 crc kubenswrapper[4720]: I1013 17:33:45.213728 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 17:33:45 crc kubenswrapper[4720]: I1013 17:33:45.213784 4720 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" Oct 13 17:33:45 crc kubenswrapper[4720]: I1013 17:33:45.214637 4720 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0d9e0d78254b1372630a7f56a5e019b7d8881a622a4a4292a4394c0f2b9be45a"} pod="openshift-machine-config-operator/machine-config-daemon-htwnl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 13 17:33:45 crc kubenswrapper[4720]: I1013 17:33:45.214729 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" containerName="machine-config-daemon" containerID="cri-o://0d9e0d78254b1372630a7f56a5e019b7d8881a622a4a4292a4394c0f2b9be45a" gracePeriod=600 Oct 13 17:33:45 crc kubenswrapper[4720]: I1013 17:33:45.635700 4720 generic.go:334] "Generic (PLEG): container finished" podID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" containerID="0d9e0d78254b1372630a7f56a5e019b7d8881a622a4a4292a4394c0f2b9be45a" exitCode=0 Oct 13 17:33:45 crc kubenswrapper[4720]: I1013 17:33:45.635766 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" event={"ID":"ce442c80-fcde-4b79-b6f9-f8f25771dfd4","Type":"ContainerDied","Data":"0d9e0d78254b1372630a7f56a5e019b7d8881a622a4a4292a4394c0f2b9be45a"} Oct 13 17:33:45 crc kubenswrapper[4720]: I1013 17:33:45.636121 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" event={"ID":"ce442c80-fcde-4b79-b6f9-f8f25771dfd4","Type":"ContainerStarted","Data":"08f406006acd7f2a5ccd32367b83e6ce328ee80787fc6b3f0206a4c41af2f48b"} Oct 13 17:33:45 crc kubenswrapper[4720]: I1013 17:33:45.636156 4720 scope.go:117] "RemoveContainer" containerID="3fec166dfba0a192adca998429f2650ea80001802f6d04f7ec8b13f450a085ff" Oct 13 17:33:47 crc kubenswrapper[4720]: I1013 17:33:47.169332 4720 scope.go:117] "RemoveContainer" containerID="8f158c84b32e0700f7fbee0860433b8a5b48b6d18ab5d0d8013224b5fa78ff3a" Oct 13 17:33:47 crc kubenswrapper[4720]: E1013 17:33:47.170183 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-lxmjt_openshift-multus(7b45ec2d-5bea-4007-a49f-224a866f93eb)\"" pod="openshift-multus/multus-lxmjt" podUID="7b45ec2d-5bea-4007-a49f-224a866f93eb" Oct 13 17:34:00 crc kubenswrapper[4720]: I1013 17:34:00.168451 4720 scope.go:117] "RemoveContainer" containerID="8f158c84b32e0700f7fbee0860433b8a5b48b6d18ab5d0d8013224b5fa78ff3a" Oct 13 17:34:00 crc kubenswrapper[4720]: I1013 17:34:00.745280 4720 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-lxmjt_7b45ec2d-5bea-4007-a49f-224a866f93eb/kube-multus/2.log" Oct 13 17:34:00 crc kubenswrapper[4720]: I1013 17:34:00.745855 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lxmjt" event={"ID":"7b45ec2d-5bea-4007-a49f-224a866f93eb","Type":"ContainerStarted","Data":"5b8a122faac141875748ebd97c891de2cd229acf54c1772c4c0ff7ee9aba1ada"} Oct 13 17:34:02 crc kubenswrapper[4720]: I1013 17:34:02.783698 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ln4xf" Oct 13 17:34:11 crc kubenswrapper[4720]: I1013 17:34:11.578242 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cfj2bw"] Oct 13 17:34:11 crc kubenswrapper[4720]: I1013 17:34:11.579722 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cfj2bw" Oct 13 17:34:11 crc kubenswrapper[4720]: I1013 17:34:11.589678 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cfj2bw"] Oct 13 17:34:11 crc kubenswrapper[4720]: I1013 17:34:11.597087 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Oct 13 17:34:11 crc kubenswrapper[4720]: I1013 17:34:11.732717 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sznnj\" (UniqueName: \"kubernetes.io/projected/5042638b-5850-4492-a98d-62479bd6624b-kube-api-access-sznnj\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cfj2bw\" (UID: \"5042638b-5850-4492-a98d-62479bd6624b\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cfj2bw" Oct 13 17:34:11 crc kubenswrapper[4720]: I1013 17:34:11.732789 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5042638b-5850-4492-a98d-62479bd6624b-util\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cfj2bw\" (UID: \"5042638b-5850-4492-a98d-62479bd6624b\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cfj2bw" Oct 13 17:34:11 crc kubenswrapper[4720]: I1013 17:34:11.732842 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5042638b-5850-4492-a98d-62479bd6624b-bundle\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cfj2bw\" (UID: \"5042638b-5850-4492-a98d-62479bd6624b\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cfj2bw" Oct 13 17:34:11 crc kubenswrapper[4720]: I1013 17:34:11.834879 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5042638b-5850-4492-a98d-62479bd6624b-bundle\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cfj2bw\" (UID: \"5042638b-5850-4492-a98d-62479bd6624b\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cfj2bw" Oct 13 17:34:11 crc kubenswrapper[4720]: I1013 17:34:11.835082 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sznnj\" (UniqueName: 
\"kubernetes.io/projected/5042638b-5850-4492-a98d-62479bd6624b-kube-api-access-sznnj\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cfj2bw\" (UID: \"5042638b-5850-4492-a98d-62479bd6624b\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cfj2bw" Oct 13 17:34:11 crc kubenswrapper[4720]: I1013 17:34:11.835179 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5042638b-5850-4492-a98d-62479bd6624b-util\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cfj2bw\" (UID: \"5042638b-5850-4492-a98d-62479bd6624b\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cfj2bw" Oct 13 17:34:11 crc kubenswrapper[4720]: I1013 17:34:11.835418 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5042638b-5850-4492-a98d-62479bd6624b-bundle\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cfj2bw\" (UID: \"5042638b-5850-4492-a98d-62479bd6624b\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cfj2bw" Oct 13 17:34:11 crc kubenswrapper[4720]: I1013 17:34:11.835871 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5042638b-5850-4492-a98d-62479bd6624b-util\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cfj2bw\" (UID: \"5042638b-5850-4492-a98d-62479bd6624b\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cfj2bw" Oct 13 17:34:11 crc kubenswrapper[4720]: I1013 17:34:11.857033 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sznnj\" (UniqueName: \"kubernetes.io/projected/5042638b-5850-4492-a98d-62479bd6624b-kube-api-access-sznnj\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cfj2bw\" (UID: \"5042638b-5850-4492-a98d-62479bd6624b\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cfj2bw" Oct 13 17:34:11 crc kubenswrapper[4720]: I1013 17:34:11.898983 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cfj2bw" Oct 13 17:34:12 crc kubenswrapper[4720]: I1013 17:34:12.203125 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cfj2bw"] Oct 13 17:34:12 crc kubenswrapper[4720]: W1013 17:34:12.210856 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5042638b_5850_4492_a98d_62479bd6624b.slice/crio-04b99fb6a61810a9eefd361b44edb5f8b0d9ab82a5feb8766b84a82d013ae326 WatchSource:0}: Error finding container 04b99fb6a61810a9eefd361b44edb5f8b0d9ab82a5feb8766b84a82d013ae326: Status 404 returned error can't find the container with id 04b99fb6a61810a9eefd361b44edb5f8b0d9ab82a5feb8766b84a82d013ae326 Oct 13 17:34:12 crc kubenswrapper[4720]: I1013 17:34:12.826235 4720 generic.go:334] "Generic (PLEG): container finished" podID="5042638b-5850-4492-a98d-62479bd6624b" containerID="8b42f55c75561153ab5c1cce578c95e5978b3869cb38cc241f8a1308ae113bf6" exitCode=0 Oct 13 17:34:12 crc kubenswrapper[4720]: I1013 17:34:12.826340 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cfj2bw" event={"ID":"5042638b-5850-4492-a98d-62479bd6624b","Type":"ContainerDied","Data":"8b42f55c75561153ab5c1cce578c95e5978b3869cb38cc241f8a1308ae113bf6"} Oct 13 17:34:12 crc kubenswrapper[4720]: I1013 17:34:12.826705 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cfj2bw" event={"ID":"5042638b-5850-4492-a98d-62479bd6624b","Type":"ContainerStarted","Data":"04b99fb6a61810a9eefd361b44edb5f8b0d9ab82a5feb8766b84a82d013ae326"} Oct 13 17:34:14 crc kubenswrapper[4720]: I1013 17:34:14.840870 4720 generic.go:334] "Generic (PLEG): container finished" podID="5042638b-5850-4492-a98d-62479bd6624b" containerID="977aaee78f30c2b3dcdceb526fbde593743c4c94f4724072f500052b5b1ff646" exitCode=0 Oct 13 17:34:14 crc kubenswrapper[4720]: I1013 17:34:14.840982 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cfj2bw" event={"ID":"5042638b-5850-4492-a98d-62479bd6624b","Type":"ContainerDied","Data":"977aaee78f30c2b3dcdceb526fbde593743c4c94f4724072f500052b5b1ff646"} Oct 13 17:34:15 crc kubenswrapper[4720]: I1013 17:34:15.852012 4720 generic.go:334] "Generic (PLEG): container finished" podID="5042638b-5850-4492-a98d-62479bd6624b" containerID="ff7bdd638179f9131d4267a1bdc1ff75e8edd2e0647891d56d82d07644b227cf" exitCode=0 Oct 13 17:34:15 crc kubenswrapper[4720]: I1013 17:34:15.852143 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cfj2bw" event={"ID":"5042638b-5850-4492-a98d-62479bd6624b","Type":"ContainerDied","Data":"ff7bdd638179f9131d4267a1bdc1ff75e8edd2e0647891d56d82d07644b227cf"} Oct 13 17:34:17 crc kubenswrapper[4720]: I1013 17:34:17.221134 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cfj2bw" Oct 13 17:34:17 crc kubenswrapper[4720]: I1013 17:34:17.313811 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5042638b-5850-4492-a98d-62479bd6624b-bundle\") pod \"5042638b-5850-4492-a98d-62479bd6624b\" (UID: \"5042638b-5850-4492-a98d-62479bd6624b\") " Oct 13 17:34:17 crc kubenswrapper[4720]: I1013 17:34:17.313922 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5042638b-5850-4492-a98d-62479bd6624b-util\") pod \"5042638b-5850-4492-a98d-62479bd6624b\" (UID: \"5042638b-5850-4492-a98d-62479bd6624b\") " Oct 13 17:34:17 crc kubenswrapper[4720]: I1013 17:34:17.313977 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sznnj\" (UniqueName: \"kubernetes.io/projected/5042638b-5850-4492-a98d-62479bd6624b-kube-api-access-sznnj\") pod \"5042638b-5850-4492-a98d-62479bd6624b\" (UID: \"5042638b-5850-4492-a98d-62479bd6624b\") " Oct 13 17:34:17 crc kubenswrapper[4720]: I1013 17:34:17.316253 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5042638b-5850-4492-a98d-62479bd6624b-bundle" (OuterVolumeSpecName: "bundle") pod "5042638b-5850-4492-a98d-62479bd6624b" (UID: "5042638b-5850-4492-a98d-62479bd6624b"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 17:34:17 crc kubenswrapper[4720]: I1013 17:34:17.324493 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5042638b-5850-4492-a98d-62479bd6624b-kube-api-access-sznnj" (OuterVolumeSpecName: "kube-api-access-sznnj") pod "5042638b-5850-4492-a98d-62479bd6624b" (UID: "5042638b-5850-4492-a98d-62479bd6624b"). InnerVolumeSpecName "kube-api-access-sznnj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:34:17 crc kubenswrapper[4720]: I1013 17:34:17.332129 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5042638b-5850-4492-a98d-62479bd6624b-util" (OuterVolumeSpecName: "util") pod "5042638b-5850-4492-a98d-62479bd6624b" (UID: "5042638b-5850-4492-a98d-62479bd6624b"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 17:34:17 crc kubenswrapper[4720]: I1013 17:34:17.415778 4720 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5042638b-5850-4492-a98d-62479bd6624b-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 17:34:17 crc kubenswrapper[4720]: I1013 17:34:17.415817 4720 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5042638b-5850-4492-a98d-62479bd6624b-util\") on node \"crc\" DevicePath \"\"" Oct 13 17:34:17 crc kubenswrapper[4720]: I1013 17:34:17.415837 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sznnj\" (UniqueName: \"kubernetes.io/projected/5042638b-5850-4492-a98d-62479bd6624b-kube-api-access-sznnj\") on node \"crc\" DevicePath \"\"" Oct 13 17:34:17 crc kubenswrapper[4720]: I1013 17:34:17.869152 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cfj2bw" event={"ID":"5042638b-5850-4492-a98d-62479bd6624b","Type":"ContainerDied","Data":"04b99fb6a61810a9eefd361b44edb5f8b0d9ab82a5feb8766b84a82d013ae326"} Oct 13 17:34:17 crc kubenswrapper[4720]: I1013 17:34:17.869247 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04b99fb6a61810a9eefd361b44edb5f8b0d9ab82a5feb8766b84a82d013ae326" Oct 13 17:34:17 crc kubenswrapper[4720]: I1013 17:34:17.869661 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cfj2bw" Oct 13 17:34:20 crc kubenswrapper[4720]: I1013 17:34:20.260044 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-858ddd8f98-jprhs"] Oct 13 17:34:20 crc kubenswrapper[4720]: E1013 17:34:20.260331 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5042638b-5850-4492-a98d-62479bd6624b" containerName="pull" Oct 13 17:34:20 crc kubenswrapper[4720]: I1013 17:34:20.260345 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="5042638b-5850-4492-a98d-62479bd6624b" containerName="pull" Oct 13 17:34:20 crc kubenswrapper[4720]: E1013 17:34:20.260355 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5042638b-5850-4492-a98d-62479bd6624b" containerName="extract" Oct 13 17:34:20 crc kubenswrapper[4720]: I1013 17:34:20.260363 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="5042638b-5850-4492-a98d-62479bd6624b" containerName="extract" Oct 13 17:34:20 crc kubenswrapper[4720]: E1013 17:34:20.260376 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5042638b-5850-4492-a98d-62479bd6624b" containerName="util" Oct 13 17:34:20 crc kubenswrapper[4720]: I1013 17:34:20.260385 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="5042638b-5850-4492-a98d-62479bd6624b" containerName="util" Oct 13 17:34:20 crc kubenswrapper[4720]: I1013 17:34:20.260517 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="5042638b-5850-4492-a98d-62479bd6624b" containerName="extract" Oct 13 17:34:20 crc kubenswrapper[4720]: I1013 17:34:20.260948 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-858ddd8f98-jprhs" Oct 13 17:34:20 crc kubenswrapper[4720]: I1013 17:34:20.263428 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Oct 13 17:34:20 crc kubenswrapper[4720]: I1013 17:34:20.263882 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Oct 13 17:34:20 crc kubenswrapper[4720]: I1013 17:34:20.263955 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-dnndz" Oct 13 17:34:20 crc kubenswrapper[4720]: I1013 17:34:20.272690 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-858ddd8f98-jprhs"] Oct 13 17:34:20 crc kubenswrapper[4720]: I1013 17:34:20.362914 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7trs\" (UniqueName: \"kubernetes.io/projected/ffdb8c39-acdf-40d9-9c23-bb881eb0b755-kube-api-access-l7trs\") pod \"nmstate-operator-858ddd8f98-jprhs\" (UID: \"ffdb8c39-acdf-40d9-9c23-bb881eb0b755\") " pod="openshift-nmstate/nmstate-operator-858ddd8f98-jprhs" Oct 13 17:34:20 crc kubenswrapper[4720]: I1013 17:34:20.464092 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7trs\" (UniqueName: \"kubernetes.io/projected/ffdb8c39-acdf-40d9-9c23-bb881eb0b755-kube-api-access-l7trs\") pod \"nmstate-operator-858ddd8f98-jprhs\" (UID: \"ffdb8c39-acdf-40d9-9c23-bb881eb0b755\") " pod="openshift-nmstate/nmstate-operator-858ddd8f98-jprhs" Oct 13 17:34:20 crc kubenswrapper[4720]: I1013 17:34:20.496832 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7trs\" (UniqueName: \"kubernetes.io/projected/ffdb8c39-acdf-40d9-9c23-bb881eb0b755-kube-api-access-l7trs\") pod \"nmstate-operator-858ddd8f98-jprhs\" (UID: \"ffdb8c39-acdf-40d9-9c23-bb881eb0b755\") " pod="openshift-nmstate/nmstate-operator-858ddd8f98-jprhs" Oct 13 17:34:20 crc kubenswrapper[4720]: I1013 17:34:20.574634 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-858ddd8f98-jprhs" Oct 13 17:34:20 crc kubenswrapper[4720]: I1013 17:34:20.797051 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-858ddd8f98-jprhs"] Oct 13 17:34:20 crc kubenswrapper[4720]: I1013 17:34:20.890013 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-858ddd8f98-jprhs" event={"ID":"ffdb8c39-acdf-40d9-9c23-bb881eb0b755","Type":"ContainerStarted","Data":"98fcfae738ded145461a43df58c1f1cbf13c56aca28a57c902db3162dc7e5e72"} Oct 13 17:34:23 crc kubenswrapper[4720]: I1013 17:34:23.911906 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-858ddd8f98-jprhs" event={"ID":"ffdb8c39-acdf-40d9-9c23-bb881eb0b755","Type":"ContainerStarted","Data":"a61a605d97281efa42703e1e36aa08125bcdd241a5ab68a6fecb2dd863f1674a"} Oct 13 17:34:23 crc kubenswrapper[4720]: I1013 17:34:23.936352 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-858ddd8f98-jprhs" podStartSLOduration=1.827582504 podStartE2EDuration="3.936302035s" podCreationTimestamp="2025-10-13 17:34:20 +0000 UTC" firstStartedPulling="2025-10-13 17:34:20.809946606 +0000 UTC m=+606.267196738" lastFinishedPulling="2025-10-13 17:34:22.918666137 +0000 UTC m=+608.375916269" observedRunningTime="2025-10-13 17:34:23.931378189 +0000 UTC m=+609.388628341" watchObservedRunningTime="2025-10-13 17:34:23.936302035 +0000 UTC m=+609.393552207" Oct 13 17:34:24 crc kubenswrapper[4720]: I1013 17:34:24.965838 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-fdff9cb8d-k7h8j"] Oct 13 17:34:24 crc kubenswrapper[4720]: I1013 17:34:24.966936 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-k7h8j" Oct 13 17:34:24 crc kubenswrapper[4720]: I1013 17:34:24.979461 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-fdff9cb8d-k7h8j"] Oct 13 17:34:24 crc kubenswrapper[4720]: I1013 17:34:24.979989 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-vktnb" Oct 13 17:34:24 crc kubenswrapper[4720]: I1013 17:34:24.982401 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-6cdbc54649-66xwd"] Oct 13 17:34:24 crc kubenswrapper[4720]: I1013 17:34:24.983034 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-66xwd" Oct 13 17:34:24 crc kubenswrapper[4720]: I1013 17:34:24.991647 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Oct 13 17:34:25 crc kubenswrapper[4720]: I1013 17:34:25.000836 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6cdbc54649-66xwd"] Oct 13 17:34:25 crc kubenswrapper[4720]: I1013 17:34:25.003171 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-r6sg8"] Oct 13 17:34:25 crc kubenswrapper[4720]: I1013 17:34:25.003793 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-r6sg8" Oct 13 17:34:25 crc kubenswrapper[4720]: I1013 17:34:25.031515 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9dw7\" (UniqueName: \"kubernetes.io/projected/fa98b66b-f1b2-4d57-8386-b449bf1076ec-kube-api-access-p9dw7\") pod \"nmstate-metrics-fdff9cb8d-k7h8j\" (UID: \"fa98b66b-f1b2-4d57-8386-b449bf1076ec\") " pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-k7h8j" Oct 13 17:34:25 crc kubenswrapper[4720]: I1013 17:34:25.031564 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/420f8bbd-5b94-4775-8248-68220b91202f-ovs-socket\") pod \"nmstate-handler-r6sg8\" (UID: \"420f8bbd-5b94-4775-8248-68220b91202f\") " pod="openshift-nmstate/nmstate-handler-r6sg8" Oct 13 17:34:25 crc kubenswrapper[4720]: I1013 17:34:25.031588 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7xqq\" (UniqueName: \"kubernetes.io/projected/420f8bbd-5b94-4775-8248-68220b91202f-kube-api-access-k7xqq\") pod \"nmstate-handler-r6sg8\" (UID: \"420f8bbd-5b94-4775-8248-68220b91202f\") " pod="openshift-nmstate/nmstate-handler-r6sg8" Oct 13 17:34:25 crc kubenswrapper[4720]: I1013 17:34:25.031607 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/420f8bbd-5b94-4775-8248-68220b91202f-dbus-socket\") pod \"nmstate-handler-r6sg8\" (UID: \"420f8bbd-5b94-4775-8248-68220b91202f\") " pod="openshift-nmstate/nmstate-handler-r6sg8" Oct 13 17:34:25 crc kubenswrapper[4720]: I1013 17:34:25.031628 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/e6844590-4dcb-4007-9e33-12ded957f55b-tls-key-pair\") pod \"nmstate-webhook-6cdbc54649-66xwd\" (UID: \"e6844590-4dcb-4007-9e33-12ded957f55b\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-66xwd" Oct 13 17:34:25 crc kubenswrapper[4720]: I1013 17:34:25.031652 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vp5j\" (UniqueName: \"kubernetes.io/projected/e6844590-4dcb-4007-9e33-12ded957f55b-kube-api-access-5vp5j\") pod \"nmstate-webhook-6cdbc54649-66xwd\" (UID: \"e6844590-4dcb-4007-9e33-12ded957f55b\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-66xwd" Oct 13 17:34:25 crc kubenswrapper[4720]: I1013 17:34:25.031757 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/420f8bbd-5b94-4775-8248-68220b91202f-nmstate-lock\") pod \"nmstate-handler-r6sg8\" (UID: \"420f8bbd-5b94-4775-8248-68220b91202f\") " pod="openshift-nmstate/nmstate-handler-r6sg8" Oct 13 17:34:25 crc kubenswrapper[4720]: I1013 17:34:25.100988 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6b874cbd85-jt5jd"] Oct 13 17:34:25 crc kubenswrapper[4720]: I1013 17:34:25.101585 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-jt5jd" Oct 13 17:34:25 crc kubenswrapper[4720]: I1013 17:34:25.109605 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-mm68l" Oct 13 17:34:25 crc kubenswrapper[4720]: I1013 17:34:25.109611 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Oct 13 17:34:25 crc kubenswrapper[4720]: I1013 17:34:25.109640 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Oct 13 17:34:25 crc kubenswrapper[4720]: I1013 17:34:25.112981 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6b874cbd85-jt5jd"] Oct 13 17:34:25 crc kubenswrapper[4720]: I1013 17:34:25.133062 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9dw7\" (UniqueName: \"kubernetes.io/projected/fa98b66b-f1b2-4d57-8386-b449bf1076ec-kube-api-access-p9dw7\") pod \"nmstate-metrics-fdff9cb8d-k7h8j\" (UID: \"fa98b66b-f1b2-4d57-8386-b449bf1076ec\") " pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-k7h8j" Oct 13 17:34:25 crc kubenswrapper[4720]: I1013 17:34:25.133104 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/420f8bbd-5b94-4775-8248-68220b91202f-ovs-socket\") pod \"nmstate-handler-r6sg8\" (UID: \"420f8bbd-5b94-4775-8248-68220b91202f\") " pod="openshift-nmstate/nmstate-handler-r6sg8" Oct 13 17:34:25 crc kubenswrapper[4720]: I1013 17:34:25.133125 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7xqq\" (UniqueName: \"kubernetes.io/projected/420f8bbd-5b94-4775-8248-68220b91202f-kube-api-access-k7xqq\") pod \"nmstate-handler-r6sg8\" (UID: \"420f8bbd-5b94-4775-8248-68220b91202f\") " pod="openshift-nmstate/nmstate-handler-r6sg8" Oct 13 17:34:25 crc kubenswrapper[4720]: I1013 17:34:25.133145 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2b8sh\" (UniqueName: \"kubernetes.io/projected/f1c52489-5f05-43ca-a79c-db2a69061eac-kube-api-access-2b8sh\") pod \"nmstate-console-plugin-6b874cbd85-jt5jd\" (UID: \"f1c52489-5f05-43ca-a79c-db2a69061eac\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-jt5jd" Oct 13 17:34:25 crc kubenswrapper[4720]: I1013 17:34:25.133163 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/420f8bbd-5b94-4775-8248-68220b91202f-dbus-socket\") pod \"nmstate-handler-r6sg8\" (UID: \"420f8bbd-5b94-4775-8248-68220b91202f\") " pod="openshift-nmstate/nmstate-handler-r6sg8" Oct 13 17:34:25 crc kubenswrapper[4720]: I1013 17:34:25.133200 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/e6844590-4dcb-4007-9e33-12ded957f55b-tls-key-pair\") pod \"nmstate-webhook-6cdbc54649-66xwd\" (UID: \"e6844590-4dcb-4007-9e33-12ded957f55b\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-66xwd" Oct 13 17:34:25 crc kubenswrapper[4720]: I1013 17:34:25.133223 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vp5j\" (UniqueName: \"kubernetes.io/projected/e6844590-4dcb-4007-9e33-12ded957f55b-kube-api-access-5vp5j\") pod \"nmstate-webhook-6cdbc54649-66xwd\" (UID: 
\"e6844590-4dcb-4007-9e33-12ded957f55b\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-66xwd" Oct 13 17:34:25 crc kubenswrapper[4720]: I1013 17:34:25.133220 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/420f8bbd-5b94-4775-8248-68220b91202f-ovs-socket\") pod \"nmstate-handler-r6sg8\" (UID: \"420f8bbd-5b94-4775-8248-68220b91202f\") " pod="openshift-nmstate/nmstate-handler-r6sg8" Oct 13 17:34:25 crc kubenswrapper[4720]: I1013 17:34:25.133241 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/420f8bbd-5b94-4775-8248-68220b91202f-nmstate-lock\") pod \"nmstate-handler-r6sg8\" (UID: \"420f8bbd-5b94-4775-8248-68220b91202f\") " pod="openshift-nmstate/nmstate-handler-r6sg8" Oct 13 17:34:25 crc kubenswrapper[4720]: I1013 17:34:25.133300 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/f1c52489-5f05-43ca-a79c-db2a69061eac-plugin-serving-cert\") pod \"nmstate-console-plugin-6b874cbd85-jt5jd\" (UID: \"f1c52489-5f05-43ca-a79c-db2a69061eac\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-jt5jd" Oct 13 17:34:25 crc kubenswrapper[4720]: I1013 17:34:25.133317 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/f1c52489-5f05-43ca-a79c-db2a69061eac-nginx-conf\") pod \"nmstate-console-plugin-6b874cbd85-jt5jd\" (UID: \"f1c52489-5f05-43ca-a79c-db2a69061eac\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-jt5jd" Oct 13 17:34:25 crc kubenswrapper[4720]: E1013 17:34:25.133427 4720 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Oct 13 17:34:25 crc kubenswrapper[4720]: I1013 17:34:25.133448 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/420f8bbd-5b94-4775-8248-68220b91202f-dbus-socket\") pod \"nmstate-handler-r6sg8\" (UID: \"420f8bbd-5b94-4775-8248-68220b91202f\") " pod="openshift-nmstate/nmstate-handler-r6sg8" Oct 13 17:34:25 crc kubenswrapper[4720]: E1013 17:34:25.133466 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e6844590-4dcb-4007-9e33-12ded957f55b-tls-key-pair podName:e6844590-4dcb-4007-9e33-12ded957f55b nodeName:}" failed. No retries permitted until 2025-10-13 17:34:25.633451711 +0000 UTC m=+611.090701843 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/e6844590-4dcb-4007-9e33-12ded957f55b-tls-key-pair") pod "nmstate-webhook-6cdbc54649-66xwd" (UID: "e6844590-4dcb-4007-9e33-12ded957f55b") : secret "openshift-nmstate-webhook" not found Oct 13 17:34:25 crc kubenswrapper[4720]: I1013 17:34:25.133646 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/420f8bbd-5b94-4775-8248-68220b91202f-nmstate-lock\") pod \"nmstate-handler-r6sg8\" (UID: \"420f8bbd-5b94-4775-8248-68220b91202f\") " pod="openshift-nmstate/nmstate-handler-r6sg8" Oct 13 17:34:25 crc kubenswrapper[4720]: I1013 17:34:25.150367 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7xqq\" (UniqueName: \"kubernetes.io/projected/420f8bbd-5b94-4775-8248-68220b91202f-kube-api-access-k7xqq\") pod \"nmstate-handler-r6sg8\" (UID: \"420f8bbd-5b94-4775-8248-68220b91202f\") " pod="openshift-nmstate/nmstate-handler-r6sg8" Oct 13 17:34:25 crc kubenswrapper[4720]: I1013 17:34:25.150429 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9dw7\" (UniqueName: \"kubernetes.io/projected/fa98b66b-f1b2-4d57-8386-b449bf1076ec-kube-api-access-p9dw7\") pod \"nmstate-metrics-fdff9cb8d-k7h8j\" (UID: \"fa98b66b-f1b2-4d57-8386-b449bf1076ec\") " pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-k7h8j" Oct 13 17:34:25 crc kubenswrapper[4720]: I1013 17:34:25.152172 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vp5j\" (UniqueName: \"kubernetes.io/projected/e6844590-4dcb-4007-9e33-12ded957f55b-kube-api-access-5vp5j\") pod \"nmstate-webhook-6cdbc54649-66xwd\" (UID: \"e6844590-4dcb-4007-9e33-12ded957f55b\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-66xwd" Oct 13 17:34:25 crc kubenswrapper[4720]: I1013 17:34:25.234591 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/f1c52489-5f05-43ca-a79c-db2a69061eac-plugin-serving-cert\") pod \"nmstate-console-plugin-6b874cbd85-jt5jd\" (UID: \"f1c52489-5f05-43ca-a79c-db2a69061eac\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-jt5jd" Oct 13 17:34:25 crc kubenswrapper[4720]: I1013 17:34:25.234623 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/f1c52489-5f05-43ca-a79c-db2a69061eac-nginx-conf\") pod \"nmstate-console-plugin-6b874cbd85-jt5jd\" (UID: \"f1c52489-5f05-43ca-a79c-db2a69061eac\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-jt5jd" Oct 13 17:34:25 crc kubenswrapper[4720]: I1013 17:34:25.234688 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2b8sh\" (UniqueName: \"kubernetes.io/projected/f1c52489-5f05-43ca-a79c-db2a69061eac-kube-api-access-2b8sh\") pod \"nmstate-console-plugin-6b874cbd85-jt5jd\" (UID: \"f1c52489-5f05-43ca-a79c-db2a69061eac\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-jt5jd" Oct 13 17:34:25 crc kubenswrapper[4720]: I1013 17:34:25.237063 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/f1c52489-5f05-43ca-a79c-db2a69061eac-nginx-conf\") pod \"nmstate-console-plugin-6b874cbd85-jt5jd\" (UID: \"f1c52489-5f05-43ca-a79c-db2a69061eac\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-jt5jd" Oct 13 17:34:25 crc 
kubenswrapper[4720]: I1013 17:34:25.250711 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/f1c52489-5f05-43ca-a79c-db2a69061eac-plugin-serving-cert\") pod \"nmstate-console-plugin-6b874cbd85-jt5jd\" (UID: \"f1c52489-5f05-43ca-a79c-db2a69061eac\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-jt5jd" Oct 13 17:34:25 crc kubenswrapper[4720]: I1013 17:34:25.256746 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2b8sh\" (UniqueName: \"kubernetes.io/projected/f1c52489-5f05-43ca-a79c-db2a69061eac-kube-api-access-2b8sh\") pod \"nmstate-console-plugin-6b874cbd85-jt5jd\" (UID: \"f1c52489-5f05-43ca-a79c-db2a69061eac\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-jt5jd" Oct 13 17:34:25 crc kubenswrapper[4720]: I1013 17:34:25.280035 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-k7h8j" Oct 13 17:34:25 crc kubenswrapper[4720]: I1013 17:34:25.292856 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-58d7d9b477-h56fg"] Oct 13 17:34:25 crc kubenswrapper[4720]: I1013 17:34:25.293775 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-58d7d9b477-h56fg" Oct 13 17:34:25 crc kubenswrapper[4720]: I1013 17:34:25.303523 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-58d7d9b477-h56fg"] Oct 13 17:34:25 crc kubenswrapper[4720]: I1013 17:34:25.348549 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-r6sg8" Oct 13 17:34:25 crc kubenswrapper[4720]: W1013 17:34:25.371694 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod420f8bbd_5b94_4775_8248_68220b91202f.slice/crio-7a3d6b15e94eb0830f9c26c31aa892b532ef4c855088b52d406f923213345cda WatchSource:0}: Error finding container 7a3d6b15e94eb0830f9c26c31aa892b532ef4c855088b52d406f923213345cda: Status 404 returned error can't find the container with id 7a3d6b15e94eb0830f9c26c31aa892b532ef4c855088b52d406f923213345cda Oct 13 17:34:25 crc kubenswrapper[4720]: I1013 17:34:25.424270 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-jt5jd" Oct 13 17:34:25 crc kubenswrapper[4720]: I1013 17:34:25.447694 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7d554798-d482-4bb7-baca-c6bb7a3b5f9e-console-serving-cert\") pod \"console-58d7d9b477-h56fg\" (UID: \"7d554798-d482-4bb7-baca-c6bb7a3b5f9e\") " pod="openshift-console/console-58d7d9b477-h56fg" Oct 13 17:34:25 crc kubenswrapper[4720]: I1013 17:34:25.447729 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7d554798-d482-4bb7-baca-c6bb7a3b5f9e-trusted-ca-bundle\") pod \"console-58d7d9b477-h56fg\" (UID: \"7d554798-d482-4bb7-baca-c6bb7a3b5f9e\") " pod="openshift-console/console-58d7d9b477-h56fg" Oct 13 17:34:25 crc kubenswrapper[4720]: I1013 17:34:25.447765 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7d554798-d482-4bb7-baca-c6bb7a3b5f9e-service-ca\") pod \"console-58d7d9b477-h56fg\" (UID: \"7d554798-d482-4bb7-baca-c6bb7a3b5f9e\") " pod="openshift-console/console-58d7d9b477-h56fg" Oct 13 17:34:25 crc kubenswrapper[4720]: I1013 17:34:25.447785 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7d554798-d482-4bb7-baca-c6bb7a3b5f9e-console-config\") pod \"console-58d7d9b477-h56fg\" (UID: \"7d554798-d482-4bb7-baca-c6bb7a3b5f9e\") " pod="openshift-console/console-58d7d9b477-h56fg" Oct 13 17:34:25 crc kubenswrapper[4720]: I1013 17:34:25.447820 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcfw8\" (UniqueName: \"kubernetes.io/projected/7d554798-d482-4bb7-baca-c6bb7a3b5f9e-kube-api-access-bcfw8\") pod \"console-58d7d9b477-h56fg\" (UID: \"7d554798-d482-4bb7-baca-c6bb7a3b5f9e\") " pod="openshift-console/console-58d7d9b477-h56fg" Oct 13 17:34:25 crc kubenswrapper[4720]: I1013 17:34:25.449383 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7d554798-d482-4bb7-baca-c6bb7a3b5f9e-oauth-serving-cert\") pod \"console-58d7d9b477-h56fg\" (UID: \"7d554798-d482-4bb7-baca-c6bb7a3b5f9e\") " pod="openshift-console/console-58d7d9b477-h56fg" Oct 13 17:34:25 crc kubenswrapper[4720]: I1013 17:34:25.449434 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7d554798-d482-4bb7-baca-c6bb7a3b5f9e-console-oauth-config\") pod \"console-58d7d9b477-h56fg\" (UID: \"7d554798-d482-4bb7-baca-c6bb7a3b5f9e\") " pod="openshift-console/console-58d7d9b477-h56fg" Oct 13 17:34:25 crc kubenswrapper[4720]: I1013 17:34:25.493319 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-fdff9cb8d-k7h8j"] Oct 13 17:34:25 crc kubenswrapper[4720]: W1013 17:34:25.507118 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa98b66b_f1b2_4d57_8386_b449bf1076ec.slice/crio-c8a361b96d8e6e2b8deb1f06458a7e4710f04f1e251732995064d875ed44e5e6 WatchSource:0}: Error finding container 
c8a361b96d8e6e2b8deb1f06458a7e4710f04f1e251732995064d875ed44e5e6: Status 404 returned error can't find the container with id c8a361b96d8e6e2b8deb1f06458a7e4710f04f1e251732995064d875ed44e5e6 Oct 13 17:34:25 crc kubenswrapper[4720]: I1013 17:34:25.550833 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7d554798-d482-4bb7-baca-c6bb7a3b5f9e-console-config\") pod \"console-58d7d9b477-h56fg\" (UID: \"7d554798-d482-4bb7-baca-c6bb7a3b5f9e\") " pod="openshift-console/console-58d7d9b477-h56fg" Oct 13 17:34:25 crc kubenswrapper[4720]: I1013 17:34:25.550865 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7d554798-d482-4bb7-baca-c6bb7a3b5f9e-service-ca\") pod \"console-58d7d9b477-h56fg\" (UID: \"7d554798-d482-4bb7-baca-c6bb7a3b5f9e\") " pod="openshift-console/console-58d7d9b477-h56fg" Oct 13 17:34:25 crc kubenswrapper[4720]: I1013 17:34:25.550905 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcfw8\" (UniqueName: \"kubernetes.io/projected/7d554798-d482-4bb7-baca-c6bb7a3b5f9e-kube-api-access-bcfw8\") pod \"console-58d7d9b477-h56fg\" (UID: \"7d554798-d482-4bb7-baca-c6bb7a3b5f9e\") " pod="openshift-console/console-58d7d9b477-h56fg" Oct 13 17:34:25 crc kubenswrapper[4720]: I1013 17:34:25.550955 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7d554798-d482-4bb7-baca-c6bb7a3b5f9e-oauth-serving-cert\") pod \"console-58d7d9b477-h56fg\" (UID: \"7d554798-d482-4bb7-baca-c6bb7a3b5f9e\") " pod="openshift-console/console-58d7d9b477-h56fg" Oct 13 17:34:25 crc kubenswrapper[4720]: I1013 17:34:25.550971 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7d554798-d482-4bb7-baca-c6bb7a3b5f9e-console-oauth-config\") pod \"console-58d7d9b477-h56fg\" (UID: \"7d554798-d482-4bb7-baca-c6bb7a3b5f9e\") " pod="openshift-console/console-58d7d9b477-h56fg" Oct 13 17:34:25 crc kubenswrapper[4720]: I1013 17:34:25.550986 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7d554798-d482-4bb7-baca-c6bb7a3b5f9e-console-serving-cert\") pod \"console-58d7d9b477-h56fg\" (UID: \"7d554798-d482-4bb7-baca-c6bb7a3b5f9e\") " pod="openshift-console/console-58d7d9b477-h56fg" Oct 13 17:34:25 crc kubenswrapper[4720]: I1013 17:34:25.551001 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7d554798-d482-4bb7-baca-c6bb7a3b5f9e-trusted-ca-bundle\") pod \"console-58d7d9b477-h56fg\" (UID: \"7d554798-d482-4bb7-baca-c6bb7a3b5f9e\") " pod="openshift-console/console-58d7d9b477-h56fg" Oct 13 17:34:25 crc kubenswrapper[4720]: I1013 17:34:25.552119 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7d554798-d482-4bb7-baca-c6bb7a3b5f9e-trusted-ca-bundle\") pod \"console-58d7d9b477-h56fg\" (UID: \"7d554798-d482-4bb7-baca-c6bb7a3b5f9e\") " pod="openshift-console/console-58d7d9b477-h56fg" Oct 13 17:34:25 crc kubenswrapper[4720]: I1013 17:34:25.552806 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/7d554798-d482-4bb7-baca-c6bb7a3b5f9e-console-config\") pod \"console-58d7d9b477-h56fg\" (UID: \"7d554798-d482-4bb7-baca-c6bb7a3b5f9e\") " pod="openshift-console/console-58d7d9b477-h56fg" Oct 13 17:34:25 crc kubenswrapper[4720]: I1013 17:34:25.552879 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7d554798-d482-4bb7-baca-c6bb7a3b5f9e-service-ca\") pod \"console-58d7d9b477-h56fg\" (UID: \"7d554798-d482-4bb7-baca-c6bb7a3b5f9e\") " pod="openshift-console/console-58d7d9b477-h56fg" Oct 13 17:34:25 crc kubenswrapper[4720]: I1013 17:34:25.553381 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7d554798-d482-4bb7-baca-c6bb7a3b5f9e-oauth-serving-cert\") pod \"console-58d7d9b477-h56fg\" (UID: \"7d554798-d482-4bb7-baca-c6bb7a3b5f9e\") " pod="openshift-console/console-58d7d9b477-h56fg" Oct 13 17:34:25 crc kubenswrapper[4720]: I1013 17:34:25.557058 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7d554798-d482-4bb7-baca-c6bb7a3b5f9e-console-oauth-config\") pod \"console-58d7d9b477-h56fg\" (UID: \"7d554798-d482-4bb7-baca-c6bb7a3b5f9e\") " pod="openshift-console/console-58d7d9b477-h56fg" Oct 13 17:34:25 crc kubenswrapper[4720]: I1013 17:34:25.557492 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7d554798-d482-4bb7-baca-c6bb7a3b5f9e-console-serving-cert\") pod \"console-58d7d9b477-h56fg\" (UID: \"7d554798-d482-4bb7-baca-c6bb7a3b5f9e\") " pod="openshift-console/console-58d7d9b477-h56fg" Oct 13 17:34:25 crc kubenswrapper[4720]: I1013 17:34:25.568760 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcfw8\" (UniqueName: \"kubernetes.io/projected/7d554798-d482-4bb7-baca-c6bb7a3b5f9e-kube-api-access-bcfw8\") pod \"console-58d7d9b477-h56fg\" (UID: \"7d554798-d482-4bb7-baca-c6bb7a3b5f9e\") " pod="openshift-console/console-58d7d9b477-h56fg" Oct 13 17:34:25 crc kubenswrapper[4720]: I1013 17:34:25.608207 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6b874cbd85-jt5jd"] Oct 13 17:34:25 crc kubenswrapper[4720]: W1013 17:34:25.611912 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf1c52489_5f05_43ca_a79c_db2a69061eac.slice/crio-6273ddbb9391dcf45788714b6d3795cd708f63c22844a0032ad0dbb8f8cbf667 WatchSource:0}: Error finding container 6273ddbb9391dcf45788714b6d3795cd708f63c22844a0032ad0dbb8f8cbf667: Status 404 returned error can't find the container with id 6273ddbb9391dcf45788714b6d3795cd708f63c22844a0032ad0dbb8f8cbf667 Oct 13 17:34:25 crc kubenswrapper[4720]: I1013 17:34:25.652585 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/e6844590-4dcb-4007-9e33-12ded957f55b-tls-key-pair\") pod \"nmstate-webhook-6cdbc54649-66xwd\" (UID: \"e6844590-4dcb-4007-9e33-12ded957f55b\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-66xwd" Oct 13 17:34:25 crc kubenswrapper[4720]: I1013 17:34:25.655901 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/e6844590-4dcb-4007-9e33-12ded957f55b-tls-key-pair\") pod \"nmstate-webhook-6cdbc54649-66xwd\" 
(UID: \"e6844590-4dcb-4007-9e33-12ded957f55b\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-66xwd" Oct 13 17:34:25 crc kubenswrapper[4720]: I1013 17:34:25.661046 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-58d7d9b477-h56fg" Oct 13 17:34:25 crc kubenswrapper[4720]: I1013 17:34:25.897175 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-58d7d9b477-h56fg"] Oct 13 17:34:25 crc kubenswrapper[4720]: I1013 17:34:25.897492 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-66xwd" Oct 13 17:34:25 crc kubenswrapper[4720]: W1013 17:34:25.909737 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d554798_d482_4bb7_baca_c6bb7a3b5f9e.slice/crio-8b29ec6b8ee587e61fab409445e83d344405ba0f287c060c0442d231ab49b171 WatchSource:0}: Error finding container 8b29ec6b8ee587e61fab409445e83d344405ba0f287c060c0442d231ab49b171: Status 404 returned error can't find the container with id 8b29ec6b8ee587e61fab409445e83d344405ba0f287c060c0442d231ab49b171 Oct 13 17:34:25 crc kubenswrapper[4720]: I1013 17:34:25.921208 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-58d7d9b477-h56fg" event={"ID":"7d554798-d482-4bb7-baca-c6bb7a3b5f9e","Type":"ContainerStarted","Data":"8b29ec6b8ee587e61fab409445e83d344405ba0f287c060c0442d231ab49b171"} Oct 13 17:34:25 crc kubenswrapper[4720]: I1013 17:34:25.922194 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-jt5jd" event={"ID":"f1c52489-5f05-43ca-a79c-db2a69061eac","Type":"ContainerStarted","Data":"6273ddbb9391dcf45788714b6d3795cd708f63c22844a0032ad0dbb8f8cbf667"} Oct 13 17:34:25 crc kubenswrapper[4720]: I1013 17:34:25.923159 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-k7h8j" event={"ID":"fa98b66b-f1b2-4d57-8386-b449bf1076ec","Type":"ContainerStarted","Data":"c8a361b96d8e6e2b8deb1f06458a7e4710f04f1e251732995064d875ed44e5e6"} Oct 13 17:34:25 crc kubenswrapper[4720]: I1013 17:34:25.923914 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-r6sg8" event={"ID":"420f8bbd-5b94-4775-8248-68220b91202f","Type":"ContainerStarted","Data":"7a3d6b15e94eb0830f9c26c31aa892b532ef4c855088b52d406f923213345cda"} Oct 13 17:34:26 crc kubenswrapper[4720]: I1013 17:34:26.164998 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6cdbc54649-66xwd"] Oct 13 17:34:26 crc kubenswrapper[4720]: W1013 17:34:26.178788 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode6844590_4dcb_4007_9e33_12ded957f55b.slice/crio-051597d2749fbff771ab71094f6430e36addd7d0513c6ba14f9819ebce5c8ed1 WatchSource:0}: Error finding container 051597d2749fbff771ab71094f6430e36addd7d0513c6ba14f9819ebce5c8ed1: Status 404 returned error can't find the container with id 051597d2749fbff771ab71094f6430e36addd7d0513c6ba14f9819ebce5c8ed1 Oct 13 17:34:26 crc kubenswrapper[4720]: I1013 17:34:26.939237 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-58d7d9b477-h56fg" event={"ID":"7d554798-d482-4bb7-baca-c6bb7a3b5f9e","Type":"ContainerStarted","Data":"4686df90fa1346b6aca5c4694ec4a4bae62460f6fe07eb62ecb4d02a4c8ea73c"} Oct 13 17:34:26 crc 
kubenswrapper[4720]: I1013 17:34:26.955932 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-66xwd" event={"ID":"e6844590-4dcb-4007-9e33-12ded957f55b","Type":"ContainerStarted","Data":"051597d2749fbff771ab71094f6430e36addd7d0513c6ba14f9819ebce5c8ed1"} Oct 13 17:34:26 crc kubenswrapper[4720]: I1013 17:34:26.975327 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-58d7d9b477-h56fg" podStartSLOduration=1.975300826 podStartE2EDuration="1.975300826s" podCreationTimestamp="2025-10-13 17:34:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:34:26.964969233 +0000 UTC m=+612.422219405" watchObservedRunningTime="2025-10-13 17:34:26.975300826 +0000 UTC m=+612.432550998" Oct 13 17:34:29 crc kubenswrapper[4720]: I1013 17:34:29.979530 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-jt5jd" event={"ID":"f1c52489-5f05-43ca-a79c-db2a69061eac","Type":"ContainerStarted","Data":"9b591863a2867f30fd801caf0f734ca48b1fd6f0a86772310d5e05abb50349cd"} Oct 13 17:34:29 crc kubenswrapper[4720]: I1013 17:34:29.984030 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-66xwd" event={"ID":"e6844590-4dcb-4007-9e33-12ded957f55b","Type":"ContainerStarted","Data":"3b66361ad916c174fa247aa7d92d8e42bb7ca96d5ffd5337c247318fb31daf60"} Oct 13 17:34:29 crc kubenswrapper[4720]: I1013 17:34:29.984298 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-66xwd" Oct 13 17:34:29 crc kubenswrapper[4720]: I1013 17:34:29.990089 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-k7h8j" event={"ID":"fa98b66b-f1b2-4d57-8386-b449bf1076ec","Type":"ContainerStarted","Data":"e317f97a4a743d815640405bbea9a20650dfed085396ae2cd7b90ad48d2b4cde"} Oct 13 17:34:29 crc kubenswrapper[4720]: I1013 17:34:29.993093 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-r6sg8" event={"ID":"420f8bbd-5b94-4775-8248-68220b91202f","Type":"ContainerStarted","Data":"ad72f99729ab42421a1218f93c610c5dc7ca8dad0e13febcb55a4e3cd9efc8dc"} Oct 13 17:34:29 crc kubenswrapper[4720]: I1013 17:34:29.993373 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-r6sg8" Oct 13 17:34:30 crc kubenswrapper[4720]: I1013 17:34:30.039654 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-66xwd" podStartSLOduration=3.3471252590000002 podStartE2EDuration="6.039625894s" podCreationTimestamp="2025-10-13 17:34:24 +0000 UTC" firstStartedPulling="2025-10-13 17:34:26.180512631 +0000 UTC m=+611.637762773" lastFinishedPulling="2025-10-13 17:34:28.873013236 +0000 UTC m=+614.330263408" observedRunningTime="2025-10-13 17:34:30.035991982 +0000 UTC m=+615.493242154" watchObservedRunningTime="2025-10-13 17:34:30.039625894 +0000 UTC m=+615.496876066" Oct 13 17:34:30 crc kubenswrapper[4720]: I1013 17:34:30.040458 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-jt5jd" podStartSLOduration=1.788344429 podStartE2EDuration="5.040449205s" podCreationTimestamp="2025-10-13 17:34:25 +0000 UTC" firstStartedPulling="2025-10-13 
17:34:25.613708177 +0000 UTC m=+611.070958309" lastFinishedPulling="2025-10-13 17:34:28.865812953 +0000 UTC m=+614.323063085" observedRunningTime="2025-10-13 17:34:30.003029691 +0000 UTC m=+615.460279863" watchObservedRunningTime="2025-10-13 17:34:30.040449205 +0000 UTC m=+615.497699367" Oct 13 17:34:30 crc kubenswrapper[4720]: I1013 17:34:30.065066 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-r6sg8" podStartSLOduration=2.517222286 podStartE2EDuration="6.065042252s" podCreationTimestamp="2025-10-13 17:34:24 +0000 UTC" firstStartedPulling="2025-10-13 17:34:25.3749726 +0000 UTC m=+610.832222732" lastFinishedPulling="2025-10-13 17:34:28.922792566 +0000 UTC m=+614.380042698" observedRunningTime="2025-10-13 17:34:30.063222406 +0000 UTC m=+615.520472558" watchObservedRunningTime="2025-10-13 17:34:30.065042252 +0000 UTC m=+615.522292394" Oct 13 17:34:32 crc kubenswrapper[4720]: I1013 17:34:32.006182 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-k7h8j" event={"ID":"fa98b66b-f1b2-4d57-8386-b449bf1076ec","Type":"ContainerStarted","Data":"3823d4711825ac7edcfc7c2ef71004e37347df921b1df1ad27241bd8baa31f2a"} Oct 13 17:34:32 crc kubenswrapper[4720]: I1013 17:34:32.038344 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-k7h8j" podStartSLOduration=1.9561506290000001 podStartE2EDuration="8.038311498s" podCreationTimestamp="2025-10-13 17:34:24 +0000 UTC" firstStartedPulling="2025-10-13 17:34:25.508904235 +0000 UTC m=+610.966154367" lastFinishedPulling="2025-10-13 17:34:31.591065104 +0000 UTC m=+617.048315236" observedRunningTime="2025-10-13 17:34:32.031979577 +0000 UTC m=+617.489229709" watchObservedRunningTime="2025-10-13 17:34:32.038311498 +0000 UTC m=+617.495561690" Oct 13 17:34:35 crc kubenswrapper[4720]: I1013 17:34:35.390933 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-r6sg8" Oct 13 17:34:35 crc kubenswrapper[4720]: I1013 17:34:35.662145 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-58d7d9b477-h56fg" Oct 13 17:34:35 crc kubenswrapper[4720]: I1013 17:34:35.662258 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-58d7d9b477-h56fg" Oct 13 17:34:35 crc kubenswrapper[4720]: I1013 17:34:35.671765 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-58d7d9b477-h56fg" Oct 13 17:34:36 crc kubenswrapper[4720]: I1013 17:34:36.044450 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-58d7d9b477-h56fg" Oct 13 17:34:36 crc kubenswrapper[4720]: I1013 17:34:36.138870 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-5cfzh"] Oct 13 17:34:45 crc kubenswrapper[4720]: I1013 17:34:45.907054 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-66xwd" Oct 13 17:35:01 crc kubenswrapper[4720]: I1013 17:35:01.184772 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-5cfzh" podUID="b0af5887-2244-4dfb-8e2a-a66ac6bf6762" containerName="console" containerID="cri-o://d09626cc28c6245e7b640ba479f51568cced22328b6223d2a977bf58fb9e334a" gracePeriod=15 Oct 13 17:35:01 crc 
kubenswrapper[4720]: I1013 17:35:01.635031 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-5cfzh_b0af5887-2244-4dfb-8e2a-a66ac6bf6762/console/0.log" Oct 13 17:35:01 crc kubenswrapper[4720]: I1013 17:35:01.635568 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-5cfzh" Oct 13 17:35:01 crc kubenswrapper[4720]: I1013 17:35:01.788392 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b0af5887-2244-4dfb-8e2a-a66ac6bf6762-oauth-serving-cert\") pod \"b0af5887-2244-4dfb-8e2a-a66ac6bf6762\" (UID: \"b0af5887-2244-4dfb-8e2a-a66ac6bf6762\") " Oct 13 17:35:01 crc kubenswrapper[4720]: I1013 17:35:01.788451 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b0af5887-2244-4dfb-8e2a-a66ac6bf6762-console-config\") pod \"b0af5887-2244-4dfb-8e2a-a66ac6bf6762\" (UID: \"b0af5887-2244-4dfb-8e2a-a66ac6bf6762\") " Oct 13 17:35:01 crc kubenswrapper[4720]: I1013 17:35:01.788528 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b0af5887-2244-4dfb-8e2a-a66ac6bf6762-console-oauth-config\") pod \"b0af5887-2244-4dfb-8e2a-a66ac6bf6762\" (UID: \"b0af5887-2244-4dfb-8e2a-a66ac6bf6762\") " Oct 13 17:35:01 crc kubenswrapper[4720]: I1013 17:35:01.789356 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0af5887-2244-4dfb-8e2a-a66ac6bf6762-console-config" (OuterVolumeSpecName: "console-config") pod "b0af5887-2244-4dfb-8e2a-a66ac6bf6762" (UID: "b0af5887-2244-4dfb-8e2a-a66ac6bf6762"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:35:01 crc kubenswrapper[4720]: I1013 17:35:01.789410 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0af5887-2244-4dfb-8e2a-a66ac6bf6762-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "b0af5887-2244-4dfb-8e2a-a66ac6bf6762" (UID: "b0af5887-2244-4dfb-8e2a-a66ac6bf6762"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:35:01 crc kubenswrapper[4720]: I1013 17:35:01.789436 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b0af5887-2244-4dfb-8e2a-a66ac6bf6762-console-serving-cert\") pod \"b0af5887-2244-4dfb-8e2a-a66ac6bf6762\" (UID: \"b0af5887-2244-4dfb-8e2a-a66ac6bf6762\") " Oct 13 17:35:01 crc kubenswrapper[4720]: I1013 17:35:01.789586 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b0af5887-2244-4dfb-8e2a-a66ac6bf6762-service-ca\") pod \"b0af5887-2244-4dfb-8e2a-a66ac6bf6762\" (UID: \"b0af5887-2244-4dfb-8e2a-a66ac6bf6762\") " Oct 13 17:35:01 crc kubenswrapper[4720]: I1013 17:35:01.789648 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b0af5887-2244-4dfb-8e2a-a66ac6bf6762-trusted-ca-bundle\") pod \"b0af5887-2244-4dfb-8e2a-a66ac6bf6762\" (UID: \"b0af5887-2244-4dfb-8e2a-a66ac6bf6762\") " Oct 13 17:35:01 crc kubenswrapper[4720]: I1013 17:35:01.789715 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-972nv\" (UniqueName: \"kubernetes.io/projected/b0af5887-2244-4dfb-8e2a-a66ac6bf6762-kube-api-access-972nv\") pod \"b0af5887-2244-4dfb-8e2a-a66ac6bf6762\" (UID: \"b0af5887-2244-4dfb-8e2a-a66ac6bf6762\") " Oct 13 17:35:01 crc kubenswrapper[4720]: I1013 17:35:01.790307 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0af5887-2244-4dfb-8e2a-a66ac6bf6762-service-ca" (OuterVolumeSpecName: "service-ca") pod "b0af5887-2244-4dfb-8e2a-a66ac6bf6762" (UID: "b0af5887-2244-4dfb-8e2a-a66ac6bf6762"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:35:01 crc kubenswrapper[4720]: I1013 17:35:01.790462 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0af5887-2244-4dfb-8e2a-a66ac6bf6762-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "b0af5887-2244-4dfb-8e2a-a66ac6bf6762" (UID: "b0af5887-2244-4dfb-8e2a-a66ac6bf6762"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:35:01 crc kubenswrapper[4720]: I1013 17:35:01.790480 4720 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b0af5887-2244-4dfb-8e2a-a66ac6bf6762-service-ca\") on node \"crc\" DevicePath \"\"" Oct 13 17:35:01 crc kubenswrapper[4720]: I1013 17:35:01.790553 4720 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b0af5887-2244-4dfb-8e2a-a66ac6bf6762-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 13 17:35:01 crc kubenswrapper[4720]: I1013 17:35:01.790570 4720 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b0af5887-2244-4dfb-8e2a-a66ac6bf6762-console-config\") on node \"crc\" DevicePath \"\"" Oct 13 17:35:01 crc kubenswrapper[4720]: I1013 17:35:01.800889 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0af5887-2244-4dfb-8e2a-a66ac6bf6762-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "b0af5887-2244-4dfb-8e2a-a66ac6bf6762" (UID: "b0af5887-2244-4dfb-8e2a-a66ac6bf6762"). 
InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:35:01 crc kubenswrapper[4720]: I1013 17:35:01.801607 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0af5887-2244-4dfb-8e2a-a66ac6bf6762-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "b0af5887-2244-4dfb-8e2a-a66ac6bf6762" (UID: "b0af5887-2244-4dfb-8e2a-a66ac6bf6762"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:35:01 crc kubenswrapper[4720]: I1013 17:35:01.806507 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0af5887-2244-4dfb-8e2a-a66ac6bf6762-kube-api-access-972nv" (OuterVolumeSpecName: "kube-api-access-972nv") pod "b0af5887-2244-4dfb-8e2a-a66ac6bf6762" (UID: "b0af5887-2244-4dfb-8e2a-a66ac6bf6762"). InnerVolumeSpecName "kube-api-access-972nv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:35:01 crc kubenswrapper[4720]: I1013 17:35:01.891804 4720 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b0af5887-2244-4dfb-8e2a-a66ac6bf6762-console-oauth-config\") on node \"crc\" DevicePath \"\"" Oct 13 17:35:01 crc kubenswrapper[4720]: I1013 17:35:01.891836 4720 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b0af5887-2244-4dfb-8e2a-a66ac6bf6762-console-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 13 17:35:01 crc kubenswrapper[4720]: I1013 17:35:01.891846 4720 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b0af5887-2244-4dfb-8e2a-a66ac6bf6762-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 17:35:01 crc kubenswrapper[4720]: I1013 17:35:01.891854 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-972nv\" (UniqueName: \"kubernetes.io/projected/b0af5887-2244-4dfb-8e2a-a66ac6bf6762-kube-api-access-972nv\") on node \"crc\" DevicePath \"\"" Oct 13 17:35:02 crc kubenswrapper[4720]: I1013 17:35:02.227157 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-5cfzh_b0af5887-2244-4dfb-8e2a-a66ac6bf6762/console/0.log" Oct 13 17:35:02 crc kubenswrapper[4720]: I1013 17:35:02.227243 4720 generic.go:334] "Generic (PLEG): container finished" podID="b0af5887-2244-4dfb-8e2a-a66ac6bf6762" containerID="d09626cc28c6245e7b640ba479f51568cced22328b6223d2a977bf58fb9e334a" exitCode=2 Oct 13 17:35:02 crc kubenswrapper[4720]: I1013 17:35:02.227279 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-5cfzh" event={"ID":"b0af5887-2244-4dfb-8e2a-a66ac6bf6762","Type":"ContainerDied","Data":"d09626cc28c6245e7b640ba479f51568cced22328b6223d2a977bf58fb9e334a"} Oct 13 17:35:02 crc kubenswrapper[4720]: I1013 17:35:02.227319 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-5cfzh" event={"ID":"b0af5887-2244-4dfb-8e2a-a66ac6bf6762","Type":"ContainerDied","Data":"109189c6bf8a97d8ba71c641e9b6b9561511f1a45bff64bac6b11db07ba971c9"} Oct 13 17:35:02 crc kubenswrapper[4720]: I1013 17:35:02.227359 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-5cfzh" Oct 13 17:35:02 crc kubenswrapper[4720]: I1013 17:35:02.227360 4720 scope.go:117] "RemoveContainer" containerID="d09626cc28c6245e7b640ba479f51568cced22328b6223d2a977bf58fb9e334a" Oct 13 17:35:02 crc kubenswrapper[4720]: I1013 17:35:02.267975 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ss4c8"] Oct 13 17:35:02 crc kubenswrapper[4720]: I1013 17:35:02.268137 4720 scope.go:117] "RemoveContainer" containerID="d09626cc28c6245e7b640ba479f51568cced22328b6223d2a977bf58fb9e334a" Oct 13 17:35:02 crc kubenswrapper[4720]: E1013 17:35:02.268223 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0af5887-2244-4dfb-8e2a-a66ac6bf6762" containerName="console" Oct 13 17:35:02 crc kubenswrapper[4720]: I1013 17:35:02.268235 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0af5887-2244-4dfb-8e2a-a66ac6bf6762" containerName="console" Oct 13 17:35:02 crc kubenswrapper[4720]: I1013 17:35:02.268357 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0af5887-2244-4dfb-8e2a-a66ac6bf6762" containerName="console" Oct 13 17:35:02 crc kubenswrapper[4720]: I1013 17:35:02.269074 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ss4c8" Oct 13 17:35:02 crc kubenswrapper[4720]: I1013 17:35:02.271015 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Oct 13 17:35:02 crc kubenswrapper[4720]: I1013 17:35:02.273040 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-5cfzh"] Oct 13 17:35:02 crc kubenswrapper[4720]: E1013 17:35:02.281498 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d09626cc28c6245e7b640ba479f51568cced22328b6223d2a977bf58fb9e334a\": container with ID starting with d09626cc28c6245e7b640ba479f51568cced22328b6223d2a977bf58fb9e334a not found: ID does not exist" containerID="d09626cc28c6245e7b640ba479f51568cced22328b6223d2a977bf58fb9e334a" Oct 13 17:35:02 crc kubenswrapper[4720]: I1013 17:35:02.281537 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d09626cc28c6245e7b640ba479f51568cced22328b6223d2a977bf58fb9e334a"} err="failed to get container status \"d09626cc28c6245e7b640ba479f51568cced22328b6223d2a977bf58fb9e334a\": rpc error: code = NotFound desc = could not find container \"d09626cc28c6245e7b640ba479f51568cced22328b6223d2a977bf58fb9e334a\": container with ID starting with d09626cc28c6245e7b640ba479f51568cced22328b6223d2a977bf58fb9e334a not found: ID does not exist" Oct 13 17:35:02 crc kubenswrapper[4720]: I1013 17:35:02.282843 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-5cfzh"] Oct 13 17:35:02 crc kubenswrapper[4720]: I1013 17:35:02.288333 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ss4c8"] Oct 13 17:35:02 crc kubenswrapper[4720]: I1013 17:35:02.398026 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d5bc3e7b-d845-48f4-9387-94904ed3b983-util\") pod 
\"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ss4c8\" (UID: \"d5bc3e7b-d845-48f4-9387-94904ed3b983\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ss4c8" Oct 13 17:35:02 crc kubenswrapper[4720]: I1013 17:35:02.398120 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9sqn\" (UniqueName: \"kubernetes.io/projected/d5bc3e7b-d845-48f4-9387-94904ed3b983-kube-api-access-v9sqn\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ss4c8\" (UID: \"d5bc3e7b-d845-48f4-9387-94904ed3b983\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ss4c8" Oct 13 17:35:02 crc kubenswrapper[4720]: I1013 17:35:02.398151 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d5bc3e7b-d845-48f4-9387-94904ed3b983-bundle\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ss4c8\" (UID: \"d5bc3e7b-d845-48f4-9387-94904ed3b983\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ss4c8" Oct 13 17:35:02 crc kubenswrapper[4720]: I1013 17:35:02.499643 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d5bc3e7b-d845-48f4-9387-94904ed3b983-util\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ss4c8\" (UID: \"d5bc3e7b-d845-48f4-9387-94904ed3b983\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ss4c8" Oct 13 17:35:02 crc kubenswrapper[4720]: I1013 17:35:02.499779 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9sqn\" (UniqueName: \"kubernetes.io/projected/d5bc3e7b-d845-48f4-9387-94904ed3b983-kube-api-access-v9sqn\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ss4c8\" (UID: \"d5bc3e7b-d845-48f4-9387-94904ed3b983\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ss4c8" Oct 13 17:35:02 crc kubenswrapper[4720]: I1013 17:35:02.499826 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d5bc3e7b-d845-48f4-9387-94904ed3b983-bundle\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ss4c8\" (UID: \"d5bc3e7b-d845-48f4-9387-94904ed3b983\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ss4c8" Oct 13 17:35:02 crc kubenswrapper[4720]: I1013 17:35:02.500478 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d5bc3e7b-d845-48f4-9387-94904ed3b983-util\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ss4c8\" (UID: \"d5bc3e7b-d845-48f4-9387-94904ed3b983\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ss4c8" Oct 13 17:35:02 crc kubenswrapper[4720]: I1013 17:35:02.500651 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d5bc3e7b-d845-48f4-9387-94904ed3b983-bundle\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ss4c8\" (UID: \"d5bc3e7b-d845-48f4-9387-94904ed3b983\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ss4c8" Oct 13 17:35:02 crc kubenswrapper[4720]: I1013 17:35:02.520100 4720 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9sqn\" (UniqueName: \"kubernetes.io/projected/d5bc3e7b-d845-48f4-9387-94904ed3b983-kube-api-access-v9sqn\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ss4c8\" (UID: \"d5bc3e7b-d845-48f4-9387-94904ed3b983\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ss4c8" Oct 13 17:35:02 crc kubenswrapper[4720]: I1013 17:35:02.583789 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ss4c8" Oct 13 17:35:03 crc kubenswrapper[4720]: I1013 17:35:03.077733 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ss4c8"] Oct 13 17:35:03 crc kubenswrapper[4720]: W1013 17:35:03.091956 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd5bc3e7b_d845_48f4_9387_94904ed3b983.slice/crio-2329bbd104dc8145b2c9bfb1a6bcb4d71c631ddd274c1388f615b7186d08c204 WatchSource:0}: Error finding container 2329bbd104dc8145b2c9bfb1a6bcb4d71c631ddd274c1388f615b7186d08c204: Status 404 returned error can't find the container with id 2329bbd104dc8145b2c9bfb1a6bcb4d71c631ddd274c1388f615b7186d08c204 Oct 13 17:35:03 crc kubenswrapper[4720]: I1013 17:35:03.181754 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0af5887-2244-4dfb-8e2a-a66ac6bf6762" path="/var/lib/kubelet/pods/b0af5887-2244-4dfb-8e2a-a66ac6bf6762/volumes" Oct 13 17:35:03 crc kubenswrapper[4720]: I1013 17:35:03.233527 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ss4c8" event={"ID":"d5bc3e7b-d845-48f4-9387-94904ed3b983","Type":"ContainerStarted","Data":"2329bbd104dc8145b2c9bfb1a6bcb4d71c631ddd274c1388f615b7186d08c204"} Oct 13 17:35:04 crc kubenswrapper[4720]: I1013 17:35:04.246162 4720 generic.go:334] "Generic (PLEG): container finished" podID="d5bc3e7b-d845-48f4-9387-94904ed3b983" containerID="46b0de0b8270ffe337787c74f88468a20c7a03bd561ebe06b16b531411690cdd" exitCode=0 Oct 13 17:35:04 crc kubenswrapper[4720]: I1013 17:35:04.246617 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ss4c8" event={"ID":"d5bc3e7b-d845-48f4-9387-94904ed3b983","Type":"ContainerDied","Data":"46b0de0b8270ffe337787c74f88468a20c7a03bd561ebe06b16b531411690cdd"} Oct 13 17:35:07 crc kubenswrapper[4720]: I1013 17:35:07.310676 4720 generic.go:334] "Generic (PLEG): container finished" podID="d5bc3e7b-d845-48f4-9387-94904ed3b983" containerID="6dcdd8483787ae1f8ce1f4261b1b69517012829cf804fb07278f4bf1663321d8" exitCode=0 Oct 13 17:35:07 crc kubenswrapper[4720]: I1013 17:35:07.311553 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ss4c8" event={"ID":"d5bc3e7b-d845-48f4-9387-94904ed3b983","Type":"ContainerDied","Data":"6dcdd8483787ae1f8ce1f4261b1b69517012829cf804fb07278f4bf1663321d8"} Oct 13 17:35:08 crc kubenswrapper[4720]: I1013 17:35:08.323145 4720 generic.go:334] "Generic (PLEG): container finished" podID="d5bc3e7b-d845-48f4-9387-94904ed3b983" containerID="32e8d74847376795f1af9dca294c90fc5952884192cf1094d8dc330080695aef" exitCode=0 Oct 13 17:35:08 crc kubenswrapper[4720]: I1013 17:35:08.323258 
4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ss4c8" event={"ID":"d5bc3e7b-d845-48f4-9387-94904ed3b983","Type":"ContainerDied","Data":"32e8d74847376795f1af9dca294c90fc5952884192cf1094d8dc330080695aef"} Oct 13 17:35:09 crc kubenswrapper[4720]: I1013 17:35:09.649309 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ss4c8" Oct 13 17:35:09 crc kubenswrapper[4720]: I1013 17:35:09.717766 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v9sqn\" (UniqueName: \"kubernetes.io/projected/d5bc3e7b-d845-48f4-9387-94904ed3b983-kube-api-access-v9sqn\") pod \"d5bc3e7b-d845-48f4-9387-94904ed3b983\" (UID: \"d5bc3e7b-d845-48f4-9387-94904ed3b983\") " Oct 13 17:35:09 crc kubenswrapper[4720]: I1013 17:35:09.717928 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d5bc3e7b-d845-48f4-9387-94904ed3b983-bundle\") pod \"d5bc3e7b-d845-48f4-9387-94904ed3b983\" (UID: \"d5bc3e7b-d845-48f4-9387-94904ed3b983\") " Oct 13 17:35:09 crc kubenswrapper[4720]: I1013 17:35:09.718116 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d5bc3e7b-d845-48f4-9387-94904ed3b983-util\") pod \"d5bc3e7b-d845-48f4-9387-94904ed3b983\" (UID: \"d5bc3e7b-d845-48f4-9387-94904ed3b983\") " Oct 13 17:35:09 crc kubenswrapper[4720]: I1013 17:35:09.721608 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5bc3e7b-d845-48f4-9387-94904ed3b983-bundle" (OuterVolumeSpecName: "bundle") pod "d5bc3e7b-d845-48f4-9387-94904ed3b983" (UID: "d5bc3e7b-d845-48f4-9387-94904ed3b983"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 17:35:09 crc kubenswrapper[4720]: I1013 17:35:09.725743 4720 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d5bc3e7b-d845-48f4-9387-94904ed3b983-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 17:35:09 crc kubenswrapper[4720]: I1013 17:35:09.727879 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5bc3e7b-d845-48f4-9387-94904ed3b983-kube-api-access-v9sqn" (OuterVolumeSpecName: "kube-api-access-v9sqn") pod "d5bc3e7b-d845-48f4-9387-94904ed3b983" (UID: "d5bc3e7b-d845-48f4-9387-94904ed3b983"). InnerVolumeSpecName "kube-api-access-v9sqn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:35:09 crc kubenswrapper[4720]: I1013 17:35:09.732319 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5bc3e7b-d845-48f4-9387-94904ed3b983-util" (OuterVolumeSpecName: "util") pod "d5bc3e7b-d845-48f4-9387-94904ed3b983" (UID: "d5bc3e7b-d845-48f4-9387-94904ed3b983"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 17:35:09 crc kubenswrapper[4720]: I1013 17:35:09.827315 4720 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d5bc3e7b-d845-48f4-9387-94904ed3b983-util\") on node \"crc\" DevicePath \"\"" Oct 13 17:35:09 crc kubenswrapper[4720]: I1013 17:35:09.827392 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v9sqn\" (UniqueName: \"kubernetes.io/projected/d5bc3e7b-d845-48f4-9387-94904ed3b983-kube-api-access-v9sqn\") on node \"crc\" DevicePath \"\"" Oct 13 17:35:10 crc kubenswrapper[4720]: I1013 17:35:10.342259 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ss4c8" event={"ID":"d5bc3e7b-d845-48f4-9387-94904ed3b983","Type":"ContainerDied","Data":"2329bbd104dc8145b2c9bfb1a6bcb4d71c631ddd274c1388f615b7186d08c204"} Oct 13 17:35:10 crc kubenswrapper[4720]: I1013 17:35:10.342325 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2329bbd104dc8145b2c9bfb1a6bcb4d71c631ddd274c1388f615b7186d08c204" Oct 13 17:35:10 crc kubenswrapper[4720]: I1013 17:35:10.342396 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ss4c8" Oct 13 17:35:21 crc kubenswrapper[4720]: I1013 17:35:21.241889 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-68b7b9f484-t4mwn"] Oct 13 17:35:21 crc kubenswrapper[4720]: E1013 17:35:21.242628 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5bc3e7b-d845-48f4-9387-94904ed3b983" containerName="util" Oct 13 17:35:21 crc kubenswrapper[4720]: I1013 17:35:21.242639 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5bc3e7b-d845-48f4-9387-94904ed3b983" containerName="util" Oct 13 17:35:21 crc kubenswrapper[4720]: E1013 17:35:21.242648 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5bc3e7b-d845-48f4-9387-94904ed3b983" containerName="pull" Oct 13 17:35:21 crc kubenswrapper[4720]: I1013 17:35:21.242654 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5bc3e7b-d845-48f4-9387-94904ed3b983" containerName="pull" Oct 13 17:35:21 crc kubenswrapper[4720]: E1013 17:35:21.242670 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5bc3e7b-d845-48f4-9387-94904ed3b983" containerName="extract" Oct 13 17:35:21 crc kubenswrapper[4720]: I1013 17:35:21.242677 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5bc3e7b-d845-48f4-9387-94904ed3b983" containerName="extract" Oct 13 17:35:21 crc kubenswrapper[4720]: I1013 17:35:21.242776 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5bc3e7b-d845-48f4-9387-94904ed3b983" containerName="extract" Oct 13 17:35:21 crc kubenswrapper[4720]: I1013 17:35:21.243103 4720 util.go:30] "No sandbox for pod can be found. 
Oct 13 17:35:21 crc kubenswrapper[4720]: I1013 17:35:21.246232 4720 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert"
Oct 13 17:35:21 crc kubenswrapper[4720]: I1013 17:35:21.246609 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt"
Oct 13 17:35:21 crc kubenswrapper[4720]: I1013 17:35:21.246906 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt"
Oct 13 17:35:21 crc kubenswrapper[4720]: I1013 17:35:21.247420 4720 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-kzjml"
Oct 13 17:35:21 crc kubenswrapper[4720]: I1013 17:35:21.247967 4720 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert"
Oct 13 17:35:21 crc kubenswrapper[4720]: I1013 17:35:21.268420 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-68b7b9f484-t4mwn"]
Oct 13 17:35:21 crc kubenswrapper[4720]: I1013 17:35:21.377634 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/676df020-2204-4ef7-88b8-88eb27f8068b-webhook-cert\") pod \"metallb-operator-controller-manager-68b7b9f484-t4mwn\" (UID: \"676df020-2204-4ef7-88b8-88eb27f8068b\") " pod="metallb-system/metallb-operator-controller-manager-68b7b9f484-t4mwn"
Oct 13 17:35:21 crc kubenswrapper[4720]: I1013 17:35:21.377989 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnk7h\" (UniqueName: \"kubernetes.io/projected/676df020-2204-4ef7-88b8-88eb27f8068b-kube-api-access-fnk7h\") pod \"metallb-operator-controller-manager-68b7b9f484-t4mwn\" (UID: \"676df020-2204-4ef7-88b8-88eb27f8068b\") " pod="metallb-system/metallb-operator-controller-manager-68b7b9f484-t4mwn"
Oct 13 17:35:21 crc kubenswrapper[4720]: I1013 17:35:21.378134 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/676df020-2204-4ef7-88b8-88eb27f8068b-apiservice-cert\") pod \"metallb-operator-controller-manager-68b7b9f484-t4mwn\" (UID: \"676df020-2204-4ef7-88b8-88eb27f8068b\") " pod="metallb-system/metallb-operator-controller-manager-68b7b9f484-t4mwn"
Oct 13 17:35:21 crc kubenswrapper[4720]: I1013 17:35:21.479630 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/676df020-2204-4ef7-88b8-88eb27f8068b-webhook-cert\") pod \"metallb-operator-controller-manager-68b7b9f484-t4mwn\" (UID: \"676df020-2204-4ef7-88b8-88eb27f8068b\") " pod="metallb-system/metallb-operator-controller-manager-68b7b9f484-t4mwn"
Oct 13 17:35:21 crc kubenswrapper[4720]: I1013 17:35:21.480175 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnk7h\" (UniqueName: \"kubernetes.io/projected/676df020-2204-4ef7-88b8-88eb27f8068b-kube-api-access-fnk7h\") pod \"metallb-operator-controller-manager-68b7b9f484-t4mwn\" (UID: \"676df020-2204-4ef7-88b8-88eb27f8068b\") " pod="metallb-system/metallb-operator-controller-manager-68b7b9f484-t4mwn"
Oct 13 17:35:21 crc kubenswrapper[4720]: I1013 17:35:21.480320 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/676df020-2204-4ef7-88b8-88eb27f8068b-apiservice-cert\") pod \"metallb-operator-controller-manager-68b7b9f484-t4mwn\" (UID: \"676df020-2204-4ef7-88b8-88eb27f8068b\") " pod="metallb-system/metallb-operator-controller-manager-68b7b9f484-t4mwn"
Oct 13 17:35:21 crc kubenswrapper[4720]: I1013 17:35:21.485777 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/676df020-2204-4ef7-88b8-88eb27f8068b-webhook-cert\") pod \"metallb-operator-controller-manager-68b7b9f484-t4mwn\" (UID: \"676df020-2204-4ef7-88b8-88eb27f8068b\") " pod="metallb-system/metallb-operator-controller-manager-68b7b9f484-t4mwn"
Oct 13 17:35:21 crc kubenswrapper[4720]: I1013 17:35:21.489448 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/676df020-2204-4ef7-88b8-88eb27f8068b-apiservice-cert\") pod \"metallb-operator-controller-manager-68b7b9f484-t4mwn\" (UID: \"676df020-2204-4ef7-88b8-88eb27f8068b\") " pod="metallb-system/metallb-operator-controller-manager-68b7b9f484-t4mwn"
Oct 13 17:35:21 crc kubenswrapper[4720]: I1013 17:35:21.496368 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnk7h\" (UniqueName: \"kubernetes.io/projected/676df020-2204-4ef7-88b8-88eb27f8068b-kube-api-access-fnk7h\") pod \"metallb-operator-controller-manager-68b7b9f484-t4mwn\" (UID: \"676df020-2204-4ef7-88b8-88eb27f8068b\") " pod="metallb-system/metallb-operator-controller-manager-68b7b9f484-t4mwn"
Oct 13 17:35:21 crc kubenswrapper[4720]: I1013 17:35:21.557677 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-68b7b9f484-t4mwn"
Oct 13 17:35:21 crc kubenswrapper[4720]: I1013 17:35:21.741146 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-86c9779c6-vtc8m"]
Oct 13 17:35:21 crc kubenswrapper[4720]: I1013 17:35:21.744738 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-86c9779c6-vtc8m"]
Oct 13 17:35:21 crc kubenswrapper[4720]: I1013 17:35:21.744834 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-86c9779c6-vtc8m"
Oct 13 17:35:21 crc kubenswrapper[4720]: I1013 17:35:21.748644 4720 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert"
Oct 13 17:35:21 crc kubenswrapper[4720]: I1013 17:35:21.748659 4720 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Oct 13 17:35:21 crc kubenswrapper[4720]: I1013 17:35:21.748887 4720 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-rwxvs"
Oct 13 17:35:21 crc kubenswrapper[4720]: I1013 17:35:21.810663 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mq9dt\" (UniqueName: \"kubernetes.io/projected/8eec22d4-687d-427c-a53e-5316b69e5448-kube-api-access-mq9dt\") pod \"metallb-operator-webhook-server-86c9779c6-vtc8m\" (UID: \"8eec22d4-687d-427c-a53e-5316b69e5448\") " pod="metallb-system/metallb-operator-webhook-server-86c9779c6-vtc8m"
Oct 13 17:35:21 crc kubenswrapper[4720]: I1013 17:35:21.810704 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8eec22d4-687d-427c-a53e-5316b69e5448-apiservice-cert\") pod \"metallb-operator-webhook-server-86c9779c6-vtc8m\" (UID: \"8eec22d4-687d-427c-a53e-5316b69e5448\") " pod="metallb-system/metallb-operator-webhook-server-86c9779c6-vtc8m"
Oct 13 17:35:21 crc kubenswrapper[4720]: I1013 17:35:21.810746 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8eec22d4-687d-427c-a53e-5316b69e5448-webhook-cert\") pod \"metallb-operator-webhook-server-86c9779c6-vtc8m\" (UID: \"8eec22d4-687d-427c-a53e-5316b69e5448\") " pod="metallb-system/metallb-operator-webhook-server-86c9779c6-vtc8m"
Oct 13 17:35:21 crc kubenswrapper[4720]: I1013 17:35:21.892216 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-68b7b9f484-t4mwn"]
Oct 13 17:35:21 crc kubenswrapper[4720]: I1013 17:35:21.911450 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8eec22d4-687d-427c-a53e-5316b69e5448-apiservice-cert\") pod \"metallb-operator-webhook-server-86c9779c6-vtc8m\" (UID: \"8eec22d4-687d-427c-a53e-5316b69e5448\") " pod="metallb-system/metallb-operator-webhook-server-86c9779c6-vtc8m"
Oct 13 17:35:21 crc kubenswrapper[4720]: I1013 17:35:21.911510 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8eec22d4-687d-427c-a53e-5316b69e5448-webhook-cert\") pod \"metallb-operator-webhook-server-86c9779c6-vtc8m\" (UID: \"8eec22d4-687d-427c-a53e-5316b69e5448\") " pod="metallb-system/metallb-operator-webhook-server-86c9779c6-vtc8m"
Oct 13 17:35:21 crc kubenswrapper[4720]: I1013 17:35:21.911567 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mq9dt\" (UniqueName: \"kubernetes.io/projected/8eec22d4-687d-427c-a53e-5316b69e5448-kube-api-access-mq9dt\") pod \"metallb-operator-webhook-server-86c9779c6-vtc8m\" (UID: \"8eec22d4-687d-427c-a53e-5316b69e5448\") " pod="metallb-system/metallb-operator-webhook-server-86c9779c6-vtc8m"
Oct 13 17:35:21 crc kubenswrapper[4720]: I1013 17:35:21.916849 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8eec22d4-687d-427c-a53e-5316b69e5448-apiservice-cert\") pod \"metallb-operator-webhook-server-86c9779c6-vtc8m\" (UID: \"8eec22d4-687d-427c-a53e-5316b69e5448\") " pod="metallb-system/metallb-operator-webhook-server-86c9779c6-vtc8m"
Oct 13 17:35:21 crc kubenswrapper[4720]: I1013 17:35:21.917166 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8eec22d4-687d-427c-a53e-5316b69e5448-webhook-cert\") pod \"metallb-operator-webhook-server-86c9779c6-vtc8m\" (UID: \"8eec22d4-687d-427c-a53e-5316b69e5448\") " pod="metallb-system/metallb-operator-webhook-server-86c9779c6-vtc8m"
Oct 13 17:35:21 crc kubenswrapper[4720]: I1013 17:35:21.933834 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mq9dt\" (UniqueName: \"kubernetes.io/projected/8eec22d4-687d-427c-a53e-5316b69e5448-kube-api-access-mq9dt\") pod \"metallb-operator-webhook-server-86c9779c6-vtc8m\" (UID: \"8eec22d4-687d-427c-a53e-5316b69e5448\") " pod="metallb-system/metallb-operator-webhook-server-86c9779c6-vtc8m"
Oct 13 17:35:22 crc kubenswrapper[4720]: I1013 17:35:22.080519 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-86c9779c6-vtc8m"
Oct 13 17:35:22 crc kubenswrapper[4720]: I1013 17:35:22.412776 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-68b7b9f484-t4mwn" event={"ID":"676df020-2204-4ef7-88b8-88eb27f8068b","Type":"ContainerStarted","Data":"21b5d22be488fad1086f0b9927e4476483ec1b15829a32a4c937e093f235af74"}
Oct 13 17:35:22 crc kubenswrapper[4720]: I1013 17:35:22.511517 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-86c9779c6-vtc8m"]
Oct 13 17:35:22 crc kubenswrapper[4720]: W1013 17:35:22.516457 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8eec22d4_687d_427c_a53e_5316b69e5448.slice/crio-15b6667742c535013f60fdaac7370aeb3c60c97eb2e39d5b5084ad8e9745f8ed WatchSource:0}: Error finding container 15b6667742c535013f60fdaac7370aeb3c60c97eb2e39d5b5084ad8e9745f8ed: Status 404 returned error can't find the container with id 15b6667742c535013f60fdaac7370aeb3c60c97eb2e39d5b5084ad8e9745f8ed
Oct 13 17:35:23 crc kubenswrapper[4720]: I1013 17:35:23.426898 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-86c9779c6-vtc8m" event={"ID":"8eec22d4-687d-427c-a53e-5316b69e5448","Type":"ContainerStarted","Data":"15b6667742c535013f60fdaac7370aeb3c60c97eb2e39d5b5084ad8e9745f8ed"}
Oct 13 17:35:25 crc kubenswrapper[4720]: I1013 17:35:25.451135 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-68b7b9f484-t4mwn" event={"ID":"676df020-2204-4ef7-88b8-88eb27f8068b","Type":"ContainerStarted","Data":"1669f4be804b08e1b74fbf59ce00aa13ff7ec1cad4e0c80e5e156b1ef16913a5"}
Oct 13 17:35:25 crc kubenswrapper[4720]: I1013 17:35:25.451427 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-68b7b9f484-t4mwn"
Oct 13 17:35:25 crc kubenswrapper[4720]: I1013 17:35:25.493131 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-68b7b9f484-t4mwn" podStartSLOduration=1.384147604 podStartE2EDuration="4.493118977s" podCreationTimestamp="2025-10-13 17:35:21 +0000 UTC" firstStartedPulling="2025-10-13 17:35:21.922878391 +0000 UTC m=+667.380128533" lastFinishedPulling="2025-10-13 17:35:25.031849774 +0000 UTC m=+670.489099906" observedRunningTime="2025-10-13 17:35:25.49047125 +0000 UTC m=+670.947721382" watchObservedRunningTime="2025-10-13 17:35:25.493118977 +0000 UTC m=+670.950369109"
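The pod_startup_latency_tracker record above is internally consistent: podStartE2EDuration is observed-running minus creation (about 4.493s), and podStartSLOduration is that minus the image-pull window (lastFinishedPulling minus firstStartedPulling, about 3.109s), giving about 1.384s. A quick reproduction of the arithmetic with the logged wall-clock strings (the kubelet uses the monotonic m= offsets, which is why the last digits differ by a few nanoseconds):

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	parse := func(s string) time.Time {
    		// Layout of Go's default time.Time string form, as logged.
    		t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
    		if err != nil {
    			panic(err)
    		}
    		return t
    	}
    	created := parse("2025-10-13 17:35:21 +0000 UTC")
    	firstPull := parse("2025-10-13 17:35:21.922878391 +0000 UTC")
    	lastPull := parse("2025-10-13 17:35:25.031849774 +0000 UTC")
    	running := parse("2025-10-13 17:35:25.493118977 +0000 UTC")

    	e2e := running.Sub(created)          // ~4.493s, the podStartE2EDuration
    	slo := e2e - lastPull.Sub(firstPull) // pull time excluded: ~1.384s
    	fmt.Println(e2e, slo)
    }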
pod="metallb-system/metallb-operator-controller-manager-68b7b9f484-t4mwn" podStartSLOduration=1.384147604 podStartE2EDuration="4.493118977s" podCreationTimestamp="2025-10-13 17:35:21 +0000 UTC" firstStartedPulling="2025-10-13 17:35:21.922878391 +0000 UTC m=+667.380128533" lastFinishedPulling="2025-10-13 17:35:25.031849774 +0000 UTC m=+670.489099906" observedRunningTime="2025-10-13 17:35:25.49047125 +0000 UTC m=+670.947721382" watchObservedRunningTime="2025-10-13 17:35:25.493118977 +0000 UTC m=+670.950369109" Oct 13 17:35:28 crc kubenswrapper[4720]: I1013 17:35:28.482884 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-86c9779c6-vtc8m" event={"ID":"8eec22d4-687d-427c-a53e-5316b69e5448","Type":"ContainerStarted","Data":"2261d0ba40d0a4121905c73ae70c0c05c0e0a758622aac460f489d88db73c457"} Oct 13 17:35:28 crc kubenswrapper[4720]: I1013 17:35:28.483447 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-86c9779c6-vtc8m" Oct 13 17:35:28 crc kubenswrapper[4720]: I1013 17:35:28.518136 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-86c9779c6-vtc8m" podStartSLOduration=2.559538245 podStartE2EDuration="7.518109077s" podCreationTimestamp="2025-10-13 17:35:21 +0000 UTC" firstStartedPulling="2025-10-13 17:35:22.521202512 +0000 UTC m=+667.978452644" lastFinishedPulling="2025-10-13 17:35:27.479773344 +0000 UTC m=+672.937023476" observedRunningTime="2025-10-13 17:35:28.511504018 +0000 UTC m=+673.968754190" watchObservedRunningTime="2025-10-13 17:35:28.518109077 +0000 UTC m=+673.975359239" Oct 13 17:35:42 crc kubenswrapper[4720]: I1013 17:35:42.087566 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-86c9779c6-vtc8m" Oct 13 17:35:45 crc kubenswrapper[4720]: I1013 17:35:45.213078 4720 patch_prober.go:28] interesting pod/machine-config-daemon-htwnl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 17:35:45 crc kubenswrapper[4720]: I1013 17:35:45.214411 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 17:36:01 crc kubenswrapper[4720]: I1013 17:36:01.561376 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-68b7b9f484-t4mwn" Oct 13 17:36:02 crc kubenswrapper[4720]: I1013 17:36:02.513076 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-fw488"] Oct 13 17:36:02 crc kubenswrapper[4720]: I1013 17:36:02.531171 4720 util.go:30] "No sandbox for pod can be found. 
Oct 13 17:36:02 crc kubenswrapper[4720]: I1013 17:36:02.536713 4720 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret"
Oct 13 17:36:02 crc kubenswrapper[4720]: I1013 17:36:02.537175 4720 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-qt95n"
Oct 13 17:36:02 crc kubenswrapper[4720]: I1013 17:36:02.537790 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-64bf5d555-wlwb8"]
Oct 13 17:36:02 crc kubenswrapper[4720]: I1013 17:36:02.538758 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup"
Oct 13 17:36:02 crc kubenswrapper[4720]: I1013 17:36:02.540254 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-wlwb8"
Oct 13 17:36:02 crc kubenswrapper[4720]: I1013 17:36:02.545745 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-64bf5d555-wlwb8"]
Oct 13 17:36:02 crc kubenswrapper[4720]: I1013 17:36:02.547114 4720 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert"
Oct 13 17:36:02 crc kubenswrapper[4720]: I1013 17:36:02.627915 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-2kfxp"]
Oct 13 17:36:02 crc kubenswrapper[4720]: I1013 17:36:02.628799 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-2kfxp"
Oct 13 17:36:02 crc kubenswrapper[4720]: I1013 17:36:02.630858 4720 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-2q4b7"
Oct 13 17:36:02 crc kubenswrapper[4720]: I1013 17:36:02.631041 4720 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist"
Oct 13 17:36:02 crc kubenswrapper[4720]: I1013 17:36:02.631448 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2"
Oct 13 17:36:02 crc kubenswrapper[4720]: I1013 17:36:02.631805 4720 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret"
Oct 13 17:36:02 crc kubenswrapper[4720]: I1013 17:36:02.647148 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/d10a0690-99f5-416a-b597-d41fa0635070-frr-conf\") pod \"frr-k8s-fw488\" (UID: \"d10a0690-99f5-416a-b597-d41fa0635070\") " pod="metallb-system/frr-k8s-fw488"
Oct 13 17:36:02 crc kubenswrapper[4720]: I1013 17:36:02.647212 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/d10a0690-99f5-416a-b597-d41fa0635070-reloader\") pod \"frr-k8s-fw488\" (UID: \"d10a0690-99f5-416a-b597-d41fa0635070\") " pod="metallb-system/frr-k8s-fw488"
Oct 13 17:36:02 crc kubenswrapper[4720]: I1013 17:36:02.647237 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/d10a0690-99f5-416a-b597-d41fa0635070-frr-startup\") pod \"frr-k8s-fw488\" (UID: \"d10a0690-99f5-416a-b597-d41fa0635070\") " pod="metallb-system/frr-k8s-fw488"
Oct 13 17:36:02 crc kubenswrapper[4720]: I1013 17:36:02.647281 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfqrh\" (UniqueName: \"kubernetes.io/projected/d10a0690-99f5-416a-b597-d41fa0635070-kube-api-access-cfqrh\") pod \"frr-k8s-fw488\" (UID: \"d10a0690-99f5-416a-b597-d41fa0635070\") " pod="metallb-system/frr-k8s-fw488"
Oct 13 17:36:02 crc kubenswrapper[4720]: I1013 17:36:02.647317 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d10a0690-99f5-416a-b597-d41fa0635070-metrics-certs\") pod \"frr-k8s-fw488\" (UID: \"d10a0690-99f5-416a-b597-d41fa0635070\") " pod="metallb-system/frr-k8s-fw488"
Oct 13 17:36:02 crc kubenswrapper[4720]: I1013 17:36:02.647346 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-br5bh\" (UniqueName: \"kubernetes.io/projected/01bbae6a-0286-4e02-bf7a-bdc1e5ba9e53-kube-api-access-br5bh\") pod \"frr-k8s-webhook-server-64bf5d555-wlwb8\" (UID: \"01bbae6a-0286-4e02-bf7a-bdc1e5ba9e53\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-wlwb8"
Oct 13 17:36:02 crc kubenswrapper[4720]: I1013 17:36:02.647412 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/d10a0690-99f5-416a-b597-d41fa0635070-frr-sockets\") pod \"frr-k8s-fw488\" (UID: \"d10a0690-99f5-416a-b597-d41fa0635070\") " pod="metallb-system/frr-k8s-fw488"
Oct 13 17:36:02 crc kubenswrapper[4720]: I1013 17:36:02.647427 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/d10a0690-99f5-416a-b597-d41fa0635070-metrics\") pod \"frr-k8s-fw488\" (UID: \"d10a0690-99f5-416a-b597-d41fa0635070\") " pod="metallb-system/frr-k8s-fw488"
Oct 13 17:36:02 crc kubenswrapper[4720]: I1013 17:36:02.647447 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/01bbae6a-0286-4e02-bf7a-bdc1e5ba9e53-cert\") pod \"frr-k8s-webhook-server-64bf5d555-wlwb8\" (UID: \"01bbae6a-0286-4e02-bf7a-bdc1e5ba9e53\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-wlwb8"
Oct 13 17:36:02 crc kubenswrapper[4720]: I1013 17:36:02.660217 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-68d546b9d8-ztrn5"]
Oct 13 17:36:02 crc kubenswrapper[4720]: I1013 17:36:02.661081 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-68d546b9d8-ztrn5"
Oct 13 17:36:02 crc kubenswrapper[4720]: I1013 17:36:02.662624 4720 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret"
Oct 13 17:36:02 crc kubenswrapper[4720]: I1013 17:36:02.681834 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-68d546b9d8-ztrn5"]
Oct 13 17:36:02 crc kubenswrapper[4720]: I1013 17:36:02.748364 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmn8x\" (UniqueName: \"kubernetes.io/projected/34074aee-3c24-4d8c-929b-d0feb37ead02-kube-api-access-hmn8x\") pod \"controller-68d546b9d8-ztrn5\" (UID: \"34074aee-3c24-4d8c-929b-d0feb37ead02\") " pod="metallb-system/controller-68d546b9d8-ztrn5"
Oct 13 17:36:02 crc kubenswrapper[4720]: I1013 17:36:02.748405 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bd395621-3d53-4b0b-b8da-f7d5c7df9570-metrics-certs\") pod \"speaker-2kfxp\" (UID: \"bd395621-3d53-4b0b-b8da-f7d5c7df9570\") " pod="metallb-system/speaker-2kfxp"
Oct 13 17:36:02 crc kubenswrapper[4720]: I1013 17:36:02.748425 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/34074aee-3c24-4d8c-929b-d0feb37ead02-cert\") pod \"controller-68d546b9d8-ztrn5\" (UID: \"34074aee-3c24-4d8c-929b-d0feb37ead02\") " pod="metallb-system/controller-68d546b9d8-ztrn5"
Oct 13 17:36:02 crc kubenswrapper[4720]: I1013 17:36:02.748530 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/d10a0690-99f5-416a-b597-d41fa0635070-frr-conf\") pod \"frr-k8s-fw488\" (UID: \"d10a0690-99f5-416a-b597-d41fa0635070\") " pod="metallb-system/frr-k8s-fw488"
Oct 13 17:36:02 crc kubenswrapper[4720]: I1013 17:36:02.748564 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/d10a0690-99f5-416a-b597-d41fa0635070-reloader\") pod \"frr-k8s-fw488\" (UID: \"d10a0690-99f5-416a-b597-d41fa0635070\") " pod="metallb-system/frr-k8s-fw488"
Oct 13 17:36:02 crc kubenswrapper[4720]: I1013 17:36:02.748593 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/d10a0690-99f5-416a-b597-d41fa0635070-frr-startup\") pod \"frr-k8s-fw488\" (UID: \"d10a0690-99f5-416a-b597-d41fa0635070\") " pod="metallb-system/frr-k8s-fw488"
Oct 13 17:36:02 crc kubenswrapper[4720]: I1013 17:36:02.748616 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/34074aee-3c24-4d8c-929b-d0feb37ead02-metrics-certs\") pod \"controller-68d546b9d8-ztrn5\" (UID: \"34074aee-3c24-4d8c-929b-d0feb37ead02\") " pod="metallb-system/controller-68d546b9d8-ztrn5"
Oct 13 17:36:02 crc kubenswrapper[4720]: I1013 17:36:02.748641 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfqrh\" (UniqueName: \"kubernetes.io/projected/d10a0690-99f5-416a-b597-d41fa0635070-kube-api-access-cfqrh\") pod \"frr-k8s-fw488\" (UID: \"d10a0690-99f5-416a-b597-d41fa0635070\") " pod="metallb-system/frr-k8s-fw488"
Oct 13 17:36:02 crc kubenswrapper[4720]: I1013 17:36:02.748665 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d10a0690-99f5-416a-b597-d41fa0635070-metrics-certs\") pod \"frr-k8s-fw488\" (UID: \"d10a0690-99f5-416a-b597-d41fa0635070\") " pod="metallb-system/frr-k8s-fw488"
Oct 13 17:36:02 crc kubenswrapper[4720]: I1013 17:36:02.748687 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/bd395621-3d53-4b0b-b8da-f7d5c7df9570-memberlist\") pod \"speaker-2kfxp\" (UID: \"bd395621-3d53-4b0b-b8da-f7d5c7df9570\") " pod="metallb-system/speaker-2kfxp"
Oct 13 17:36:02 crc kubenswrapper[4720]: I1013 17:36:02.748706 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-br5bh\" (UniqueName: \"kubernetes.io/projected/01bbae6a-0286-4e02-bf7a-bdc1e5ba9e53-kube-api-access-br5bh\") pod \"frr-k8s-webhook-server-64bf5d555-wlwb8\" (UID: \"01bbae6a-0286-4e02-bf7a-bdc1e5ba9e53\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-wlwb8"
Oct 13 17:36:02 crc kubenswrapper[4720]: I1013 17:36:02.748770 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/d10a0690-99f5-416a-b597-d41fa0635070-metrics\") pod \"frr-k8s-fw488\" (UID: \"d10a0690-99f5-416a-b597-d41fa0635070\") " pod="metallb-system/frr-k8s-fw488"
Oct 13 17:36:02 crc kubenswrapper[4720]: I1013 17:36:02.748790 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/d10a0690-99f5-416a-b597-d41fa0635070-frr-sockets\") pod \"frr-k8s-fw488\" (UID: \"d10a0690-99f5-416a-b597-d41fa0635070\") " pod="metallb-system/frr-k8s-fw488"
Oct 13 17:36:02 crc kubenswrapper[4720]: I1013 17:36:02.748808 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/01bbae6a-0286-4e02-bf7a-bdc1e5ba9e53-cert\") pod \"frr-k8s-webhook-server-64bf5d555-wlwb8\" (UID: \"01bbae6a-0286-4e02-bf7a-bdc1e5ba9e53\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-wlwb8"
Oct 13 17:36:02 crc kubenswrapper[4720]: I1013 17:36:02.748867 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/bd395621-3d53-4b0b-b8da-f7d5c7df9570-metallb-excludel2\") pod \"speaker-2kfxp\" (UID: \"bd395621-3d53-4b0b-b8da-f7d5c7df9570\") " pod="metallb-system/speaker-2kfxp"
Oct 13 17:36:02 crc kubenswrapper[4720]: I1013 17:36:02.748893 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hdnc\" (UniqueName: \"kubernetes.io/projected/bd395621-3d53-4b0b-b8da-f7d5c7df9570-kube-api-access-7hdnc\") pod \"speaker-2kfxp\" (UID: \"bd395621-3d53-4b0b-b8da-f7d5c7df9570\") " pod="metallb-system/speaker-2kfxp"
Oct 13 17:36:02 crc kubenswrapper[4720]: I1013 17:36:02.748909 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/d10a0690-99f5-416a-b597-d41fa0635070-frr-conf\") pod \"frr-k8s-fw488\" (UID: \"d10a0690-99f5-416a-b597-d41fa0635070\") " pod="metallb-system/frr-k8s-fw488"
Oct 13 17:36:02 crc kubenswrapper[4720]: I1013 17:36:02.749497 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/d10a0690-99f5-416a-b597-d41fa0635070-metrics\") pod \"frr-k8s-fw488\" (UID: \"d10a0690-99f5-416a-b597-d41fa0635070\") " pod="metallb-system/frr-k8s-fw488"
(UID: \"d10a0690-99f5-416a-b597-d41fa0635070\") " pod="metallb-system/frr-k8s-fw488" Oct 13 17:36:02 crc kubenswrapper[4720]: I1013 17:36:02.749673 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/d10a0690-99f5-416a-b597-d41fa0635070-frr-sockets\") pod \"frr-k8s-fw488\" (UID: \"d10a0690-99f5-416a-b597-d41fa0635070\") " pod="metallb-system/frr-k8s-fw488" Oct 13 17:36:02 crc kubenswrapper[4720]: E1013 17:36:02.749741 4720 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Oct 13 17:36:02 crc kubenswrapper[4720]: E1013 17:36:02.749782 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01bbae6a-0286-4e02-bf7a-bdc1e5ba9e53-cert podName:01bbae6a-0286-4e02-bf7a-bdc1e5ba9e53 nodeName:}" failed. No retries permitted until 2025-10-13 17:36:03.249766782 +0000 UTC m=+708.707016914 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/01bbae6a-0286-4e02-bf7a-bdc1e5ba9e53-cert") pod "frr-k8s-webhook-server-64bf5d555-wlwb8" (UID: "01bbae6a-0286-4e02-bf7a-bdc1e5ba9e53") : secret "frr-k8s-webhook-server-cert" not found Oct 13 17:36:02 crc kubenswrapper[4720]: I1013 17:36:02.749955 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/d10a0690-99f5-416a-b597-d41fa0635070-reloader\") pod \"frr-k8s-fw488\" (UID: \"d10a0690-99f5-416a-b597-d41fa0635070\") " pod="metallb-system/frr-k8s-fw488" Oct 13 17:36:02 crc kubenswrapper[4720]: I1013 17:36:02.750763 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/d10a0690-99f5-416a-b597-d41fa0635070-frr-startup\") pod \"frr-k8s-fw488\" (UID: \"d10a0690-99f5-416a-b597-d41fa0635070\") " pod="metallb-system/frr-k8s-fw488" Oct 13 17:36:02 crc kubenswrapper[4720]: I1013 17:36:02.764152 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d10a0690-99f5-416a-b597-d41fa0635070-metrics-certs\") pod \"frr-k8s-fw488\" (UID: \"d10a0690-99f5-416a-b597-d41fa0635070\") " pod="metallb-system/frr-k8s-fw488" Oct 13 17:36:02 crc kubenswrapper[4720]: I1013 17:36:02.766871 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfqrh\" (UniqueName: \"kubernetes.io/projected/d10a0690-99f5-416a-b597-d41fa0635070-kube-api-access-cfqrh\") pod \"frr-k8s-fw488\" (UID: \"d10a0690-99f5-416a-b597-d41fa0635070\") " pod="metallb-system/frr-k8s-fw488" Oct 13 17:36:02 crc kubenswrapper[4720]: I1013 17:36:02.778372 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-br5bh\" (UniqueName: \"kubernetes.io/projected/01bbae6a-0286-4e02-bf7a-bdc1e5ba9e53-kube-api-access-br5bh\") pod \"frr-k8s-webhook-server-64bf5d555-wlwb8\" (UID: \"01bbae6a-0286-4e02-bf7a-bdc1e5ba9e53\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-wlwb8" Oct 13 17:36:02 crc kubenswrapper[4720]: I1013 17:36:02.850251 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmn8x\" (UniqueName: \"kubernetes.io/projected/34074aee-3c24-4d8c-929b-d0feb37ead02-kube-api-access-hmn8x\") pod \"controller-68d546b9d8-ztrn5\" (UID: \"34074aee-3c24-4d8c-929b-d0feb37ead02\") " pod="metallb-system/controller-68d546b9d8-ztrn5" Oct 13 17:36:02 crc kubenswrapper[4720]: I1013 
17:36:02.850302 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bd395621-3d53-4b0b-b8da-f7d5c7df9570-metrics-certs\") pod \"speaker-2kfxp\" (UID: \"bd395621-3d53-4b0b-b8da-f7d5c7df9570\") " pod="metallb-system/speaker-2kfxp" Oct 13 17:36:02 crc kubenswrapper[4720]: I1013 17:36:02.850327 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/34074aee-3c24-4d8c-929b-d0feb37ead02-cert\") pod \"controller-68d546b9d8-ztrn5\" (UID: \"34074aee-3c24-4d8c-929b-d0feb37ead02\") " pod="metallb-system/controller-68d546b9d8-ztrn5" Oct 13 17:36:02 crc kubenswrapper[4720]: I1013 17:36:02.850367 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/34074aee-3c24-4d8c-929b-d0feb37ead02-metrics-certs\") pod \"controller-68d546b9d8-ztrn5\" (UID: \"34074aee-3c24-4d8c-929b-d0feb37ead02\") " pod="metallb-system/controller-68d546b9d8-ztrn5" Oct 13 17:36:02 crc kubenswrapper[4720]: I1013 17:36:02.850394 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/bd395621-3d53-4b0b-b8da-f7d5c7df9570-memberlist\") pod \"speaker-2kfxp\" (UID: \"bd395621-3d53-4b0b-b8da-f7d5c7df9570\") " pod="metallb-system/speaker-2kfxp" Oct 13 17:36:02 crc kubenswrapper[4720]: I1013 17:36:02.850470 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/bd395621-3d53-4b0b-b8da-f7d5c7df9570-metallb-excludel2\") pod \"speaker-2kfxp\" (UID: \"bd395621-3d53-4b0b-b8da-f7d5c7df9570\") " pod="metallb-system/speaker-2kfxp" Oct 13 17:36:02 crc kubenswrapper[4720]: I1013 17:36:02.850493 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hdnc\" (UniqueName: \"kubernetes.io/projected/bd395621-3d53-4b0b-b8da-f7d5c7df9570-kube-api-access-7hdnc\") pod \"speaker-2kfxp\" (UID: \"bd395621-3d53-4b0b-b8da-f7d5c7df9570\") " pod="metallb-system/speaker-2kfxp" Oct 13 17:36:02 crc kubenswrapper[4720]: E1013 17:36:02.850515 4720 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Oct 13 17:36:02 crc kubenswrapper[4720]: E1013 17:36:02.850582 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/34074aee-3c24-4d8c-929b-d0feb37ead02-metrics-certs podName:34074aee-3c24-4d8c-929b-d0feb37ead02 nodeName:}" failed. No retries permitted until 2025-10-13 17:36:03.350564625 +0000 UTC m=+708.807814747 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/34074aee-3c24-4d8c-929b-d0feb37ead02-metrics-certs") pod "controller-68d546b9d8-ztrn5" (UID: "34074aee-3c24-4d8c-929b-d0feb37ead02") : secret "controller-certs-secret" not found Oct 13 17:36:02 crc kubenswrapper[4720]: E1013 17:36:02.850650 4720 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Oct 13 17:36:02 crc kubenswrapper[4720]: E1013 17:36:02.850738 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd395621-3d53-4b0b-b8da-f7d5c7df9570-memberlist podName:bd395621-3d53-4b0b-b8da-f7d5c7df9570 nodeName:}" failed. No retries permitted until 2025-10-13 17:36:03.350713089 +0000 UTC m=+708.807963221 (durationBeforeRetry 500ms). 
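The nestedpendingoperations lines implement per-volume retry pacing: the first failure blocks retries for 500ms, and when the same volume fails again the wait doubles (the memberlist retry at 17:36:03 below is blocked for 1s). A generic sketch of that doubling-with-cap policy; the 2-minute cap is an assumed value, not taken from this log:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	backoff, maxDelay := 500*time.Millisecond, 2*time.Minute
    	for attempt := 1; attempt <= 5; attempt++ {
    		fmt.Printf("attempt %d failed; no retries permitted for %v\n", attempt, backoff)
    		backoff *= 2 // 500ms -> 1s -> 2s -> ...
    		if backoff > maxDelay {
    			backoff = maxDelay
    		}
    	}
    }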
Oct 13 17:36:02 crc kubenswrapper[4720]: I1013 17:36:02.851495 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/bd395621-3d53-4b0b-b8da-f7d5c7df9570-metallb-excludel2\") pod \"speaker-2kfxp\" (UID: \"bd395621-3d53-4b0b-b8da-f7d5c7df9570\") " pod="metallb-system/speaker-2kfxp"
Oct 13 17:36:02 crc kubenswrapper[4720]: I1013 17:36:02.853021 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bd395621-3d53-4b0b-b8da-f7d5c7df9570-metrics-certs\") pod \"speaker-2kfxp\" (UID: \"bd395621-3d53-4b0b-b8da-f7d5c7df9570\") " pod="metallb-system/speaker-2kfxp"
Oct 13 17:36:02 crc kubenswrapper[4720]: I1013 17:36:02.853509 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/34074aee-3c24-4d8c-929b-d0feb37ead02-cert\") pod \"controller-68d546b9d8-ztrn5\" (UID: \"34074aee-3c24-4d8c-929b-d0feb37ead02\") " pod="metallb-system/controller-68d546b9d8-ztrn5"
Oct 13 17:36:02 crc kubenswrapper[4720]: I1013 17:36:02.868432 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmn8x\" (UniqueName: \"kubernetes.io/projected/34074aee-3c24-4d8c-929b-d0feb37ead02-kube-api-access-hmn8x\") pod \"controller-68d546b9d8-ztrn5\" (UID: \"34074aee-3c24-4d8c-929b-d0feb37ead02\") " pod="metallb-system/controller-68d546b9d8-ztrn5"
Oct 13 17:36:02 crc kubenswrapper[4720]: I1013 17:36:02.871020 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hdnc\" (UniqueName: \"kubernetes.io/projected/bd395621-3d53-4b0b-b8da-f7d5c7df9570-kube-api-access-7hdnc\") pod \"speaker-2kfxp\" (UID: \"bd395621-3d53-4b0b-b8da-f7d5c7df9570\") " pod="metallb-system/speaker-2kfxp"
Oct 13 17:36:02 crc kubenswrapper[4720]: I1013 17:36:02.884650 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-fw488"
Oct 13 17:36:03 crc kubenswrapper[4720]: I1013 17:36:03.256934 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/01bbae6a-0286-4e02-bf7a-bdc1e5ba9e53-cert\") pod \"frr-k8s-webhook-server-64bf5d555-wlwb8\" (UID: \"01bbae6a-0286-4e02-bf7a-bdc1e5ba9e53\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-wlwb8"
Oct 13 17:36:03 crc kubenswrapper[4720]: I1013 17:36:03.262898 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/01bbae6a-0286-4e02-bf7a-bdc1e5ba9e53-cert\") pod \"frr-k8s-webhook-server-64bf5d555-wlwb8\" (UID: \"01bbae6a-0286-4e02-bf7a-bdc1e5ba9e53\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-wlwb8"
Oct 13 17:36:03 crc kubenswrapper[4720]: I1013 17:36:03.358667 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/bd395621-3d53-4b0b-b8da-f7d5c7df9570-memberlist\") pod \"speaker-2kfxp\" (UID: \"bd395621-3d53-4b0b-b8da-f7d5c7df9570\") " pod="metallb-system/speaker-2kfxp"
Oct 13 17:36:03 crc kubenswrapper[4720]: I1013 17:36:03.358859 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/34074aee-3c24-4d8c-929b-d0feb37ead02-metrics-certs\") pod \"controller-68d546b9d8-ztrn5\" (UID: \"34074aee-3c24-4d8c-929b-d0feb37ead02\") " pod="metallb-system/controller-68d546b9d8-ztrn5"
Oct 13 17:36:03 crc kubenswrapper[4720]: E1013 17:36:03.358894 4720 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Oct 13 17:36:03 crc kubenswrapper[4720]: E1013 17:36:03.358972 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd395621-3d53-4b0b-b8da-f7d5c7df9570-memberlist podName:bd395621-3d53-4b0b-b8da-f7d5c7df9570 nodeName:}" failed. No retries permitted until 2025-10-13 17:36:04.35894927 +0000 UTC m=+709.816199412 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/bd395621-3d53-4b0b-b8da-f7d5c7df9570-memberlist") pod "speaker-2kfxp" (UID: "bd395621-3d53-4b0b-b8da-f7d5c7df9570") : secret "metallb-memberlist" not found
Oct 13 17:36:03 crc kubenswrapper[4720]: I1013 17:36:03.362988 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/34074aee-3c24-4d8c-929b-d0feb37ead02-metrics-certs\") pod \"controller-68d546b9d8-ztrn5\" (UID: \"34074aee-3c24-4d8c-929b-d0feb37ead02\") " pod="metallb-system/controller-68d546b9d8-ztrn5"
Oct 13 17:36:03 crc kubenswrapper[4720]: I1013 17:36:03.491950 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-wlwb8"
Oct 13 17:36:03 crc kubenswrapper[4720]: I1013 17:36:03.573098 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-68d546b9d8-ztrn5"
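The retries converge because the missing Secrets are created by other components moments after the pods are scheduled: the webhook cert mount fails at 17:36:02.749741 and succeeds on the 17:36:03 retry, while metallb-memberlist needs one more backoff round. What the kubelet is effectively waiting for is just the Secret object's existence; a hand-rolled illustration with client-go (the kubelet uses its own secret manager rather than polling like this):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	for {
    		_, err := cs.CoreV1().Secrets("metallb-system").Get(
    			context.TODO(), "metallb-memberlist", metav1.GetOptions{})
    		if err == nil {
    			fmt.Println("secret exists; the mount can proceed")
    			return
    		}
    		time.Sleep(time.Second)
    	}
    }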
Oct 13 17:36:03 crc kubenswrapper[4720]: I1013 17:36:03.713926 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fw488" event={"ID":"d10a0690-99f5-416a-b597-d41fa0635070","Type":"ContainerStarted","Data":"8b30ee68d4471f734c89e25c954b5135df3d4a891503d132584f8ba27e8dae49"}
Oct 13 17:36:03 crc kubenswrapper[4720]: I1013 17:36:03.798915 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-68d546b9d8-ztrn5"]
Oct 13 17:36:03 crc kubenswrapper[4720]: W1013 17:36:03.803466 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34074aee_3c24_4d8c_929b_d0feb37ead02.slice/crio-041e5feefdc1de3723d88cbb8911c82ef48759f801cd77284d2eef16cf08e448 WatchSource:0}: Error finding container 041e5feefdc1de3723d88cbb8911c82ef48759f801cd77284d2eef16cf08e448: Status 404 returned error can't find the container with id 041e5feefdc1de3723d88cbb8911c82ef48759f801cd77284d2eef16cf08e448
Oct 13 17:36:03 crc kubenswrapper[4720]: I1013 17:36:03.953281 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-64bf5d555-wlwb8"]
Oct 13 17:36:04 crc kubenswrapper[4720]: I1013 17:36:04.372625 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/bd395621-3d53-4b0b-b8da-f7d5c7df9570-memberlist\") pod \"speaker-2kfxp\" (UID: \"bd395621-3d53-4b0b-b8da-f7d5c7df9570\") " pod="metallb-system/speaker-2kfxp"
Oct 13 17:36:04 crc kubenswrapper[4720]: I1013 17:36:04.382449 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/bd395621-3d53-4b0b-b8da-f7d5c7df9570-memberlist\") pod \"speaker-2kfxp\" (UID: \"bd395621-3d53-4b0b-b8da-f7d5c7df9570\") " pod="metallb-system/speaker-2kfxp"
Oct 13 17:36:04 crc kubenswrapper[4720]: I1013 17:36:04.441405 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-2kfxp"
Oct 13 17:36:04 crc kubenswrapper[4720]: W1013 17:36:04.473312 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd395621_3d53_4b0b_b8da_f7d5c7df9570.slice/crio-267748e9059b9473de2ee60914c2a783ae50126041851768c0280e5d7cf00df6 WatchSource:0}: Error finding container 267748e9059b9473de2ee60914c2a783ae50126041851768c0280e5d7cf00df6: Status 404 returned error can't find the container with id 267748e9059b9473de2ee60914c2a783ae50126041851768c0280e5d7cf00df6
Oct 13 17:36:04 crc kubenswrapper[4720]: I1013 17:36:04.724524 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-68d546b9d8-ztrn5" event={"ID":"34074aee-3c24-4d8c-929b-d0feb37ead02","Type":"ContainerStarted","Data":"35ade460132fc961ad89396196f6fa9b3ab08c4e7cad8e9119630f9877a7908a"}
Oct 13 17:36:04 crc kubenswrapper[4720]: I1013 17:36:04.724581 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-68d546b9d8-ztrn5" event={"ID":"34074aee-3c24-4d8c-929b-d0feb37ead02","Type":"ContainerStarted","Data":"fdcb3079eef55a474dd80bef1c15df29e5292bc3072d097486ff46bfc17129db"}
Oct 13 17:36:04 crc kubenswrapper[4720]: I1013 17:36:04.724599 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-68d546b9d8-ztrn5" event={"ID":"34074aee-3c24-4d8c-929b-d0feb37ead02","Type":"ContainerStarted","Data":"041e5feefdc1de3723d88cbb8911c82ef48759f801cd77284d2eef16cf08e448"}
Oct 13 17:36:04 crc kubenswrapper[4720]: I1013 17:36:04.725490 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-68d546b9d8-ztrn5"
Oct 13 17:36:04 crc kubenswrapper[4720]: I1013 17:36:04.725936 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-2kfxp" event={"ID":"bd395621-3d53-4b0b-b8da-f7d5c7df9570","Type":"ContainerStarted","Data":"267748e9059b9473de2ee60914c2a783ae50126041851768c0280e5d7cf00df6"}
Oct 13 17:36:04 crc kubenswrapper[4720]: I1013 17:36:04.727257 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-wlwb8" event={"ID":"01bbae6a-0286-4e02-bf7a-bdc1e5ba9e53","Type":"ContainerStarted","Data":"ec8b3bd5e21dfb2305f73950bcc08110c58e2f2b5b0a8587496a85009668b8ab"}
Oct 13 17:36:04 crc kubenswrapper[4720]: I1013 17:36:04.756471 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-68d546b9d8-ztrn5" podStartSLOduration=2.7564295 podStartE2EDuration="2.7564295s" podCreationTimestamp="2025-10-13 17:36:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:36:04.745988283 +0000 UTC m=+710.203238425" watchObservedRunningTime="2025-10-13 17:36:04.7564295 +0000 UTC m=+710.213679642"
Oct 13 17:36:05 crc kubenswrapper[4720]: I1013 17:36:05.735305 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-2kfxp" event={"ID":"bd395621-3d53-4b0b-b8da-f7d5c7df9570","Type":"ContainerStarted","Data":"15cc5fc17db24f24da886ac1cd31e9d03ef5251871cb89cb598002bbbcf0bc7a"}
Oct 13 17:36:05 crc kubenswrapper[4720]: I1013 17:36:05.735626 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-2kfxp" event={"ID":"bd395621-3d53-4b0b-b8da-f7d5c7df9570","Type":"ContainerStarted","Data":"0668805aeedc24c30f192dcbd571ac96e5eec46b80699251eaecaef01568082c"}
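The "0001-01-01 00:00:00 +0000 UTC" pull timestamps in the controller record above are simply Go's zero time.Time: no image pull ever started because the images were already on the node, so the pull window is zero and podStartSLOduration equals podStartE2EDuration (2.7564295s for both). In Go terms:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	var firstStartedPulling time.Time                  // never set: image was cached
    	fmt.Println(firstStartedPulling)                   // 0001-01-01 00:00:00 +0000 UTC
    	fmt.Println(firstStartedPulling.IsZero())          // true
    	fmt.Println(time.Time{}.Sub(time.Time{}))          // 0s of pull time to subtract
    }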
Oct 13 17:36:05 crc kubenswrapper[4720]: I1013 17:36:05.757643 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-2kfxp" podStartSLOduration=3.757621334 podStartE2EDuration="3.757621334s" podCreationTimestamp="2025-10-13 17:36:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:36:05.753944901 +0000 UTC m=+711.211195043" watchObservedRunningTime="2025-10-13 17:36:05.757621334 +0000 UTC m=+711.214871476"
Oct 13 17:36:06 crc kubenswrapper[4720]: I1013 17:36:06.740410 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-2kfxp"
Oct 13 17:36:11 crc kubenswrapper[4720]: I1013 17:36:11.782076 4720 generic.go:334] "Generic (PLEG): container finished" podID="d10a0690-99f5-416a-b597-d41fa0635070" containerID="0aa1bffdad153db69dba047b422e1671dfc246a10248c6cebd7465dca1766e5f" exitCode=0
Oct 13 17:36:11 crc kubenswrapper[4720]: I1013 17:36:11.782148 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fw488" event={"ID":"d10a0690-99f5-416a-b597-d41fa0635070","Type":"ContainerDied","Data":"0aa1bffdad153db69dba047b422e1671dfc246a10248c6cebd7465dca1766e5f"}
Oct 13 17:36:11 crc kubenswrapper[4720]: I1013 17:36:11.786584 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-wlwb8" event={"ID":"01bbae6a-0286-4e02-bf7a-bdc1e5ba9e53","Type":"ContainerStarted","Data":"ffe6698e29dc005b366fc2c659f53b86c5046ba52c0f0fe14f88847584db243e"}
Oct 13 17:36:11 crc kubenswrapper[4720]: I1013 17:36:11.786822 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-wlwb8"
Oct 13 17:36:12 crc kubenswrapper[4720]: I1013 17:36:12.798218 4720 generic.go:334] "Generic (PLEG): container finished" podID="d10a0690-99f5-416a-b597-d41fa0635070" containerID="f369381c9ad8bb71bb1a14c15ce394ec5bb72c4cb649e2a5df637c67b5ea1c69" exitCode=0
Oct 13 17:36:12 crc kubenswrapper[4720]: I1013 17:36:12.798309 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fw488" event={"ID":"d10a0690-99f5-416a-b597-d41fa0635070","Type":"ContainerDied","Data":"f369381c9ad8bb71bb1a14c15ce394ec5bb72c4cb649e2a5df637c67b5ea1c69"}
Oct 13 17:36:12 crc kubenswrapper[4720]: I1013 17:36:12.836292 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-wlwb8" podStartSLOduration=3.28243963 podStartE2EDuration="10.836270399s" podCreationTimestamp="2025-10-13 17:36:02 +0000 UTC" firstStartedPulling="2025-10-13 17:36:03.983885471 +0000 UTC m=+709.441135603" lastFinishedPulling="2025-10-13 17:36:11.53771621 +0000 UTC m=+716.994966372" observedRunningTime="2025-10-13 17:36:11.840652756 +0000 UTC m=+717.297902898" watchObservedRunningTime="2025-10-13 17:36:12.836270399 +0000 UTC m=+718.293520541"
Oct 13 17:36:13 crc kubenswrapper[4720]: I1013 17:36:13.578164 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-68d546b9d8-ztrn5"
Oct 13 17:36:13 crc kubenswrapper[4720]: I1013 17:36:13.812305 4720 generic.go:334] "Generic (PLEG): container finished" podID="d10a0690-99f5-416a-b597-d41fa0635070" containerID="fd5e48a459e9daafafd0f6a88c8342e5c101c81a338e99d7814f37b6c6066196" exitCode=0
Oct 13 17:36:13 crc kubenswrapper[4720]: I1013 17:36:13.812387 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fw488" event={"ID":"d10a0690-99f5-416a-b597-d41fa0635070","Type":"ContainerDied","Data":"fd5e48a459e9daafafd0f6a88c8342e5c101c81a338e99d7814f37b6c6066196"}
Oct 13 17:36:14 crc kubenswrapper[4720]: I1013 17:36:14.445947 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-2kfxp"
Oct 13 17:36:14 crc kubenswrapper[4720]: I1013 17:36:14.821139 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fw488" event={"ID":"d10a0690-99f5-416a-b597-d41fa0635070","Type":"ContainerStarted","Data":"4c6c197532bde0538e6022f9179a85c807c2b13bfd395d9c375760245bdbc8f9"}
Oct 13 17:36:14 crc kubenswrapper[4720]: I1013 17:36:14.821437 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fw488" event={"ID":"d10a0690-99f5-416a-b597-d41fa0635070","Type":"ContainerStarted","Data":"05a901957889231f10fcfe0c5e98a81b63b706289f8ea76b178b0892254943a0"}
Oct 13 17:36:14 crc kubenswrapper[4720]: I1013 17:36:14.821449 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fw488" event={"ID":"d10a0690-99f5-416a-b597-d41fa0635070","Type":"ContainerStarted","Data":"3ef845d2e60921ee06b11f2f952dc8922ac41e501a3616cfbc14b7dd992b0448"}
Oct 13 17:36:14 crc kubenswrapper[4720]: I1013 17:36:14.821459 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fw488" event={"ID":"d10a0690-99f5-416a-b597-d41fa0635070","Type":"ContainerStarted","Data":"b031d7a8fb47954c30191ecf4c95366c668e9a096feddca9904ec99ec4b2b35a"}
Oct 13 17:36:15 crc kubenswrapper[4720]: I1013 17:36:15.212894 4720 patch_prober.go:28] interesting pod/machine-config-daemon-htwnl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 13 17:36:15 crc kubenswrapper[4720]: I1013 17:36:15.212972 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 13 17:36:15 crc kubenswrapper[4720]: I1013 17:36:15.835498 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fw488" event={"ID":"d10a0690-99f5-416a-b597-d41fa0635070","Type":"ContainerStarted","Data":"d5a8a2373b394246bce54322fccf6755710f96f109c554a044d2e0a36c64d034"}
Oct 13 17:36:15 crc kubenswrapper[4720]: I1013 17:36:15.835584 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fw488" event={"ID":"d10a0690-99f5-416a-b597-d41fa0635070","Type":"ContainerStarted","Data":"fb5a1a532a58a2d2b67e1b7d9933b61495a66d9e4a17750cbba2a46b291235ea"}
Oct 13 17:36:15 crc kubenswrapper[4720]: I1013 17:36:15.836562 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-fw488"
Oct 13 17:36:15 crc kubenswrapper[4720]: I1013 17:36:15.878595 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-fw488" podStartSLOduration=5.33290535 podStartE2EDuration="13.878567223s" podCreationTimestamp="2025-10-13 17:36:02 +0000 UTC" firstStartedPulling="2025-10-13 17:36:02.980296756 +0000 UTC m=+708.437546888" lastFinishedPulling="2025-10-13 17:36:11.525958599 +0000 UTC m=+716.983208761" observedRunningTime="2025-10-13 17:36:15.869955192 +0000 UTC m=+721.327205354" watchObservedRunningTime="2025-10-13 17:36:15.878567223 +0000 UTC m=+721.335817385"
observedRunningTime="2025-10-13 17:36:15.869955192 +0000 UTC m=+721.327205354" watchObservedRunningTime="2025-10-13 17:36:15.878567223 +0000 UTC m=+721.335817385" Oct 13 17:36:17 crc kubenswrapper[4720]: I1013 17:36:17.472834 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-r4j59"] Oct 13 17:36:17 crc kubenswrapper[4720]: I1013 17:36:17.474311 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-r4j59" Oct 13 17:36:17 crc kubenswrapper[4720]: I1013 17:36:17.475989 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-7h5fg" Oct 13 17:36:17 crc kubenswrapper[4720]: I1013 17:36:17.476809 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Oct 13 17:36:17 crc kubenswrapper[4720]: I1013 17:36:17.477036 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Oct 13 17:36:17 crc kubenswrapper[4720]: I1013 17:36:17.485758 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-r4j59"] Oct 13 17:36:17 crc kubenswrapper[4720]: I1013 17:36:17.571584 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mk7rl\" (UniqueName: \"kubernetes.io/projected/47342be0-cebb-4699-b8a1-780da7b66a61-kube-api-access-mk7rl\") pod \"openstack-operator-index-r4j59\" (UID: \"47342be0-cebb-4699-b8a1-780da7b66a61\") " pod="openstack-operators/openstack-operator-index-r4j59" Oct 13 17:36:17 crc kubenswrapper[4720]: I1013 17:36:17.673471 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mk7rl\" (UniqueName: \"kubernetes.io/projected/47342be0-cebb-4699-b8a1-780da7b66a61-kube-api-access-mk7rl\") pod \"openstack-operator-index-r4j59\" (UID: \"47342be0-cebb-4699-b8a1-780da7b66a61\") " pod="openstack-operators/openstack-operator-index-r4j59" Oct 13 17:36:17 crc kubenswrapper[4720]: I1013 17:36:17.689966 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mk7rl\" (UniqueName: \"kubernetes.io/projected/47342be0-cebb-4699-b8a1-780da7b66a61-kube-api-access-mk7rl\") pod \"openstack-operator-index-r4j59\" (UID: \"47342be0-cebb-4699-b8a1-780da7b66a61\") " pod="openstack-operators/openstack-operator-index-r4j59" Oct 13 17:36:17 crc kubenswrapper[4720]: I1013 17:36:17.788528 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-r4j59" Oct 13 17:36:17 crc kubenswrapper[4720]: I1013 17:36:17.886167 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-fw488" Oct 13 17:36:17 crc kubenswrapper[4720]: I1013 17:36:17.943356 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-fw488" Oct 13 17:36:18 crc kubenswrapper[4720]: I1013 17:36:18.038286 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-r4j59"] Oct 13 17:36:18 crc kubenswrapper[4720]: I1013 17:36:18.859913 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-r4j59" event={"ID":"47342be0-cebb-4699-b8a1-780da7b66a61","Type":"ContainerStarted","Data":"f0e0cb5fb272efe6f9807b2490f4b39190b57f136092c55089fab6294f14ba0e"} Oct 13 17:36:20 crc kubenswrapper[4720]: I1013 17:36:20.835176 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-r4j59"] Oct 13 17:36:20 crc kubenswrapper[4720]: I1013 17:36:20.877794 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-r4j59" event={"ID":"47342be0-cebb-4699-b8a1-780da7b66a61","Type":"ContainerStarted","Data":"12d0e345b35bd9773106883c23e5c197910afc77683eb0b20ece866f6c82c530"} Oct 13 17:36:20 crc kubenswrapper[4720]: I1013 17:36:20.897808 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-r4j59" podStartSLOduration=1.547490933 podStartE2EDuration="3.897790777s" podCreationTimestamp="2025-10-13 17:36:17 +0000 UTC" firstStartedPulling="2025-10-13 17:36:18.050053256 +0000 UTC m=+723.507303388" lastFinishedPulling="2025-10-13 17:36:20.40035311 +0000 UTC m=+725.857603232" observedRunningTime="2025-10-13 17:36:20.894719538 +0000 UTC m=+726.351969730" watchObservedRunningTime="2025-10-13 17:36:20.897790777 +0000 UTC m=+726.355040899" Oct 13 17:36:21 crc kubenswrapper[4720]: I1013 17:36:21.443952 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-qnssk"] Oct 13 17:36:21 crc kubenswrapper[4720]: I1013 17:36:21.456144 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-qnssk" Oct 13 17:36:21 crc kubenswrapper[4720]: I1013 17:36:21.461444 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-qnssk"] Oct 13 17:36:21 crc kubenswrapper[4720]: I1013 17:36:21.533252 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztm8m\" (UniqueName: \"kubernetes.io/projected/21cd75be-1f87-4a83-a140-d31263d1c86f-kube-api-access-ztm8m\") pod \"openstack-operator-index-qnssk\" (UID: \"21cd75be-1f87-4a83-a140-d31263d1c86f\") " pod="openstack-operators/openstack-operator-index-qnssk" Oct 13 17:36:21 crc kubenswrapper[4720]: I1013 17:36:21.634718 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztm8m\" (UniqueName: \"kubernetes.io/projected/21cd75be-1f87-4a83-a140-d31263d1c86f-kube-api-access-ztm8m\") pod \"openstack-operator-index-qnssk\" (UID: \"21cd75be-1f87-4a83-a140-d31263d1c86f\") " pod="openstack-operators/openstack-operator-index-qnssk" Oct 13 17:36:21 crc kubenswrapper[4720]: I1013 17:36:21.665110 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztm8m\" (UniqueName: \"kubernetes.io/projected/21cd75be-1f87-4a83-a140-d31263d1c86f-kube-api-access-ztm8m\") pod \"openstack-operator-index-qnssk\" (UID: \"21cd75be-1f87-4a83-a140-d31263d1c86f\") " pod="openstack-operators/openstack-operator-index-qnssk" Oct 13 17:36:21 crc kubenswrapper[4720]: I1013 17:36:21.809108 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-qnssk" Oct 13 17:36:21 crc kubenswrapper[4720]: I1013 17:36:21.884359 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-r4j59" podUID="47342be0-cebb-4699-b8a1-780da7b66a61" containerName="registry-server" containerID="cri-o://12d0e345b35bd9773106883c23e5c197910afc77683eb0b20ece866f6c82c530" gracePeriod=2 Oct 13 17:36:22 crc kubenswrapper[4720]: I1013 17:36:22.290533 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-r4j59" Oct 13 17:36:22 crc kubenswrapper[4720]: I1013 17:36:22.305424 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-qnssk"] Oct 13 17:36:22 crc kubenswrapper[4720]: I1013 17:36:22.347298 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mk7rl\" (UniqueName: \"kubernetes.io/projected/47342be0-cebb-4699-b8a1-780da7b66a61-kube-api-access-mk7rl\") pod \"47342be0-cebb-4699-b8a1-780da7b66a61\" (UID: \"47342be0-cebb-4699-b8a1-780da7b66a61\") " Oct 13 17:36:22 crc kubenswrapper[4720]: I1013 17:36:22.351546 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47342be0-cebb-4699-b8a1-780da7b66a61-kube-api-access-mk7rl" (OuterVolumeSpecName: "kube-api-access-mk7rl") pod "47342be0-cebb-4699-b8a1-780da7b66a61" (UID: "47342be0-cebb-4699-b8a1-780da7b66a61"). InnerVolumeSpecName "kube-api-access-mk7rl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:36:22 crc kubenswrapper[4720]: I1013 17:36:22.449669 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mk7rl\" (UniqueName: \"kubernetes.io/projected/47342be0-cebb-4699-b8a1-780da7b66a61-kube-api-access-mk7rl\") on node \"crc\" DevicePath \"\"" Oct 13 17:36:22 crc kubenswrapper[4720]: I1013 17:36:22.894180 4720 generic.go:334] "Generic (PLEG): container finished" podID="47342be0-cebb-4699-b8a1-780da7b66a61" containerID="12d0e345b35bd9773106883c23e5c197910afc77683eb0b20ece866f6c82c530" exitCode=0 Oct 13 17:36:22 crc kubenswrapper[4720]: I1013 17:36:22.894288 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-r4j59" Oct 13 17:36:22 crc kubenswrapper[4720]: I1013 17:36:22.894349 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-r4j59" event={"ID":"47342be0-cebb-4699-b8a1-780da7b66a61","Type":"ContainerDied","Data":"12d0e345b35bd9773106883c23e5c197910afc77683eb0b20ece866f6c82c530"} Oct 13 17:36:22 crc kubenswrapper[4720]: I1013 17:36:22.894435 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-r4j59" event={"ID":"47342be0-cebb-4699-b8a1-780da7b66a61","Type":"ContainerDied","Data":"f0e0cb5fb272efe6f9807b2490f4b39190b57f136092c55089fab6294f14ba0e"} Oct 13 17:36:22 crc kubenswrapper[4720]: I1013 17:36:22.894528 4720 scope.go:117] "RemoveContainer" containerID="12d0e345b35bd9773106883c23e5c197910afc77683eb0b20ece866f6c82c530" Oct 13 17:36:22 crc kubenswrapper[4720]: I1013 17:36:22.897256 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-qnssk" event={"ID":"21cd75be-1f87-4a83-a140-d31263d1c86f","Type":"ContainerStarted","Data":"8d4a2a205d61b9ca276a8c3276699d53912d73d825e90b36d191614013f7f598"} Oct 13 17:36:22 crc kubenswrapper[4720]: I1013 17:36:22.897295 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-qnssk" event={"ID":"21cd75be-1f87-4a83-a140-d31263d1c86f","Type":"ContainerStarted","Data":"8fbeef70384181750735017c3ded8267fa00ebc58873b1326bb0d6d7b4e27b48"} Oct 13 17:36:22 crc kubenswrapper[4720]: I1013 17:36:22.918216 4720 scope.go:117] "RemoveContainer" containerID="12d0e345b35bd9773106883c23e5c197910afc77683eb0b20ece866f6c82c530" Oct 13 17:36:22 crc kubenswrapper[4720]: I1013 17:36:22.919698 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-qnssk" podStartSLOduration=1.8712709589999998 podStartE2EDuration="1.919676141s" podCreationTimestamp="2025-10-13 17:36:21 +0000 UTC" firstStartedPulling="2025-10-13 17:36:22.32314563 +0000 UTC m=+727.780395772" lastFinishedPulling="2025-10-13 17:36:22.371550822 +0000 UTC m=+727.828800954" observedRunningTime="2025-10-13 17:36:22.916315775 +0000 UTC m=+728.373565937" watchObservedRunningTime="2025-10-13 17:36:22.919676141 +0000 UTC m=+728.376926273" Oct 13 17:36:22 crc kubenswrapper[4720]: E1013 17:36:22.920080 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12d0e345b35bd9773106883c23e5c197910afc77683eb0b20ece866f6c82c530\": container with ID starting with 12d0e345b35bd9773106883c23e5c197910afc77683eb0b20ece866f6c82c530 not found: ID does not exist" 
containerID="12d0e345b35bd9773106883c23e5c197910afc77683eb0b20ece866f6c82c530" Oct 13 17:36:22 crc kubenswrapper[4720]: I1013 17:36:22.920141 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12d0e345b35bd9773106883c23e5c197910afc77683eb0b20ece866f6c82c530"} err="failed to get container status \"12d0e345b35bd9773106883c23e5c197910afc77683eb0b20ece866f6c82c530\": rpc error: code = NotFound desc = could not find container \"12d0e345b35bd9773106883c23e5c197910afc77683eb0b20ece866f6c82c530\": container with ID starting with 12d0e345b35bd9773106883c23e5c197910afc77683eb0b20ece866f6c82c530 not found: ID does not exist" Oct 13 17:36:22 crc kubenswrapper[4720]: I1013 17:36:22.956425 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-r4j59"] Oct 13 17:36:22 crc kubenswrapper[4720]: I1013 17:36:22.964477 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-r4j59"] Oct 13 17:36:23 crc kubenswrapper[4720]: I1013 17:36:23.177399 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47342be0-cebb-4699-b8a1-780da7b66a61" path="/var/lib/kubelet/pods/47342be0-cebb-4699-b8a1-780da7b66a61/volumes" Oct 13 17:36:23 crc kubenswrapper[4720]: I1013 17:36:23.504471 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-wlwb8" Oct 13 17:36:31 crc kubenswrapper[4720]: I1013 17:36:31.809637 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-qnssk" Oct 13 17:36:31 crc kubenswrapper[4720]: I1013 17:36:31.810365 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-qnssk" Oct 13 17:36:31 crc kubenswrapper[4720]: I1013 17:36:31.850232 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-qnssk" Oct 13 17:36:32 crc kubenswrapper[4720]: I1013 17:36:32.021104 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-qnssk" Oct 13 17:36:32 crc kubenswrapper[4720]: I1013 17:36:32.890588 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/f4b764a5aa97d0f87d08f09894533aff6b93246953653cc0ea80385c49l69l6"] Oct 13 17:36:32 crc kubenswrapper[4720]: E1013 17:36:32.891275 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47342be0-cebb-4699-b8a1-780da7b66a61" containerName="registry-server" Oct 13 17:36:32 crc kubenswrapper[4720]: I1013 17:36:32.891296 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="47342be0-cebb-4699-b8a1-780da7b66a61" containerName="registry-server" Oct 13 17:36:32 crc kubenswrapper[4720]: I1013 17:36:32.891458 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="47342be0-cebb-4699-b8a1-780da7b66a61" containerName="registry-server" Oct 13 17:36:32 crc kubenswrapper[4720]: I1013 17:36:32.892625 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/f4b764a5aa97d0f87d08f09894533aff6b93246953653cc0ea80385c49l69l6" Oct 13 17:36:32 crc kubenswrapper[4720]: I1013 17:36:32.893689 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-fw488" Oct 13 17:36:32 crc kubenswrapper[4720]: I1013 17:36:32.895289 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-tzd87" Oct 13 17:36:32 crc kubenswrapper[4720]: I1013 17:36:32.907660 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/f4b764a5aa97d0f87d08f09894533aff6b93246953653cc0ea80385c49l69l6"] Oct 13 17:36:33 crc kubenswrapper[4720]: I1013 17:36:33.031674 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f56e63f6-a476-4150-a661-07e988c98f28-util\") pod \"f4b764a5aa97d0f87d08f09894533aff6b93246953653cc0ea80385c49l69l6\" (UID: \"f56e63f6-a476-4150-a661-07e988c98f28\") " pod="openstack-operators/f4b764a5aa97d0f87d08f09894533aff6b93246953653cc0ea80385c49l69l6" Oct 13 17:36:33 crc kubenswrapper[4720]: I1013 17:36:33.031828 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cgkc\" (UniqueName: \"kubernetes.io/projected/f56e63f6-a476-4150-a661-07e988c98f28-kube-api-access-6cgkc\") pod \"f4b764a5aa97d0f87d08f09894533aff6b93246953653cc0ea80385c49l69l6\" (UID: \"f56e63f6-a476-4150-a661-07e988c98f28\") " pod="openstack-operators/f4b764a5aa97d0f87d08f09894533aff6b93246953653cc0ea80385c49l69l6" Oct 13 17:36:33 crc kubenswrapper[4720]: I1013 17:36:33.031875 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f56e63f6-a476-4150-a661-07e988c98f28-bundle\") pod \"f4b764a5aa97d0f87d08f09894533aff6b93246953653cc0ea80385c49l69l6\" (UID: \"f56e63f6-a476-4150-a661-07e988c98f28\") " pod="openstack-operators/f4b764a5aa97d0f87d08f09894533aff6b93246953653cc0ea80385c49l69l6" Oct 13 17:36:33 crc kubenswrapper[4720]: I1013 17:36:33.133590 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f56e63f6-a476-4150-a661-07e988c98f28-util\") pod \"f4b764a5aa97d0f87d08f09894533aff6b93246953653cc0ea80385c49l69l6\" (UID: \"f56e63f6-a476-4150-a661-07e988c98f28\") " pod="openstack-operators/f4b764a5aa97d0f87d08f09894533aff6b93246953653cc0ea80385c49l69l6" Oct 13 17:36:33 crc kubenswrapper[4720]: I1013 17:36:33.133704 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6cgkc\" (UniqueName: \"kubernetes.io/projected/f56e63f6-a476-4150-a661-07e988c98f28-kube-api-access-6cgkc\") pod \"f4b764a5aa97d0f87d08f09894533aff6b93246953653cc0ea80385c49l69l6\" (UID: \"f56e63f6-a476-4150-a661-07e988c98f28\") " pod="openstack-operators/f4b764a5aa97d0f87d08f09894533aff6b93246953653cc0ea80385c49l69l6" Oct 13 17:36:33 crc kubenswrapper[4720]: I1013 17:36:33.133763 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f56e63f6-a476-4150-a661-07e988c98f28-bundle\") pod \"f4b764a5aa97d0f87d08f09894533aff6b93246953653cc0ea80385c49l69l6\" (UID: \"f56e63f6-a476-4150-a661-07e988c98f28\") " pod="openstack-operators/f4b764a5aa97d0f87d08f09894533aff6b93246953653cc0ea80385c49l69l6" Oct 13 17:36:33 crc kubenswrapper[4720]: I1013 
17:36:33.134433 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f56e63f6-a476-4150-a661-07e988c98f28-util\") pod \"f4b764a5aa97d0f87d08f09894533aff6b93246953653cc0ea80385c49l69l6\" (UID: \"f56e63f6-a476-4150-a661-07e988c98f28\") " pod="openstack-operators/f4b764a5aa97d0f87d08f09894533aff6b93246953653cc0ea80385c49l69l6" Oct 13 17:36:33 crc kubenswrapper[4720]: I1013 17:36:33.134679 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f56e63f6-a476-4150-a661-07e988c98f28-bundle\") pod \"f4b764a5aa97d0f87d08f09894533aff6b93246953653cc0ea80385c49l69l6\" (UID: \"f56e63f6-a476-4150-a661-07e988c98f28\") " pod="openstack-operators/f4b764a5aa97d0f87d08f09894533aff6b93246953653cc0ea80385c49l69l6" Oct 13 17:36:33 crc kubenswrapper[4720]: I1013 17:36:33.160440 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6cgkc\" (UniqueName: \"kubernetes.io/projected/f56e63f6-a476-4150-a661-07e988c98f28-kube-api-access-6cgkc\") pod \"f4b764a5aa97d0f87d08f09894533aff6b93246953653cc0ea80385c49l69l6\" (UID: \"f56e63f6-a476-4150-a661-07e988c98f28\") " pod="openstack-operators/f4b764a5aa97d0f87d08f09894533aff6b93246953653cc0ea80385c49l69l6" Oct 13 17:36:33 crc kubenswrapper[4720]: I1013 17:36:33.225409 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/f4b764a5aa97d0f87d08f09894533aff6b93246953653cc0ea80385c49l69l6" Oct 13 17:36:33 crc kubenswrapper[4720]: I1013 17:36:33.470022 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/f4b764a5aa97d0f87d08f09894533aff6b93246953653cc0ea80385c49l69l6"] Oct 13 17:36:33 crc kubenswrapper[4720]: W1013 17:36:33.475083 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf56e63f6_a476_4150_a661_07e988c98f28.slice/crio-a4ff4c034d68bf73877a422e9bfdda73d16683d7e45851848980a8bcef05cd4d WatchSource:0}: Error finding container a4ff4c034d68bf73877a422e9bfdda73d16683d7e45851848980a8bcef05cd4d: Status 404 returned error can't find the container with id a4ff4c034d68bf73877a422e9bfdda73d16683d7e45851848980a8bcef05cd4d Oct 13 17:36:33 crc kubenswrapper[4720]: I1013 17:36:33.996996 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f4b764a5aa97d0f87d08f09894533aff6b93246953653cc0ea80385c49l69l6" event={"ID":"f56e63f6-a476-4150-a661-07e988c98f28","Type":"ContainerStarted","Data":"a4ff4c034d68bf73877a422e9bfdda73d16683d7e45851848980a8bcef05cd4d"} Oct 13 17:36:35 crc kubenswrapper[4720]: I1013 17:36:35.007956 4720 generic.go:334] "Generic (PLEG): container finished" podID="f56e63f6-a476-4150-a661-07e988c98f28" containerID="6c6338a23a10036d15aa6af9260917bbecc997df210a3826f2027c557de937c6" exitCode=0 Oct 13 17:36:35 crc kubenswrapper[4720]: I1013 17:36:35.008017 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f4b764a5aa97d0f87d08f09894533aff6b93246953653cc0ea80385c49l69l6" event={"ID":"f56e63f6-a476-4150-a661-07e988c98f28","Type":"ContainerDied","Data":"6c6338a23a10036d15aa6af9260917bbecc997df210a3826f2027c557de937c6"} Oct 13 17:36:36 crc kubenswrapper[4720]: I1013 17:36:36.025519 4720 generic.go:334] "Generic (PLEG): container finished" podID="f56e63f6-a476-4150-a661-07e988c98f28" containerID="b7e221c46759d646cfda7eacd3c2320d78aa45e7b4a21e6e00ae87aea7b2b2d6" exitCode=0 Oct 13 17:36:36 crc 
kubenswrapper[4720]: I1013 17:36:36.025652 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f4b764a5aa97d0f87d08f09894533aff6b93246953653cc0ea80385c49l69l6" event={"ID":"f56e63f6-a476-4150-a661-07e988c98f28","Type":"ContainerDied","Data":"b7e221c46759d646cfda7eacd3c2320d78aa45e7b4a21e6e00ae87aea7b2b2d6"} Oct 13 17:36:37 crc kubenswrapper[4720]: I1013 17:36:37.037257 4720 generic.go:334] "Generic (PLEG): container finished" podID="f56e63f6-a476-4150-a661-07e988c98f28" containerID="a96f4e270b69d7ae62942748285bdd45a7cf088db9d519cdbf2b465b71f1f42a" exitCode=0 Oct 13 17:36:37 crc kubenswrapper[4720]: I1013 17:36:37.037320 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f4b764a5aa97d0f87d08f09894533aff6b93246953653cc0ea80385c49l69l6" event={"ID":"f56e63f6-a476-4150-a661-07e988c98f28","Type":"ContainerDied","Data":"a96f4e270b69d7ae62942748285bdd45a7cf088db9d519cdbf2b465b71f1f42a"} Oct 13 17:36:38 crc kubenswrapper[4720]: I1013 17:36:38.357221 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/f4b764a5aa97d0f87d08f09894533aff6b93246953653cc0ea80385c49l69l6" Oct 13 17:36:38 crc kubenswrapper[4720]: I1013 17:36:38.439579 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6cgkc\" (UniqueName: \"kubernetes.io/projected/f56e63f6-a476-4150-a661-07e988c98f28-kube-api-access-6cgkc\") pod \"f56e63f6-a476-4150-a661-07e988c98f28\" (UID: \"f56e63f6-a476-4150-a661-07e988c98f28\") " Oct 13 17:36:38 crc kubenswrapper[4720]: I1013 17:36:38.439656 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f56e63f6-a476-4150-a661-07e988c98f28-bundle\") pod \"f56e63f6-a476-4150-a661-07e988c98f28\" (UID: \"f56e63f6-a476-4150-a661-07e988c98f28\") " Oct 13 17:36:38 crc kubenswrapper[4720]: I1013 17:36:38.439681 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f56e63f6-a476-4150-a661-07e988c98f28-util\") pod \"f56e63f6-a476-4150-a661-07e988c98f28\" (UID: \"f56e63f6-a476-4150-a661-07e988c98f28\") " Oct 13 17:36:38 crc kubenswrapper[4720]: I1013 17:36:38.440969 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f56e63f6-a476-4150-a661-07e988c98f28-bundle" (OuterVolumeSpecName: "bundle") pod "f56e63f6-a476-4150-a661-07e988c98f28" (UID: "f56e63f6-a476-4150-a661-07e988c98f28"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 17:36:38 crc kubenswrapper[4720]: I1013 17:36:38.449977 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f56e63f6-a476-4150-a661-07e988c98f28-kube-api-access-6cgkc" (OuterVolumeSpecName: "kube-api-access-6cgkc") pod "f56e63f6-a476-4150-a661-07e988c98f28" (UID: "f56e63f6-a476-4150-a661-07e988c98f28"). InnerVolumeSpecName "kube-api-access-6cgkc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:36:38 crc kubenswrapper[4720]: I1013 17:36:38.457753 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f56e63f6-a476-4150-a661-07e988c98f28-util" (OuterVolumeSpecName: "util") pod "f56e63f6-a476-4150-a661-07e988c98f28" (UID: "f56e63f6-a476-4150-a661-07e988c98f28"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 17:36:38 crc kubenswrapper[4720]: I1013 17:36:38.541690 4720 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f56e63f6-a476-4150-a661-07e988c98f28-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 17:36:38 crc kubenswrapper[4720]: I1013 17:36:38.541738 4720 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f56e63f6-a476-4150-a661-07e988c98f28-util\") on node \"crc\" DevicePath \"\"" Oct 13 17:36:38 crc kubenswrapper[4720]: I1013 17:36:38.541759 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6cgkc\" (UniqueName: \"kubernetes.io/projected/f56e63f6-a476-4150-a661-07e988c98f28-kube-api-access-6cgkc\") on node \"crc\" DevicePath \"\"" Oct 13 17:36:39 crc kubenswrapper[4720]: I1013 17:36:39.056461 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f4b764a5aa97d0f87d08f09894533aff6b93246953653cc0ea80385c49l69l6" event={"ID":"f56e63f6-a476-4150-a661-07e988c98f28","Type":"ContainerDied","Data":"a4ff4c034d68bf73877a422e9bfdda73d16683d7e45851848980a8bcef05cd4d"} Oct 13 17:36:39 crc kubenswrapper[4720]: I1013 17:36:39.056529 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a4ff4c034d68bf73877a422e9bfdda73d16683d7e45851848980a8bcef05cd4d" Oct 13 17:36:39 crc kubenswrapper[4720]: I1013 17:36:39.056568 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/f4b764a5aa97d0f87d08f09894533aff6b93246953653cc0ea80385c49l69l6" Oct 13 17:36:42 crc kubenswrapper[4720]: I1013 17:36:42.129469 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-lwp58"] Oct 13 17:36:42 crc kubenswrapper[4720]: I1013 17:36:42.130666 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-lwp58" podUID="4e04c241-5f40-47d5-8c67-e8092a483089" containerName="controller-manager" containerID="cri-o://b65588c90735197c422709ea288feba94a226ea0e2752a9af9203b106c328fd1" gracePeriod=30 Oct 13 17:36:42 crc kubenswrapper[4720]: I1013 17:36:42.214375 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-9mm85"] Oct 13 17:36:42 crc kubenswrapper[4720]: I1013 17:36:42.214601 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9mm85" podUID="b5179459-d832-4419-96ce-44dd4f055e98" containerName="route-controller-manager" containerID="cri-o://431b55886192faf024aea2c290a237484e8569559655af6299b96304d1b7e7bb" gracePeriod=30 Oct 13 17:36:42 crc kubenswrapper[4720]: I1013 17:36:42.502835 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-lwp58" Oct 13 17:36:42 crc kubenswrapper[4720]: I1013 17:36:42.578768 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9mm85" Oct 13 17:36:42 crc kubenswrapper[4720]: I1013 17:36:42.616352 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xpzx\" (UniqueName: \"kubernetes.io/projected/4e04c241-5f40-47d5-8c67-e8092a483089-kube-api-access-9xpzx\") pod \"4e04c241-5f40-47d5-8c67-e8092a483089\" (UID: \"4e04c241-5f40-47d5-8c67-e8092a483089\") " Oct 13 17:36:42 crc kubenswrapper[4720]: I1013 17:36:42.616424 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4e04c241-5f40-47d5-8c67-e8092a483089-client-ca\") pod \"4e04c241-5f40-47d5-8c67-e8092a483089\" (UID: \"4e04c241-5f40-47d5-8c67-e8092a483089\") " Oct 13 17:36:42 crc kubenswrapper[4720]: I1013 17:36:42.616457 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e04c241-5f40-47d5-8c67-e8092a483089-config\") pod \"4e04c241-5f40-47d5-8c67-e8092a483089\" (UID: \"4e04c241-5f40-47d5-8c67-e8092a483089\") " Oct 13 17:36:42 crc kubenswrapper[4720]: I1013 17:36:42.616539 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4e04c241-5f40-47d5-8c67-e8092a483089-proxy-ca-bundles\") pod \"4e04c241-5f40-47d5-8c67-e8092a483089\" (UID: \"4e04c241-5f40-47d5-8c67-e8092a483089\") " Oct 13 17:36:42 crc kubenswrapper[4720]: I1013 17:36:42.616554 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e04c241-5f40-47d5-8c67-e8092a483089-serving-cert\") pod \"4e04c241-5f40-47d5-8c67-e8092a483089\" (UID: \"4e04c241-5f40-47d5-8c67-e8092a483089\") " Oct 13 17:36:42 crc kubenswrapper[4720]: I1013 17:36:42.618707 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e04c241-5f40-47d5-8c67-e8092a483089-client-ca" (OuterVolumeSpecName: "client-ca") pod "4e04c241-5f40-47d5-8c67-e8092a483089" (UID: "4e04c241-5f40-47d5-8c67-e8092a483089"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:36:42 crc kubenswrapper[4720]: I1013 17:36:42.618774 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e04c241-5f40-47d5-8c67-e8092a483089-config" (OuterVolumeSpecName: "config") pod "4e04c241-5f40-47d5-8c67-e8092a483089" (UID: "4e04c241-5f40-47d5-8c67-e8092a483089"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:36:42 crc kubenswrapper[4720]: I1013 17:36:42.618898 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e04c241-5f40-47d5-8c67-e8092a483089-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "4e04c241-5f40-47d5-8c67-e8092a483089" (UID: "4e04c241-5f40-47d5-8c67-e8092a483089"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:36:42 crc kubenswrapper[4720]: I1013 17:36:42.621686 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e04c241-5f40-47d5-8c67-e8092a483089-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4e04c241-5f40-47d5-8c67-e8092a483089" (UID: "4e04c241-5f40-47d5-8c67-e8092a483089"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:36:42 crc kubenswrapper[4720]: I1013 17:36:42.622629 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e04c241-5f40-47d5-8c67-e8092a483089-kube-api-access-9xpzx" (OuterVolumeSpecName: "kube-api-access-9xpzx") pod "4e04c241-5f40-47d5-8c67-e8092a483089" (UID: "4e04c241-5f40-47d5-8c67-e8092a483089"). InnerVolumeSpecName "kube-api-access-9xpzx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:36:42 crc kubenswrapper[4720]: I1013 17:36:42.717136 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b5179459-d832-4419-96ce-44dd4f055e98-client-ca\") pod \"b5179459-d832-4419-96ce-44dd4f055e98\" (UID: \"b5179459-d832-4419-96ce-44dd4f055e98\") " Oct 13 17:36:42 crc kubenswrapper[4720]: I1013 17:36:42.717304 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b5179459-d832-4419-96ce-44dd4f055e98-serving-cert\") pod \"b5179459-d832-4419-96ce-44dd4f055e98\" (UID: \"b5179459-d832-4419-96ce-44dd4f055e98\") " Oct 13 17:36:42 crc kubenswrapper[4720]: I1013 17:36:42.717358 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzzrp\" (UniqueName: \"kubernetes.io/projected/b5179459-d832-4419-96ce-44dd4f055e98-kube-api-access-rzzrp\") pod \"b5179459-d832-4419-96ce-44dd4f055e98\" (UID: \"b5179459-d832-4419-96ce-44dd4f055e98\") " Oct 13 17:36:42 crc kubenswrapper[4720]: I1013 17:36:42.717404 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5179459-d832-4419-96ce-44dd4f055e98-config\") pod \"b5179459-d832-4419-96ce-44dd4f055e98\" (UID: \"b5179459-d832-4419-96ce-44dd4f055e98\") " Oct 13 17:36:42 crc kubenswrapper[4720]: I1013 17:36:42.717626 4720 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4e04c241-5f40-47d5-8c67-e8092a483089-client-ca\") on node \"crc\" DevicePath \"\"" Oct 13 17:36:42 crc kubenswrapper[4720]: I1013 17:36:42.717643 4720 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e04c241-5f40-47d5-8c67-e8092a483089-config\") on node \"crc\" DevicePath \"\"" Oct 13 17:36:42 crc kubenswrapper[4720]: I1013 17:36:42.717651 4720 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e04c241-5f40-47d5-8c67-e8092a483089-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 13 17:36:42 crc kubenswrapper[4720]: I1013 17:36:42.717659 4720 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4e04c241-5f40-47d5-8c67-e8092a483089-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Oct 13 17:36:42 crc kubenswrapper[4720]: I1013 17:36:42.717671 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xpzx\" (UniqueName: \"kubernetes.io/projected/4e04c241-5f40-47d5-8c67-e8092a483089-kube-api-access-9xpzx\") on node \"crc\" DevicePath \"\"" Oct 13 17:36:42 crc kubenswrapper[4720]: I1013 17:36:42.718131 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5179459-d832-4419-96ce-44dd4f055e98-config" (OuterVolumeSpecName: "config") pod "b5179459-d832-4419-96ce-44dd4f055e98" (UID: 
"b5179459-d832-4419-96ce-44dd4f055e98"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:36:42 crc kubenswrapper[4720]: I1013 17:36:42.718162 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5179459-d832-4419-96ce-44dd4f055e98-client-ca" (OuterVolumeSpecName: "client-ca") pod "b5179459-d832-4419-96ce-44dd4f055e98" (UID: "b5179459-d832-4419-96ce-44dd4f055e98"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:36:42 crc kubenswrapper[4720]: I1013 17:36:42.722690 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5179459-d832-4419-96ce-44dd4f055e98-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b5179459-d832-4419-96ce-44dd4f055e98" (UID: "b5179459-d832-4419-96ce-44dd4f055e98"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:36:42 crc kubenswrapper[4720]: I1013 17:36:42.722902 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5179459-d832-4419-96ce-44dd4f055e98-kube-api-access-rzzrp" (OuterVolumeSpecName: "kube-api-access-rzzrp") pod "b5179459-d832-4419-96ce-44dd4f055e98" (UID: "b5179459-d832-4419-96ce-44dd4f055e98"). InnerVolumeSpecName "kube-api-access-rzzrp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:36:42 crc kubenswrapper[4720]: I1013 17:36:42.818749 4720 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b5179459-d832-4419-96ce-44dd4f055e98-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 13 17:36:42 crc kubenswrapper[4720]: I1013 17:36:42.818807 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rzzrp\" (UniqueName: \"kubernetes.io/projected/b5179459-d832-4419-96ce-44dd4f055e98-kube-api-access-rzzrp\") on node \"crc\" DevicePath \"\"" Oct 13 17:36:42 crc kubenswrapper[4720]: I1013 17:36:42.818818 4720 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5179459-d832-4419-96ce-44dd4f055e98-config\") on node \"crc\" DevicePath \"\"" Oct 13 17:36:42 crc kubenswrapper[4720]: I1013 17:36:42.818828 4720 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b5179459-d832-4419-96ce-44dd4f055e98-client-ca\") on node \"crc\" DevicePath \"\"" Oct 13 17:36:43 crc kubenswrapper[4720]: I1013 17:36:43.088967 4720 generic.go:334] "Generic (PLEG): container finished" podID="b5179459-d832-4419-96ce-44dd4f055e98" containerID="431b55886192faf024aea2c290a237484e8569559655af6299b96304d1b7e7bb" exitCode=0 Oct 13 17:36:43 crc kubenswrapper[4720]: I1013 17:36:43.089062 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9mm85" Oct 13 17:36:43 crc kubenswrapper[4720]: I1013 17:36:43.089078 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9mm85" event={"ID":"b5179459-d832-4419-96ce-44dd4f055e98","Type":"ContainerDied","Data":"431b55886192faf024aea2c290a237484e8569559655af6299b96304d1b7e7bb"} Oct 13 17:36:43 crc kubenswrapper[4720]: I1013 17:36:43.089109 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-9mm85" event={"ID":"b5179459-d832-4419-96ce-44dd4f055e98","Type":"ContainerDied","Data":"57d90ad53a9b92a3250ce3dced9722cc50815065a429ee1a039079c7aff76c29"} Oct 13 17:36:43 crc kubenswrapper[4720]: I1013 17:36:43.089157 4720 scope.go:117] "RemoveContainer" containerID="431b55886192faf024aea2c290a237484e8569559655af6299b96304d1b7e7bb" Oct 13 17:36:43 crc kubenswrapper[4720]: I1013 17:36:43.092741 4720 generic.go:334] "Generic (PLEG): container finished" podID="4e04c241-5f40-47d5-8c67-e8092a483089" containerID="b65588c90735197c422709ea288feba94a226ea0e2752a9af9203b106c328fd1" exitCode=0 Oct 13 17:36:43 crc kubenswrapper[4720]: I1013 17:36:43.092815 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-lwp58" event={"ID":"4e04c241-5f40-47d5-8c67-e8092a483089","Type":"ContainerDied","Data":"b65588c90735197c422709ea288feba94a226ea0e2752a9af9203b106c328fd1"} Oct 13 17:36:43 crc kubenswrapper[4720]: I1013 17:36:43.092863 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-lwp58" event={"ID":"4e04c241-5f40-47d5-8c67-e8092a483089","Type":"ContainerDied","Data":"f1669c13e5dfd672bbc672f667f0ff730dc37e0d117b4379c35825185c02dcda"} Oct 13 17:36:43 crc kubenswrapper[4720]: I1013 17:36:43.092954 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-lwp58" Oct 13 17:36:43 crc kubenswrapper[4720]: I1013 17:36:43.117507 4720 scope.go:117] "RemoveContainer" containerID="431b55886192faf024aea2c290a237484e8569559655af6299b96304d1b7e7bb" Oct 13 17:36:43 crc kubenswrapper[4720]: E1013 17:36:43.117987 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"431b55886192faf024aea2c290a237484e8569559655af6299b96304d1b7e7bb\": container with ID starting with 431b55886192faf024aea2c290a237484e8569559655af6299b96304d1b7e7bb not found: ID does not exist" containerID="431b55886192faf024aea2c290a237484e8569559655af6299b96304d1b7e7bb" Oct 13 17:36:43 crc kubenswrapper[4720]: I1013 17:36:43.118031 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"431b55886192faf024aea2c290a237484e8569559655af6299b96304d1b7e7bb"} err="failed to get container status \"431b55886192faf024aea2c290a237484e8569559655af6299b96304d1b7e7bb\": rpc error: code = NotFound desc = could not find container \"431b55886192faf024aea2c290a237484e8569559655af6299b96304d1b7e7bb\": container with ID starting with 431b55886192faf024aea2c290a237484e8569559655af6299b96304d1b7e7bb not found: ID does not exist" Oct 13 17:36:43 crc kubenswrapper[4720]: I1013 17:36:43.118054 4720 scope.go:117] "RemoveContainer" containerID="b65588c90735197c422709ea288feba94a226ea0e2752a9af9203b106c328fd1" Oct 13 17:36:43 crc kubenswrapper[4720]: I1013 17:36:43.137864 4720 scope.go:117] "RemoveContainer" containerID="b65588c90735197c422709ea288feba94a226ea0e2752a9af9203b106c328fd1" Oct 13 17:36:43 crc kubenswrapper[4720]: E1013 17:36:43.138338 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b65588c90735197c422709ea288feba94a226ea0e2752a9af9203b106c328fd1\": container with ID starting with b65588c90735197c422709ea288feba94a226ea0e2752a9af9203b106c328fd1 not found: ID does not exist" containerID="b65588c90735197c422709ea288feba94a226ea0e2752a9af9203b106c328fd1" Oct 13 17:36:43 crc kubenswrapper[4720]: I1013 17:36:43.138367 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b65588c90735197c422709ea288feba94a226ea0e2752a9af9203b106c328fd1"} err="failed to get container status \"b65588c90735197c422709ea288feba94a226ea0e2752a9af9203b106c328fd1\": rpc error: code = NotFound desc = could not find container \"b65588c90735197c422709ea288feba94a226ea0e2752a9af9203b106c328fd1\": container with ID starting with b65588c90735197c422709ea288feba94a226ea0e2752a9af9203b106c328fd1 not found: ID does not exist" Oct 13 17:36:43 crc kubenswrapper[4720]: I1013 17:36:43.144077 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-lwp58"] Oct 13 17:36:43 crc kubenswrapper[4720]: I1013 17:36:43.149990 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-lwp58"] Oct 13 17:36:43 crc kubenswrapper[4720]: I1013 17:36:43.159333 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-9mm85"] Oct 13 17:36:43 crc kubenswrapper[4720]: I1013 17:36:43.162264 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-9mm85"] Oct 13 17:36:43 crc 
kubenswrapper[4720]: I1013 17:36:43.175529 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e04c241-5f40-47d5-8c67-e8092a483089" path="/var/lib/kubelet/pods/4e04c241-5f40-47d5-8c67-e8092a483089/volumes" Oct 13 17:36:43 crc kubenswrapper[4720]: I1013 17:36:43.176043 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5179459-d832-4419-96ce-44dd4f055e98" path="/var/lib/kubelet/pods/b5179459-d832-4419-96ce-44dd4f055e98/volumes" Oct 13 17:36:43 crc kubenswrapper[4720]: I1013 17:36:43.743111 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86cf8dc84f-8pfhd"] Oct 13 17:36:43 crc kubenswrapper[4720]: E1013 17:36:43.743645 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f56e63f6-a476-4150-a661-07e988c98f28" containerName="pull" Oct 13 17:36:43 crc kubenswrapper[4720]: I1013 17:36:43.743746 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="f56e63f6-a476-4150-a661-07e988c98f28" containerName="pull" Oct 13 17:36:43 crc kubenswrapper[4720]: E1013 17:36:43.743827 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f56e63f6-a476-4150-a661-07e988c98f28" containerName="extract" Oct 13 17:36:43 crc kubenswrapper[4720]: I1013 17:36:43.743902 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="f56e63f6-a476-4150-a661-07e988c98f28" containerName="extract" Oct 13 17:36:43 crc kubenswrapper[4720]: E1013 17:36:43.743981 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e04c241-5f40-47d5-8c67-e8092a483089" containerName="controller-manager" Oct 13 17:36:43 crc kubenswrapper[4720]: I1013 17:36:43.744058 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e04c241-5f40-47d5-8c67-e8092a483089" containerName="controller-manager" Oct 13 17:36:43 crc kubenswrapper[4720]: E1013 17:36:43.744145 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f56e63f6-a476-4150-a661-07e988c98f28" containerName="util" Oct 13 17:36:43 crc kubenswrapper[4720]: I1013 17:36:43.744258 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="f56e63f6-a476-4150-a661-07e988c98f28" containerName="util" Oct 13 17:36:43 crc kubenswrapper[4720]: E1013 17:36:43.744353 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5179459-d832-4419-96ce-44dd4f055e98" containerName="route-controller-manager" Oct 13 17:36:43 crc kubenswrapper[4720]: I1013 17:36:43.744429 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5179459-d832-4419-96ce-44dd4f055e98" containerName="route-controller-manager" Oct 13 17:36:43 crc kubenswrapper[4720]: I1013 17:36:43.744626 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="f56e63f6-a476-4150-a661-07e988c98f28" containerName="extract" Oct 13 17:36:43 crc kubenswrapper[4720]: I1013 17:36:43.744716 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e04c241-5f40-47d5-8c67-e8092a483089" containerName="controller-manager" Oct 13 17:36:43 crc kubenswrapper[4720]: I1013 17:36:43.744801 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5179459-d832-4419-96ce-44dd4f055e98" containerName="route-controller-manager" Oct 13 17:36:43 crc kubenswrapper[4720]: I1013 17:36:43.745323 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86cf8dc84f-8pfhd" Oct 13 17:36:43 crc kubenswrapper[4720]: I1013 17:36:43.749370 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Oct 13 17:36:43 crc kubenswrapper[4720]: I1013 17:36:43.749525 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Oct 13 17:36:43 crc kubenswrapper[4720]: I1013 17:36:43.749659 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Oct 13 17:36:43 crc kubenswrapper[4720]: I1013 17:36:43.749749 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Oct 13 17:36:43 crc kubenswrapper[4720]: I1013 17:36:43.749869 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Oct 13 17:36:43 crc kubenswrapper[4720]: I1013 17:36:43.753857 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Oct 13 17:36:43 crc kubenswrapper[4720]: I1013 17:36:43.773322 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86cf8dc84f-8pfhd"] Oct 13 17:36:43 crc kubenswrapper[4720]: I1013 17:36:43.776147 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-b966cf55f-xjw27"] Oct 13 17:36:43 crc kubenswrapper[4720]: I1013 17:36:43.776811 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-b966cf55f-xjw27" Oct 13 17:36:43 crc kubenswrapper[4720]: I1013 17:36:43.788013 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Oct 13 17:36:43 crc kubenswrapper[4720]: I1013 17:36:43.789540 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Oct 13 17:36:43 crc kubenswrapper[4720]: I1013 17:36:43.789573 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Oct 13 17:36:43 crc kubenswrapper[4720]: I1013 17:36:43.789887 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Oct 13 17:36:43 crc kubenswrapper[4720]: I1013 17:36:43.790055 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Oct 13 17:36:43 crc kubenswrapper[4720]: I1013 17:36:43.790447 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Oct 13 17:36:43 crc kubenswrapper[4720]: I1013 17:36:43.814553 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Oct 13 17:36:43 crc kubenswrapper[4720]: I1013 17:36:43.815169 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-b966cf55f-xjw27"] Oct 13 17:36:43 crc kubenswrapper[4720]: I1013 17:36:43.932641 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/94dc372e-be40-4a26-9d76-c7a8ac42c1bc-serving-cert\") pod \"route-controller-manager-86cf8dc84f-8pfhd\" (UID: \"94dc372e-be40-4a26-9d76-c7a8ac42c1bc\") " pod="openshift-route-controller-manager/route-controller-manager-86cf8dc84f-8pfhd" Oct 13 17:36:43 crc kubenswrapper[4720]: I1013 17:36:43.933021 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2c898b0-e30e-4ed5-a6b2-4ae1d4f521be-config\") pod \"controller-manager-b966cf55f-xjw27\" (UID: \"f2c898b0-e30e-4ed5-a6b2-4ae1d4f521be\") " pod="openshift-controller-manager/controller-manager-b966cf55f-xjw27" Oct 13 17:36:43 crc kubenswrapper[4720]: I1013 17:36:43.933153 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2g7s6\" (UniqueName: \"kubernetes.io/projected/94dc372e-be40-4a26-9d76-c7a8ac42c1bc-kube-api-access-2g7s6\") pod \"route-controller-manager-86cf8dc84f-8pfhd\" (UID: \"94dc372e-be40-4a26-9d76-c7a8ac42c1bc\") " pod="openshift-route-controller-manager/route-controller-manager-86cf8dc84f-8pfhd" Oct 13 17:36:43 crc kubenswrapper[4720]: I1013 17:36:43.933363 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94dc372e-be40-4a26-9d76-c7a8ac42c1bc-config\") pod \"route-controller-manager-86cf8dc84f-8pfhd\" (UID: \"94dc372e-be40-4a26-9d76-c7a8ac42c1bc\") " pod="openshift-route-controller-manager/route-controller-manager-86cf8dc84f-8pfhd" Oct 13 17:36:43 crc kubenswrapper[4720]: I1013 17:36:43.933508 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f2c898b0-e30e-4ed5-a6b2-4ae1d4f521be-client-ca\") pod \"controller-manager-b966cf55f-xjw27\" (UID: \"f2c898b0-e30e-4ed5-a6b2-4ae1d4f521be\") " pod="openshift-controller-manager/controller-manager-b966cf55f-xjw27" Oct 13 17:36:43 crc kubenswrapper[4720]: I1013 17:36:43.933707 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f2c898b0-e30e-4ed5-a6b2-4ae1d4f521be-proxy-ca-bundles\") pod \"controller-manager-b966cf55f-xjw27\" (UID: \"f2c898b0-e30e-4ed5-a6b2-4ae1d4f521be\") " pod="openshift-controller-manager/controller-manager-b966cf55f-xjw27" Oct 13 17:36:43 crc kubenswrapper[4720]: I1013 17:36:43.933775 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/94dc372e-be40-4a26-9d76-c7a8ac42c1bc-client-ca\") pod \"route-controller-manager-86cf8dc84f-8pfhd\" (UID: \"94dc372e-be40-4a26-9d76-c7a8ac42c1bc\") " pod="openshift-route-controller-manager/route-controller-manager-86cf8dc84f-8pfhd" Oct 13 17:36:43 crc kubenswrapper[4720]: I1013 17:36:43.933806 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzkz2\" (UniqueName: \"kubernetes.io/projected/f2c898b0-e30e-4ed5-a6b2-4ae1d4f521be-kube-api-access-xzkz2\") pod \"controller-manager-b966cf55f-xjw27\" (UID: \"f2c898b0-e30e-4ed5-a6b2-4ae1d4f521be\") " pod="openshift-controller-manager/controller-manager-b966cf55f-xjw27" Oct 13 17:36:43 crc kubenswrapper[4720]: I1013 17:36:43.933882 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/f2c898b0-e30e-4ed5-a6b2-4ae1d4f521be-serving-cert\") pod \"controller-manager-b966cf55f-xjw27\" (UID: \"f2c898b0-e30e-4ed5-a6b2-4ae1d4f521be\") " pod="openshift-controller-manager/controller-manager-b966cf55f-xjw27" Oct 13 17:36:44 crc kubenswrapper[4720]: I1013 17:36:44.035046 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2g7s6\" (UniqueName: \"kubernetes.io/projected/94dc372e-be40-4a26-9d76-c7a8ac42c1bc-kube-api-access-2g7s6\") pod \"route-controller-manager-86cf8dc84f-8pfhd\" (UID: \"94dc372e-be40-4a26-9d76-c7a8ac42c1bc\") " pod="openshift-route-controller-manager/route-controller-manager-86cf8dc84f-8pfhd" Oct 13 17:36:44 crc kubenswrapper[4720]: I1013 17:36:44.035404 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2c898b0-e30e-4ed5-a6b2-4ae1d4f521be-config\") pod \"controller-manager-b966cf55f-xjw27\" (UID: \"f2c898b0-e30e-4ed5-a6b2-4ae1d4f521be\") " pod="openshift-controller-manager/controller-manager-b966cf55f-xjw27" Oct 13 17:36:44 crc kubenswrapper[4720]: I1013 17:36:44.035551 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94dc372e-be40-4a26-9d76-c7a8ac42c1bc-config\") pod \"route-controller-manager-86cf8dc84f-8pfhd\" (UID: \"94dc372e-be40-4a26-9d76-c7a8ac42c1bc\") " pod="openshift-route-controller-manager/route-controller-manager-86cf8dc84f-8pfhd" Oct 13 17:36:44 crc kubenswrapper[4720]: I1013 17:36:44.035705 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f2c898b0-e30e-4ed5-a6b2-4ae1d4f521be-client-ca\") pod \"controller-manager-b966cf55f-xjw27\" (UID: \"f2c898b0-e30e-4ed5-a6b2-4ae1d4f521be\") " pod="openshift-controller-manager/controller-manager-b966cf55f-xjw27" Oct 13 17:36:44 crc kubenswrapper[4720]: I1013 17:36:44.035852 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f2c898b0-e30e-4ed5-a6b2-4ae1d4f521be-proxy-ca-bundles\") pod \"controller-manager-b966cf55f-xjw27\" (UID: \"f2c898b0-e30e-4ed5-a6b2-4ae1d4f521be\") " pod="openshift-controller-manager/controller-manager-b966cf55f-xjw27" Oct 13 17:36:44 crc kubenswrapper[4720]: I1013 17:36:44.035992 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/94dc372e-be40-4a26-9d76-c7a8ac42c1bc-client-ca\") pod \"route-controller-manager-86cf8dc84f-8pfhd\" (UID: \"94dc372e-be40-4a26-9d76-c7a8ac42c1bc\") " pod="openshift-route-controller-manager/route-controller-manager-86cf8dc84f-8pfhd" Oct 13 17:36:44 crc kubenswrapper[4720]: I1013 17:36:44.036136 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzkz2\" (UniqueName: \"kubernetes.io/projected/f2c898b0-e30e-4ed5-a6b2-4ae1d4f521be-kube-api-access-xzkz2\") pod \"controller-manager-b966cf55f-xjw27\" (UID: \"f2c898b0-e30e-4ed5-a6b2-4ae1d4f521be\") " pod="openshift-controller-manager/controller-manager-b966cf55f-xjw27" Oct 13 17:36:44 crc kubenswrapper[4720]: I1013 17:36:44.036298 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2c898b0-e30e-4ed5-a6b2-4ae1d4f521be-serving-cert\") pod \"controller-manager-b966cf55f-xjw27\" (UID: 
\"f2c898b0-e30e-4ed5-a6b2-4ae1d4f521be\") " pod="openshift-controller-manager/controller-manager-b966cf55f-xjw27" Oct 13 17:36:44 crc kubenswrapper[4720]: I1013 17:36:44.036436 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/94dc372e-be40-4a26-9d76-c7a8ac42c1bc-serving-cert\") pod \"route-controller-manager-86cf8dc84f-8pfhd\" (UID: \"94dc372e-be40-4a26-9d76-c7a8ac42c1bc\") " pod="openshift-route-controller-manager/route-controller-manager-86cf8dc84f-8pfhd" Oct 13 17:36:44 crc kubenswrapper[4720]: I1013 17:36:44.036700 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2c898b0-e30e-4ed5-a6b2-4ae1d4f521be-config\") pod \"controller-manager-b966cf55f-xjw27\" (UID: \"f2c898b0-e30e-4ed5-a6b2-4ae1d4f521be\") " pod="openshift-controller-manager/controller-manager-b966cf55f-xjw27" Oct 13 17:36:44 crc kubenswrapper[4720]: I1013 17:36:44.036881 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/94dc372e-be40-4a26-9d76-c7a8ac42c1bc-client-ca\") pod \"route-controller-manager-86cf8dc84f-8pfhd\" (UID: \"94dc372e-be40-4a26-9d76-c7a8ac42c1bc\") " pod="openshift-route-controller-manager/route-controller-manager-86cf8dc84f-8pfhd" Oct 13 17:36:44 crc kubenswrapper[4720]: I1013 17:36:44.036887 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f2c898b0-e30e-4ed5-a6b2-4ae1d4f521be-client-ca\") pod \"controller-manager-b966cf55f-xjw27\" (UID: \"f2c898b0-e30e-4ed5-a6b2-4ae1d4f521be\") " pod="openshift-controller-manager/controller-manager-b966cf55f-xjw27" Oct 13 17:36:44 crc kubenswrapper[4720]: I1013 17:36:44.037055 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94dc372e-be40-4a26-9d76-c7a8ac42c1bc-config\") pod \"route-controller-manager-86cf8dc84f-8pfhd\" (UID: \"94dc372e-be40-4a26-9d76-c7a8ac42c1bc\") " pod="openshift-route-controller-manager/route-controller-manager-86cf8dc84f-8pfhd" Oct 13 17:36:44 crc kubenswrapper[4720]: I1013 17:36:44.037207 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f2c898b0-e30e-4ed5-a6b2-4ae1d4f521be-proxy-ca-bundles\") pod \"controller-manager-b966cf55f-xjw27\" (UID: \"f2c898b0-e30e-4ed5-a6b2-4ae1d4f521be\") " pod="openshift-controller-manager/controller-manager-b966cf55f-xjw27" Oct 13 17:36:44 crc kubenswrapper[4720]: I1013 17:36:44.042755 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/94dc372e-be40-4a26-9d76-c7a8ac42c1bc-serving-cert\") pod \"route-controller-manager-86cf8dc84f-8pfhd\" (UID: \"94dc372e-be40-4a26-9d76-c7a8ac42c1bc\") " pod="openshift-route-controller-manager/route-controller-manager-86cf8dc84f-8pfhd" Oct 13 17:36:44 crc kubenswrapper[4720]: I1013 17:36:44.044876 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2c898b0-e30e-4ed5-a6b2-4ae1d4f521be-serving-cert\") pod \"controller-manager-b966cf55f-xjw27\" (UID: \"f2c898b0-e30e-4ed5-a6b2-4ae1d4f521be\") " pod="openshift-controller-manager/controller-manager-b966cf55f-xjw27" Oct 13 17:36:44 crc kubenswrapper[4720]: I1013 17:36:44.051292 4720 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-xzkz2\" (UniqueName: \"kubernetes.io/projected/f2c898b0-e30e-4ed5-a6b2-4ae1d4f521be-kube-api-access-xzkz2\") pod \"controller-manager-b966cf55f-xjw27\" (UID: \"f2c898b0-e30e-4ed5-a6b2-4ae1d4f521be\") " pod="openshift-controller-manager/controller-manager-b966cf55f-xjw27" Oct 13 17:36:44 crc kubenswrapper[4720]: I1013 17:36:44.051664 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2g7s6\" (UniqueName: \"kubernetes.io/projected/94dc372e-be40-4a26-9d76-c7a8ac42c1bc-kube-api-access-2g7s6\") pod \"route-controller-manager-86cf8dc84f-8pfhd\" (UID: \"94dc372e-be40-4a26-9d76-c7a8ac42c1bc\") " pod="openshift-route-controller-manager/route-controller-manager-86cf8dc84f-8pfhd" Oct 13 17:36:44 crc kubenswrapper[4720]: I1013 17:36:44.065505 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86cf8dc84f-8pfhd" Oct 13 17:36:44 crc kubenswrapper[4720]: I1013 17:36:44.116677 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-b966cf55f-xjw27" Oct 13 17:36:44 crc kubenswrapper[4720]: I1013 17:36:44.509218 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86cf8dc84f-8pfhd"] Oct 13 17:36:44 crc kubenswrapper[4720]: I1013 17:36:44.584440 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-b966cf55f-xjw27"] Oct 13 17:36:44 crc kubenswrapper[4720]: W1013 17:36:44.607862 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf2c898b0_e30e_4ed5_a6b2_4ae1d4f521be.slice/crio-91316f2e712b923cf05e294ed09401ebed8ac30a14c71fd93b808d13f543a467 WatchSource:0}: Error finding container 91316f2e712b923cf05e294ed09401ebed8ac30a14c71fd93b808d13f543a467: Status 404 returned error can't find the container with id 91316f2e712b923cf05e294ed09401ebed8ac30a14c71fd93b808d13f543a467 Oct 13 17:36:45 crc kubenswrapper[4720]: I1013 17:36:45.111137 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-b966cf55f-xjw27" event={"ID":"f2c898b0-e30e-4ed5-a6b2-4ae1d4f521be","Type":"ContainerStarted","Data":"e6bb8a5473f070e67c4c7d08fd91f55cbf64a02943d94cd537984202152b4431"} Oct 13 17:36:45 crc kubenswrapper[4720]: I1013 17:36:45.111689 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-b966cf55f-xjw27" event={"ID":"f2c898b0-e30e-4ed5-a6b2-4ae1d4f521be","Type":"ContainerStarted","Data":"91316f2e712b923cf05e294ed09401ebed8ac30a14c71fd93b808d13f543a467"} Oct 13 17:36:45 crc kubenswrapper[4720]: I1013 17:36:45.111813 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-b966cf55f-xjw27" Oct 13 17:36:45 crc kubenswrapper[4720]: I1013 17:36:45.112871 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86cf8dc84f-8pfhd" event={"ID":"94dc372e-be40-4a26-9d76-c7a8ac42c1bc","Type":"ContainerStarted","Data":"f49d073eb52ac9b6b364588dfda0ed01f02470b2456ad8e35be403808890baf3"} Oct 13 17:36:45 crc kubenswrapper[4720]: I1013 17:36:45.112973 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86cf8dc84f-8pfhd" 
event={"ID":"94dc372e-be40-4a26-9d76-c7a8ac42c1bc","Type":"ContainerStarted","Data":"61015b87ecbf4ca63cd34bea51bf8d49d22ea99f6c9710e00671ea5369b49a11"} Oct 13 17:36:45 crc kubenswrapper[4720]: I1013 17:36:45.113122 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-86cf8dc84f-8pfhd" Oct 13 17:36:45 crc kubenswrapper[4720]: I1013 17:36:45.117799 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-b966cf55f-xjw27" Oct 13 17:36:45 crc kubenswrapper[4720]: I1013 17:36:45.125704 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-b966cf55f-xjw27" podStartSLOduration=3.125687052 podStartE2EDuration="3.125687052s" podCreationTimestamp="2025-10-13 17:36:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:36:45.12445476 +0000 UTC m=+750.581704892" watchObservedRunningTime="2025-10-13 17:36:45.125687052 +0000 UTC m=+750.582937174" Oct 13 17:36:45 crc kubenswrapper[4720]: I1013 17:36:45.142736 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-86cf8dc84f-8pfhd" podStartSLOduration=2.142722419 podStartE2EDuration="2.142722419s" podCreationTimestamp="2025-10-13 17:36:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:36:45.141141648 +0000 UTC m=+750.598391780" watchObservedRunningTime="2025-10-13 17:36:45.142722419 +0000 UTC m=+750.599972551" Oct 13 17:36:45 crc kubenswrapper[4720]: I1013 17:36:45.213075 4720 patch_prober.go:28] interesting pod/machine-config-daemon-htwnl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 17:36:45 crc kubenswrapper[4720]: I1013 17:36:45.213337 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 17:36:45 crc kubenswrapper[4720]: I1013 17:36:45.213450 4720 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" Oct 13 17:36:45 crc kubenswrapper[4720]: I1013 17:36:45.214015 4720 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"08f406006acd7f2a5ccd32367b83e6ce328ee80787fc6b3f0206a4c41af2f48b"} pod="openshift-machine-config-operator/machine-config-daemon-htwnl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 13 17:36:45 crc kubenswrapper[4720]: I1013 17:36:45.214127 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" containerName="machine-config-daemon" containerID="cri-o://08f406006acd7f2a5ccd32367b83e6ce328ee80787fc6b3f0206a4c41af2f48b" gracePeriod=600 
Oct 13 17:36:45 crc kubenswrapper[4720]: I1013 17:36:45.220089 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-86cf8dc84f-8pfhd" Oct 13 17:36:45 crc kubenswrapper[4720]: I1013 17:36:45.379666 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-c9c874ff7-8qrr8"] Oct 13 17:36:45 crc kubenswrapper[4720]: I1013 17:36:45.380881 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-c9c874ff7-8qrr8" Oct 13 17:36:45 crc kubenswrapper[4720]: I1013 17:36:45.389449 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-2fk2d" Oct 13 17:36:45 crc kubenswrapper[4720]: I1013 17:36:45.416200 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-c9c874ff7-8qrr8"] Oct 13 17:36:45 crc kubenswrapper[4720]: I1013 17:36:45.556884 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ppnb\" (UniqueName: \"kubernetes.io/projected/2a1eb7a4-db4c-4029-9320-c447d9f1c69c-kube-api-access-5ppnb\") pod \"openstack-operator-controller-operator-c9c874ff7-8qrr8\" (UID: \"2a1eb7a4-db4c-4029-9320-c447d9f1c69c\") " pod="openstack-operators/openstack-operator-controller-operator-c9c874ff7-8qrr8" Oct 13 17:36:45 crc kubenswrapper[4720]: I1013 17:36:45.658886 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ppnb\" (UniqueName: \"kubernetes.io/projected/2a1eb7a4-db4c-4029-9320-c447d9f1c69c-kube-api-access-5ppnb\") pod \"openstack-operator-controller-operator-c9c874ff7-8qrr8\" (UID: \"2a1eb7a4-db4c-4029-9320-c447d9f1c69c\") " pod="openstack-operators/openstack-operator-controller-operator-c9c874ff7-8qrr8" Oct 13 17:36:45 crc kubenswrapper[4720]: I1013 17:36:45.691557 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ppnb\" (UniqueName: \"kubernetes.io/projected/2a1eb7a4-db4c-4029-9320-c447d9f1c69c-kube-api-access-5ppnb\") pod \"openstack-operator-controller-operator-c9c874ff7-8qrr8\" (UID: \"2a1eb7a4-db4c-4029-9320-c447d9f1c69c\") " pod="openstack-operators/openstack-operator-controller-operator-c9c874ff7-8qrr8" Oct 13 17:36:45 crc kubenswrapper[4720]: I1013 17:36:45.707127 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-c9c874ff7-8qrr8" Oct 13 17:36:46 crc kubenswrapper[4720]: W1013 17:36:46.121063 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a1eb7a4_db4c_4029_9320_c447d9f1c69c.slice/crio-d79df47b7c155b1207853752a279a0087e5b4f3abfc58777e6605f88cba89650 WatchSource:0}: Error finding container d79df47b7c155b1207853752a279a0087e5b4f3abfc58777e6605f88cba89650: Status 404 returned error can't find the container with id d79df47b7c155b1207853752a279a0087e5b4f3abfc58777e6605f88cba89650 Oct 13 17:36:46 crc kubenswrapper[4720]: I1013 17:36:46.121051 4720 generic.go:334] "Generic (PLEG): container finished" podID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" containerID="08f406006acd7f2a5ccd32367b83e6ce328ee80787fc6b3f0206a4c41af2f48b" exitCode=0 Oct 13 17:36:46 crc kubenswrapper[4720]: I1013 17:36:46.121087 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" event={"ID":"ce442c80-fcde-4b79-b6f9-f8f25771dfd4","Type":"ContainerDied","Data":"08f406006acd7f2a5ccd32367b83e6ce328ee80787fc6b3f0206a4c41af2f48b"} Oct 13 17:36:46 crc kubenswrapper[4720]: I1013 17:36:46.121642 4720 scope.go:117] "RemoveContainer" containerID="0d9e0d78254b1372630a7f56a5e019b7d8881a622a4a4292a4394c0f2b9be45a" Oct 13 17:36:46 crc kubenswrapper[4720]: I1013 17:36:46.121505 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" event={"ID":"ce442c80-fcde-4b79-b6f9-f8f25771dfd4","Type":"ContainerStarted","Data":"4d6ae9650e2a4d9303f0ed0df57f14a865dd0defb52c4262f76ce0d77b3d80c5"} Oct 13 17:36:46 crc kubenswrapper[4720]: I1013 17:36:46.123424 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-c9c874ff7-8qrr8"] Oct 13 17:36:47 crc kubenswrapper[4720]: I1013 17:36:47.129620 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-c9c874ff7-8qrr8" event={"ID":"2a1eb7a4-db4c-4029-9320-c447d9f1c69c","Type":"ContainerStarted","Data":"d79df47b7c155b1207853752a279a0087e5b4f3abfc58777e6605f88cba89650"} Oct 13 17:36:49 crc kubenswrapper[4720]: I1013 17:36:49.156270 4720 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Oct 13 17:36:51 crc kubenswrapper[4720]: I1013 17:36:51.201761 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-c9c874ff7-8qrr8" event={"ID":"2a1eb7a4-db4c-4029-9320-c447d9f1c69c","Type":"ContainerStarted","Data":"14ee5bc7c3f89f61e468227736fa0eb466dd4a186d216acc85289afbec25f6ae"} Oct 13 17:36:52 crc kubenswrapper[4720]: I1013 17:36:52.551901 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5cgfp"] Oct 13 17:36:52 crc kubenswrapper[4720]: I1013 17:36:52.553387 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5cgfp" Oct 13 17:36:52 crc kubenswrapper[4720]: I1013 17:36:52.568926 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5cgfp"] Oct 13 17:36:52 crc kubenswrapper[4720]: I1013 17:36:52.707695 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nn4w\" (UniqueName: \"kubernetes.io/projected/f54b38eb-5a4a-483d-a621-248d19a8d07d-kube-api-access-7nn4w\") pod \"certified-operators-5cgfp\" (UID: \"f54b38eb-5a4a-483d-a621-248d19a8d07d\") " pod="openshift-marketplace/certified-operators-5cgfp" Oct 13 17:36:52 crc kubenswrapper[4720]: I1013 17:36:52.707871 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f54b38eb-5a4a-483d-a621-248d19a8d07d-catalog-content\") pod \"certified-operators-5cgfp\" (UID: \"f54b38eb-5a4a-483d-a621-248d19a8d07d\") " pod="openshift-marketplace/certified-operators-5cgfp" Oct 13 17:36:52 crc kubenswrapper[4720]: I1013 17:36:52.708015 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f54b38eb-5a4a-483d-a621-248d19a8d07d-utilities\") pod \"certified-operators-5cgfp\" (UID: \"f54b38eb-5a4a-483d-a621-248d19a8d07d\") " pod="openshift-marketplace/certified-operators-5cgfp" Oct 13 17:36:52 crc kubenswrapper[4720]: I1013 17:36:52.809259 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f54b38eb-5a4a-483d-a621-248d19a8d07d-utilities\") pod \"certified-operators-5cgfp\" (UID: \"f54b38eb-5a4a-483d-a621-248d19a8d07d\") " pod="openshift-marketplace/certified-operators-5cgfp" Oct 13 17:36:52 crc kubenswrapper[4720]: I1013 17:36:52.809347 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nn4w\" (UniqueName: \"kubernetes.io/projected/f54b38eb-5a4a-483d-a621-248d19a8d07d-kube-api-access-7nn4w\") pod \"certified-operators-5cgfp\" (UID: \"f54b38eb-5a4a-483d-a621-248d19a8d07d\") " pod="openshift-marketplace/certified-operators-5cgfp" Oct 13 17:36:52 crc kubenswrapper[4720]: I1013 17:36:52.809440 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f54b38eb-5a4a-483d-a621-248d19a8d07d-catalog-content\") pod \"certified-operators-5cgfp\" (UID: \"f54b38eb-5a4a-483d-a621-248d19a8d07d\") " pod="openshift-marketplace/certified-operators-5cgfp" Oct 13 17:36:52 crc kubenswrapper[4720]: I1013 17:36:52.810276 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f54b38eb-5a4a-483d-a621-248d19a8d07d-catalog-content\") pod \"certified-operators-5cgfp\" (UID: \"f54b38eb-5a4a-483d-a621-248d19a8d07d\") " pod="openshift-marketplace/certified-operators-5cgfp" Oct 13 17:36:52 crc kubenswrapper[4720]: I1013 17:36:52.810817 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f54b38eb-5a4a-483d-a621-248d19a8d07d-utilities\") pod \"certified-operators-5cgfp\" (UID: \"f54b38eb-5a4a-483d-a621-248d19a8d07d\") " pod="openshift-marketplace/certified-operators-5cgfp" Oct 13 17:36:52 crc kubenswrapper[4720]: I1013 17:36:52.846010 4720 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-7nn4w\" (UniqueName: \"kubernetes.io/projected/f54b38eb-5a4a-483d-a621-248d19a8d07d-kube-api-access-7nn4w\") pod \"certified-operators-5cgfp\" (UID: \"f54b38eb-5a4a-483d-a621-248d19a8d07d\") " pod="openshift-marketplace/certified-operators-5cgfp" Oct 13 17:36:52 crc kubenswrapper[4720]: I1013 17:36:52.878944 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5cgfp" Oct 13 17:36:53 crc kubenswrapper[4720]: I1013 17:36:53.228673 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-c9c874ff7-8qrr8" event={"ID":"2a1eb7a4-db4c-4029-9320-c447d9f1c69c","Type":"ContainerStarted","Data":"dfa9f7d9949b7af847338cb0120aad7b08ae261a2890479b4cebad62ab6b0175"} Oct 13 17:36:53 crc kubenswrapper[4720]: I1013 17:36:53.230120 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-c9c874ff7-8qrr8" Oct 13 17:36:53 crc kubenswrapper[4720]: I1013 17:36:53.265832 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-c9c874ff7-8qrr8" podStartSLOduration=2.141306985 podStartE2EDuration="8.265818667s" podCreationTimestamp="2025-10-13 17:36:45 +0000 UTC" firstStartedPulling="2025-10-13 17:36:46.124828746 +0000 UTC m=+751.582078908" lastFinishedPulling="2025-10-13 17:36:52.249340458 +0000 UTC m=+757.706590590" observedRunningTime="2025-10-13 17:36:53.263022595 +0000 UTC m=+758.720272737" watchObservedRunningTime="2025-10-13 17:36:53.265818667 +0000 UTC m=+758.723068789" Oct 13 17:36:53 crc kubenswrapper[4720]: I1013 17:36:53.415395 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5cgfp"] Oct 13 17:36:53 crc kubenswrapper[4720]: W1013 17:36:53.431603 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf54b38eb_5a4a_483d_a621_248d19a8d07d.slice/crio-de1d0179992b82ee45f158d6bb4eed809ef2baa6a7c3450738593a64048100d1 WatchSource:0}: Error finding container de1d0179992b82ee45f158d6bb4eed809ef2baa6a7c3450738593a64048100d1: Status 404 returned error can't find the container with id de1d0179992b82ee45f158d6bb4eed809ef2baa6a7c3450738593a64048100d1 Oct 13 17:36:54 crc kubenswrapper[4720]: I1013 17:36:54.238753 4720 generic.go:334] "Generic (PLEG): container finished" podID="f54b38eb-5a4a-483d-a621-248d19a8d07d" containerID="f0145221323f32fb4ff6d7c8c0c135c485b2b7f7428276c965b8b133705d78a6" exitCode=0 Oct 13 17:36:54 crc kubenswrapper[4720]: I1013 17:36:54.238926 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5cgfp" event={"ID":"f54b38eb-5a4a-483d-a621-248d19a8d07d","Type":"ContainerDied","Data":"f0145221323f32fb4ff6d7c8c0c135c485b2b7f7428276c965b8b133705d78a6"} Oct 13 17:36:54 crc kubenswrapper[4720]: I1013 17:36:54.239111 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5cgfp" event={"ID":"f54b38eb-5a4a-483d-a621-248d19a8d07d","Type":"ContainerStarted","Data":"de1d0179992b82ee45f158d6bb4eed809ef2baa6a7c3450738593a64048100d1"} Oct 13 17:36:55 crc kubenswrapper[4720]: I1013 17:36:55.249941 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5cgfp" 
event={"ID":"f54b38eb-5a4a-483d-a621-248d19a8d07d","Type":"ContainerStarted","Data":"98ff0c94a8a9345b6b54c148b6e8c2eb711abe4ab8da98c458b46e9832e6ae3f"} Oct 13 17:36:55 crc kubenswrapper[4720]: I1013 17:36:55.253558 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-c9c874ff7-8qrr8" Oct 13 17:36:56 crc kubenswrapper[4720]: I1013 17:36:56.256955 4720 generic.go:334] "Generic (PLEG): container finished" podID="f54b38eb-5a4a-483d-a621-248d19a8d07d" containerID="98ff0c94a8a9345b6b54c148b6e8c2eb711abe4ab8da98c458b46e9832e6ae3f" exitCode=0 Oct 13 17:36:56 crc kubenswrapper[4720]: I1013 17:36:56.257048 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5cgfp" event={"ID":"f54b38eb-5a4a-483d-a621-248d19a8d07d","Type":"ContainerDied","Data":"98ff0c94a8a9345b6b54c148b6e8c2eb711abe4ab8da98c458b46e9832e6ae3f"} Oct 13 17:36:56 crc kubenswrapper[4720]: I1013 17:36:56.257105 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5cgfp" event={"ID":"f54b38eb-5a4a-483d-a621-248d19a8d07d","Type":"ContainerStarted","Data":"4c81adf364215f6c9e250254acf8d12c6adcb0a539c6c7d5ca022e042121744f"} Oct 13 17:36:56 crc kubenswrapper[4720]: I1013 17:36:56.275512 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5cgfp" podStartSLOduration=2.5331981839999997 podStartE2EDuration="4.275493953s" podCreationTimestamp="2025-10-13 17:36:52 +0000 UTC" firstStartedPulling="2025-10-13 17:36:54.241656102 +0000 UTC m=+759.698906234" lastFinishedPulling="2025-10-13 17:36:55.983951831 +0000 UTC m=+761.441202003" observedRunningTime="2025-10-13 17:36:56.272204199 +0000 UTC m=+761.729454321" watchObservedRunningTime="2025-10-13 17:36:56.275493953 +0000 UTC m=+761.732744085" Oct 13 17:37:02 crc kubenswrapper[4720]: I1013 17:37:02.879911 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5cgfp" Oct 13 17:37:02 crc kubenswrapper[4720]: I1013 17:37:02.881470 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5cgfp" Oct 13 17:37:02 crc kubenswrapper[4720]: I1013 17:37:02.941857 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5cgfp" Oct 13 17:37:03 crc kubenswrapper[4720]: I1013 17:37:03.340694 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5cgfp" Oct 13 17:37:05 crc kubenswrapper[4720]: I1013 17:37:05.343351 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5cgfp"] Oct 13 17:37:06 crc kubenswrapper[4720]: I1013 17:37:06.325244 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5cgfp" podUID="f54b38eb-5a4a-483d-a621-248d19a8d07d" containerName="registry-server" containerID="cri-o://4c81adf364215f6c9e250254acf8d12c6adcb0a539c6c7d5ca022e042121744f" gracePeriod=2 Oct 13 17:37:06 crc kubenswrapper[4720]: I1013 17:37:06.770892 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5cgfp" Oct 13 17:37:06 crc kubenswrapper[4720]: I1013 17:37:06.934985 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f54b38eb-5a4a-483d-a621-248d19a8d07d-catalog-content\") pod \"f54b38eb-5a4a-483d-a621-248d19a8d07d\" (UID: \"f54b38eb-5a4a-483d-a621-248d19a8d07d\") " Oct 13 17:37:06 crc kubenswrapper[4720]: I1013 17:37:06.935093 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f54b38eb-5a4a-483d-a621-248d19a8d07d-utilities\") pod \"f54b38eb-5a4a-483d-a621-248d19a8d07d\" (UID: \"f54b38eb-5a4a-483d-a621-248d19a8d07d\") " Oct 13 17:37:06 crc kubenswrapper[4720]: I1013 17:37:06.935279 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7nn4w\" (UniqueName: \"kubernetes.io/projected/f54b38eb-5a4a-483d-a621-248d19a8d07d-kube-api-access-7nn4w\") pod \"f54b38eb-5a4a-483d-a621-248d19a8d07d\" (UID: \"f54b38eb-5a4a-483d-a621-248d19a8d07d\") " Oct 13 17:37:06 crc kubenswrapper[4720]: I1013 17:37:06.936306 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f54b38eb-5a4a-483d-a621-248d19a8d07d-utilities" (OuterVolumeSpecName: "utilities") pod "f54b38eb-5a4a-483d-a621-248d19a8d07d" (UID: "f54b38eb-5a4a-483d-a621-248d19a8d07d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 17:37:06 crc kubenswrapper[4720]: I1013 17:37:06.942435 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f54b38eb-5a4a-483d-a621-248d19a8d07d-kube-api-access-7nn4w" (OuterVolumeSpecName: "kube-api-access-7nn4w") pod "f54b38eb-5a4a-483d-a621-248d19a8d07d" (UID: "f54b38eb-5a4a-483d-a621-248d19a8d07d"). InnerVolumeSpecName "kube-api-access-7nn4w". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:37:06 crc kubenswrapper[4720]: I1013 17:37:06.990030 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f54b38eb-5a4a-483d-a621-248d19a8d07d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f54b38eb-5a4a-483d-a621-248d19a8d07d" (UID: "f54b38eb-5a4a-483d-a621-248d19a8d07d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 17:37:07 crc kubenswrapper[4720]: I1013 17:37:07.037423 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f54b38eb-5a4a-483d-a621-248d19a8d07d-utilities\") on node \"crc\" DevicePath \"\"" Oct 13 17:37:07 crc kubenswrapper[4720]: I1013 17:37:07.037471 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7nn4w\" (UniqueName: \"kubernetes.io/projected/f54b38eb-5a4a-483d-a621-248d19a8d07d-kube-api-access-7nn4w\") on node \"crc\" DevicePath \"\"" Oct 13 17:37:07 crc kubenswrapper[4720]: I1013 17:37:07.037486 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f54b38eb-5a4a-483d-a621-248d19a8d07d-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 13 17:37:07 crc kubenswrapper[4720]: I1013 17:37:07.340292 4720 generic.go:334] "Generic (PLEG): container finished" podID="f54b38eb-5a4a-483d-a621-248d19a8d07d" containerID="4c81adf364215f6c9e250254acf8d12c6adcb0a539c6c7d5ca022e042121744f" exitCode=0 Oct 13 17:37:07 crc kubenswrapper[4720]: I1013 17:37:07.340375 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5cgfp" Oct 13 17:37:07 crc kubenswrapper[4720]: I1013 17:37:07.340354 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5cgfp" event={"ID":"f54b38eb-5a4a-483d-a621-248d19a8d07d","Type":"ContainerDied","Data":"4c81adf364215f6c9e250254acf8d12c6adcb0a539c6c7d5ca022e042121744f"} Oct 13 17:37:07 crc kubenswrapper[4720]: I1013 17:37:07.340453 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5cgfp" event={"ID":"f54b38eb-5a4a-483d-a621-248d19a8d07d","Type":"ContainerDied","Data":"de1d0179992b82ee45f158d6bb4eed809ef2baa6a7c3450738593a64048100d1"} Oct 13 17:37:07 crc kubenswrapper[4720]: I1013 17:37:07.340486 4720 scope.go:117] "RemoveContainer" containerID="4c81adf364215f6c9e250254acf8d12c6adcb0a539c6c7d5ca022e042121744f" Oct 13 17:37:07 crc kubenswrapper[4720]: I1013 17:37:07.367271 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5cgfp"] Oct 13 17:37:07 crc kubenswrapper[4720]: I1013 17:37:07.375499 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5cgfp"] Oct 13 17:37:07 crc kubenswrapper[4720]: I1013 17:37:07.378305 4720 scope.go:117] "RemoveContainer" containerID="98ff0c94a8a9345b6b54c148b6e8c2eb711abe4ab8da98c458b46e9832e6ae3f" Oct 13 17:37:07 crc kubenswrapper[4720]: I1013 17:37:07.420513 4720 scope.go:117] "RemoveContainer" containerID="f0145221323f32fb4ff6d7c8c0c135c485b2b7f7428276c965b8b133705d78a6" Oct 13 17:37:07 crc kubenswrapper[4720]: I1013 17:37:07.437205 4720 scope.go:117] "RemoveContainer" containerID="4c81adf364215f6c9e250254acf8d12c6adcb0a539c6c7d5ca022e042121744f" Oct 13 17:37:07 crc kubenswrapper[4720]: E1013 17:37:07.437683 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c81adf364215f6c9e250254acf8d12c6adcb0a539c6c7d5ca022e042121744f\": container with ID starting with 4c81adf364215f6c9e250254acf8d12c6adcb0a539c6c7d5ca022e042121744f not found: ID does not exist" containerID="4c81adf364215f6c9e250254acf8d12c6adcb0a539c6c7d5ca022e042121744f" Oct 13 17:37:07 crc kubenswrapper[4720]: I1013 17:37:07.437714 
4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c81adf364215f6c9e250254acf8d12c6adcb0a539c6c7d5ca022e042121744f"} err="failed to get container status \"4c81adf364215f6c9e250254acf8d12c6adcb0a539c6c7d5ca022e042121744f\": rpc error: code = NotFound desc = could not find container \"4c81adf364215f6c9e250254acf8d12c6adcb0a539c6c7d5ca022e042121744f\": container with ID starting with 4c81adf364215f6c9e250254acf8d12c6adcb0a539c6c7d5ca022e042121744f not found: ID does not exist" Oct 13 17:37:07 crc kubenswrapper[4720]: I1013 17:37:07.437735 4720 scope.go:117] "RemoveContainer" containerID="98ff0c94a8a9345b6b54c148b6e8c2eb711abe4ab8da98c458b46e9832e6ae3f" Oct 13 17:37:07 crc kubenswrapper[4720]: E1013 17:37:07.438098 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98ff0c94a8a9345b6b54c148b6e8c2eb711abe4ab8da98c458b46e9832e6ae3f\": container with ID starting with 98ff0c94a8a9345b6b54c148b6e8c2eb711abe4ab8da98c458b46e9832e6ae3f not found: ID does not exist" containerID="98ff0c94a8a9345b6b54c148b6e8c2eb711abe4ab8da98c458b46e9832e6ae3f" Oct 13 17:37:07 crc kubenswrapper[4720]: I1013 17:37:07.438157 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98ff0c94a8a9345b6b54c148b6e8c2eb711abe4ab8da98c458b46e9832e6ae3f"} err="failed to get container status \"98ff0c94a8a9345b6b54c148b6e8c2eb711abe4ab8da98c458b46e9832e6ae3f\": rpc error: code = NotFound desc = could not find container \"98ff0c94a8a9345b6b54c148b6e8c2eb711abe4ab8da98c458b46e9832e6ae3f\": container with ID starting with 98ff0c94a8a9345b6b54c148b6e8c2eb711abe4ab8da98c458b46e9832e6ae3f not found: ID does not exist" Oct 13 17:37:07 crc kubenswrapper[4720]: I1013 17:37:07.438203 4720 scope.go:117] "RemoveContainer" containerID="f0145221323f32fb4ff6d7c8c0c135c485b2b7f7428276c965b8b133705d78a6" Oct 13 17:37:07 crc kubenswrapper[4720]: E1013 17:37:07.438512 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0145221323f32fb4ff6d7c8c0c135c485b2b7f7428276c965b8b133705d78a6\": container with ID starting with f0145221323f32fb4ff6d7c8c0c135c485b2b7f7428276c965b8b133705d78a6 not found: ID does not exist" containerID="f0145221323f32fb4ff6d7c8c0c135c485b2b7f7428276c965b8b133705d78a6" Oct 13 17:37:07 crc kubenswrapper[4720]: I1013 17:37:07.438549 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0145221323f32fb4ff6d7c8c0c135c485b2b7f7428276c965b8b133705d78a6"} err="failed to get container status \"f0145221323f32fb4ff6d7c8c0c135c485b2b7f7428276c965b8b133705d78a6\": rpc error: code = NotFound desc = could not find container \"f0145221323f32fb4ff6d7c8c0c135c485b2b7f7428276c965b8b133705d78a6\": container with ID starting with f0145221323f32fb4ff6d7c8c0c135c485b2b7f7428276c965b8b133705d78a6 not found: ID does not exist" Oct 13 17:37:09 crc kubenswrapper[4720]: I1013 17:37:09.179188 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f54b38eb-5a4a-483d-a621-248d19a8d07d" path="/var/lib/kubelet/pods/f54b38eb-5a4a-483d-a621-248d19a8d07d/volumes" Oct 13 17:37:12 crc kubenswrapper[4720]: I1013 17:37:12.781331 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-64f84fcdbb-bk68v"] Oct 13 17:37:12 crc kubenswrapper[4720]: E1013 17:37:12.782051 4720 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="f54b38eb-5a4a-483d-a621-248d19a8d07d" containerName="extract-content" Oct 13 17:37:12 crc kubenswrapper[4720]: I1013 17:37:12.782068 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="f54b38eb-5a4a-483d-a621-248d19a8d07d" containerName="extract-content" Oct 13 17:37:12 crc kubenswrapper[4720]: E1013 17:37:12.782091 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f54b38eb-5a4a-483d-a621-248d19a8d07d" containerName="extract-utilities" Oct 13 17:37:12 crc kubenswrapper[4720]: I1013 17:37:12.782101 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="f54b38eb-5a4a-483d-a621-248d19a8d07d" containerName="extract-utilities" Oct 13 17:37:12 crc kubenswrapper[4720]: E1013 17:37:12.782122 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f54b38eb-5a4a-483d-a621-248d19a8d07d" containerName="registry-server" Oct 13 17:37:12 crc kubenswrapper[4720]: I1013 17:37:12.782131 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="f54b38eb-5a4a-483d-a621-248d19a8d07d" containerName="registry-server" Oct 13 17:37:12 crc kubenswrapper[4720]: I1013 17:37:12.782304 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="f54b38eb-5a4a-483d-a621-248d19a8d07d" containerName="registry-server" Oct 13 17:37:12 crc kubenswrapper[4720]: I1013 17:37:12.787809 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-59cdc64769-nlnsl"] Oct 13 17:37:12 crc kubenswrapper[4720]: I1013 17:37:12.788838 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-59cdc64769-nlnsl" Oct 13 17:37:12 crc kubenswrapper[4720]: I1013 17:37:12.789572 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-64f84fcdbb-bk68v" Oct 13 17:37:12 crc kubenswrapper[4720]: I1013 17:37:12.791446 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-lrj7t" Oct 13 17:37:12 crc kubenswrapper[4720]: I1013 17:37:12.791581 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-28hkc" Oct 13 17:37:12 crc kubenswrapper[4720]: I1013 17:37:12.803340 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-64f84fcdbb-bk68v"] Oct 13 17:37:12 crc kubenswrapper[4720]: I1013 17:37:12.818304 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-687df44cdb-ldlvt"] Oct 13 17:37:12 crc kubenswrapper[4720]: I1013 17:37:12.819904 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-687df44cdb-ldlvt" Oct 13 17:37:12 crc kubenswrapper[4720]: I1013 17:37:12.826811 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-w4g29" Oct 13 17:37:12 crc kubenswrapper[4720]: I1013 17:37:12.839243 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-59cdc64769-nlnsl"] Oct 13 17:37:12 crc kubenswrapper[4720]: I1013 17:37:12.840877 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-687df44cdb-ldlvt"] Oct 13 17:37:12 crc kubenswrapper[4720]: I1013 17:37:12.852729 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-7bb46cd7d-v7vx2"] Oct 13 17:37:12 crc kubenswrapper[4720]: I1013 17:37:12.853732 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-7bb46cd7d-v7vx2" Oct 13 17:37:12 crc kubenswrapper[4720]: I1013 17:37:12.856249 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-scmwq" Oct 13 17:37:12 crc kubenswrapper[4720]: I1013 17:37:12.861579 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-6d9967f8dd-qvv82"] Oct 13 17:37:12 crc kubenswrapper[4720]: I1013 17:37:12.862689 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-6d9967f8dd-qvv82" Oct 13 17:37:12 crc kubenswrapper[4720]: I1013 17:37:12.865698 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-6mzsw" Oct 13 17:37:12 crc kubenswrapper[4720]: I1013 17:37:12.867855 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-6d9967f8dd-qvv82"] Oct 13 17:37:12 crc kubenswrapper[4720]: I1013 17:37:12.884252 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-7bb46cd7d-v7vx2"] Oct 13 17:37:12 crc kubenswrapper[4720]: I1013 17:37:12.895337 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-585fc5b659-2f6bd"] Oct 13 17:37:12 crc kubenswrapper[4720]: I1013 17:37:12.896208 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-585fc5b659-2f6bd" Oct 13 17:37:12 crc kubenswrapper[4720]: I1013 17:37:12.898471 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-6d74794d9b-hvb85"] Oct 13 17:37:12 crc kubenswrapper[4720]: I1013 17:37:12.899559 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-6d74794d9b-hvb85" Oct 13 17:37:12 crc kubenswrapper[4720]: I1013 17:37:12.901986 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Oct 13 17:37:12 crc kubenswrapper[4720]: I1013 17:37:12.902211 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-xnzfh" Oct 13 17:37:12 crc kubenswrapper[4720]: I1013 17:37:12.902382 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-wwz8k" Oct 13 17:37:12 crc kubenswrapper[4720]: I1013 17:37:12.914786 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22xkx\" (UniqueName: \"kubernetes.io/projected/d266783d-75ba-4864-af5d-4f2b8702c6a9-kube-api-access-22xkx\") pod \"horizon-operator-controller-manager-6d74794d9b-hvb85\" (UID: \"d266783d-75ba-4864-af5d-4f2b8702c6a9\") " pod="openstack-operators/horizon-operator-controller-manager-6d74794d9b-hvb85" Oct 13 17:37:12 crc kubenswrapper[4720]: I1013 17:37:12.914832 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9qct\" (UniqueName: \"kubernetes.io/projected/a9b388de-4993-46d1-86db-ac92a9df4f2f-kube-api-access-n9qct\") pod \"glance-operator-controller-manager-7bb46cd7d-v7vx2\" (UID: \"a9b388de-4993-46d1-86db-ac92a9df4f2f\") " pod="openstack-operators/glance-operator-controller-manager-7bb46cd7d-v7vx2" Oct 13 17:37:12 crc kubenswrapper[4720]: I1013 17:37:12.914855 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5btnz\" (UniqueName: \"kubernetes.io/projected/771ce8c4-ac65-4db7-bc56-a8b7cb2f1448-kube-api-access-5btnz\") pod \"heat-operator-controller-manager-6d9967f8dd-qvv82\" (UID: \"771ce8c4-ac65-4db7-bc56-a8b7cb2f1448\") " pod="openstack-operators/heat-operator-controller-manager-6d9967f8dd-qvv82" Oct 13 17:37:12 crc kubenswrapper[4720]: I1013 17:37:12.914914 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlfjr\" (UniqueName: \"kubernetes.io/projected/45c8f080-0f28-47b5-80df-e1877c3f77bb-kube-api-access-tlfjr\") pod \"infra-operator-controller-manager-585fc5b659-2f6bd\" (UID: \"45c8f080-0f28-47b5-80df-e1877c3f77bb\") " pod="openstack-operators/infra-operator-controller-manager-585fc5b659-2f6bd" Oct 13 17:37:12 crc kubenswrapper[4720]: I1013 17:37:12.914933 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/45c8f080-0f28-47b5-80df-e1877c3f77bb-cert\") pod \"infra-operator-controller-manager-585fc5b659-2f6bd\" (UID: \"45c8f080-0f28-47b5-80df-e1877c3f77bb\") " pod="openstack-operators/infra-operator-controller-manager-585fc5b659-2f6bd" Oct 13 17:37:12 crc kubenswrapper[4720]: I1013 17:37:12.914951 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44hm5\" (UniqueName: \"kubernetes.io/projected/d34e7c64-7562-4a1a-8d47-20b3bb785756-kube-api-access-44hm5\") pod \"barbican-operator-controller-manager-64f84fcdbb-bk68v\" (UID: \"d34e7c64-7562-4a1a-8d47-20b3bb785756\") " pod="openstack-operators/barbican-operator-controller-manager-64f84fcdbb-bk68v" Oct 13 17:37:12 crc 
kubenswrapper[4720]: I1013 17:37:12.914971 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2rfw\" (UniqueName: \"kubernetes.io/projected/c25871a6-cdf1-49c1-8d51-ab4fb186fa83-kube-api-access-k2rfw\") pod \"designate-operator-controller-manager-687df44cdb-ldlvt\" (UID: \"c25871a6-cdf1-49c1-8d51-ab4fb186fa83\") " pod="openstack-operators/designate-operator-controller-manager-687df44cdb-ldlvt" Oct 13 17:37:12 crc kubenswrapper[4720]: I1013 17:37:12.914998 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svtls\" (UniqueName: \"kubernetes.io/projected/d81c88e6-1b2a-405d-861a-ca4b3baed83d-kube-api-access-svtls\") pod \"cinder-operator-controller-manager-59cdc64769-nlnsl\" (UID: \"d81c88e6-1b2a-405d-861a-ca4b3baed83d\") " pod="openstack-operators/cinder-operator-controller-manager-59cdc64769-nlnsl" Oct 13 17:37:12 crc kubenswrapper[4720]: I1013 17:37:12.921612 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-585fc5b659-2f6bd"] Oct 13 17:37:12 crc kubenswrapper[4720]: I1013 17:37:12.929310 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-74cb5cbc49-bxtwp"] Oct 13 17:37:12 crc kubenswrapper[4720]: I1013 17:37:12.930336 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-74cb5cbc49-bxtwp" Oct 13 17:37:12 crc kubenswrapper[4720]: I1013 17:37:12.935713 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-9qb9x" Oct 13 17:37:12 crc kubenswrapper[4720]: I1013 17:37:12.954359 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-6d74794d9b-hvb85"] Oct 13 17:37:12 crc kubenswrapper[4720]: I1013 17:37:12.968269 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-ddb98f99b-qqmcw"] Oct 13 17:37:12 crc kubenswrapper[4720]: I1013 17:37:12.969980 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-ddb98f99b-qqmcw" Oct 13 17:37:12 crc kubenswrapper[4720]: I1013 17:37:12.972543 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-fn6rz" Oct 13 17:37:12 crc kubenswrapper[4720]: I1013 17:37:12.981019 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-74cb5cbc49-bxtwp"] Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.001759 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-ddb98f99b-qqmcw"] Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.015485 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-59578bc799-mhlvn"] Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.016512 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-59578bc799-mhlvn" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.017996 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22xkx\" (UniqueName: \"kubernetes.io/projected/d266783d-75ba-4864-af5d-4f2b8702c6a9-kube-api-access-22xkx\") pod \"horizon-operator-controller-manager-6d74794d9b-hvb85\" (UID: \"d266783d-75ba-4864-af5d-4f2b8702c6a9\") " pod="openstack-operators/horizon-operator-controller-manager-6d74794d9b-hvb85" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.018040 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9qct\" (UniqueName: \"kubernetes.io/projected/a9b388de-4993-46d1-86db-ac92a9df4f2f-kube-api-access-n9qct\") pod \"glance-operator-controller-manager-7bb46cd7d-v7vx2\" (UID: \"a9b388de-4993-46d1-86db-ac92a9df4f2f\") " pod="openstack-operators/glance-operator-controller-manager-7bb46cd7d-v7vx2" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.018068 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5btnz\" (UniqueName: \"kubernetes.io/projected/771ce8c4-ac65-4db7-bc56-a8b7cb2f1448-kube-api-access-5btnz\") pod \"heat-operator-controller-manager-6d9967f8dd-qvv82\" (UID: \"771ce8c4-ac65-4db7-bc56-a8b7cb2f1448\") " pod="openstack-operators/heat-operator-controller-manager-6d9967f8dd-qvv82" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.018126 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlfjr\" (UniqueName: \"kubernetes.io/projected/45c8f080-0f28-47b5-80df-e1877c3f77bb-kube-api-access-tlfjr\") pod \"infra-operator-controller-manager-585fc5b659-2f6bd\" (UID: \"45c8f080-0f28-47b5-80df-e1877c3f77bb\") " pod="openstack-operators/infra-operator-controller-manager-585fc5b659-2f6bd" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.018147 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/45c8f080-0f28-47b5-80df-e1877c3f77bb-cert\") pod \"infra-operator-controller-manager-585fc5b659-2f6bd\" (UID: \"45c8f080-0f28-47b5-80df-e1877c3f77bb\") " pod="openstack-operators/infra-operator-controller-manager-585fc5b659-2f6bd" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.018167 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44hm5\" (UniqueName: \"kubernetes.io/projected/d34e7c64-7562-4a1a-8d47-20b3bb785756-kube-api-access-44hm5\") pod \"barbican-operator-controller-manager-64f84fcdbb-bk68v\" (UID: \"d34e7c64-7562-4a1a-8d47-20b3bb785756\") " pod="openstack-operators/barbican-operator-controller-manager-64f84fcdbb-bk68v" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.018207 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2rfw\" (UniqueName: \"kubernetes.io/projected/c25871a6-cdf1-49c1-8d51-ab4fb186fa83-kube-api-access-k2rfw\") pod \"designate-operator-controller-manager-687df44cdb-ldlvt\" (UID: \"c25871a6-cdf1-49c1-8d51-ab4fb186fa83\") " pod="openstack-operators/designate-operator-controller-manager-687df44cdb-ldlvt" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.018236 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svtls\" (UniqueName: \"kubernetes.io/projected/d81c88e6-1b2a-405d-861a-ca4b3baed83d-kube-api-access-svtls\") 
pod \"cinder-operator-controller-manager-59cdc64769-nlnsl\" (UID: \"d81c88e6-1b2a-405d-861a-ca4b3baed83d\") " pod="openstack-operators/cinder-operator-controller-manager-59cdc64769-nlnsl" Oct 13 17:37:13 crc kubenswrapper[4720]: E1013 17:37:13.019468 4720 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Oct 13 17:37:13 crc kubenswrapper[4720]: E1013 17:37:13.019512 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45c8f080-0f28-47b5-80df-e1877c3f77bb-cert podName:45c8f080-0f28-47b5-80df-e1877c3f77bb nodeName:}" failed. No retries permitted until 2025-10-13 17:37:13.519498375 +0000 UTC m=+778.976748507 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/45c8f080-0f28-47b5-80df-e1877c3f77bb-cert") pod "infra-operator-controller-manager-585fc5b659-2f6bd" (UID: "45c8f080-0f28-47b5-80df-e1877c3f77bb") : secret "infra-operator-webhook-server-cert" not found Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.021572 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-9f5fd" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.034406 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-59578bc799-mhlvn"] Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.041920 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9qct\" (UniqueName: \"kubernetes.io/projected/a9b388de-4993-46d1-86db-ac92a9df4f2f-kube-api-access-n9qct\") pod \"glance-operator-controller-manager-7bb46cd7d-v7vx2\" (UID: \"a9b388de-4993-46d1-86db-ac92a9df4f2f\") " pod="openstack-operators/glance-operator-controller-manager-7bb46cd7d-v7vx2" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.042822 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svtls\" (UniqueName: \"kubernetes.io/projected/d81c88e6-1b2a-405d-861a-ca4b3baed83d-kube-api-access-svtls\") pod \"cinder-operator-controller-manager-59cdc64769-nlnsl\" (UID: \"d81c88e6-1b2a-405d-861a-ca4b3baed83d\") " pod="openstack-operators/cinder-operator-controller-manager-59cdc64769-nlnsl" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.047733 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22xkx\" (UniqueName: \"kubernetes.io/projected/d266783d-75ba-4864-af5d-4f2b8702c6a9-kube-api-access-22xkx\") pod \"horizon-operator-controller-manager-6d74794d9b-hvb85\" (UID: \"d266783d-75ba-4864-af5d-4f2b8702c6a9\") " pod="openstack-operators/horizon-operator-controller-manager-6d74794d9b-hvb85" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.052382 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-797d478b46-m44cs"] Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.092287 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlfjr\" (UniqueName: \"kubernetes.io/projected/45c8f080-0f28-47b5-80df-e1877c3f77bb-kube-api-access-tlfjr\") pod \"infra-operator-controller-manager-585fc5b659-2f6bd\" (UID: \"45c8f080-0f28-47b5-80df-e1877c3f77bb\") " pod="openstack-operators/infra-operator-controller-manager-585fc5b659-2f6bd" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.098357 4720 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-44hm5\" (UniqueName: \"kubernetes.io/projected/d34e7c64-7562-4a1a-8d47-20b3bb785756-kube-api-access-44hm5\") pod \"barbican-operator-controller-manager-64f84fcdbb-bk68v\" (UID: \"d34e7c64-7562-4a1a-8d47-20b3bb785756\") " pod="openstack-operators/barbican-operator-controller-manager-64f84fcdbb-bk68v" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.103810 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-797d478b46-m44cs" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.108312 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-br7sh" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.110609 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-5777b4f897-mpdxw"] Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.113948 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-59cdc64769-nlnsl" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.122608 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2rfw\" (UniqueName: \"kubernetes.io/projected/c25871a6-cdf1-49c1-8d51-ab4fb186fa83-kube-api-access-k2rfw\") pod \"designate-operator-controller-manager-687df44cdb-ldlvt\" (UID: \"c25871a6-cdf1-49c1-8d51-ab4fb186fa83\") " pod="openstack-operators/designate-operator-controller-manager-687df44cdb-ldlvt" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.124032 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5btnz\" (UniqueName: \"kubernetes.io/projected/771ce8c4-ac65-4db7-bc56-a8b7cb2f1448-kube-api-access-5btnz\") pod \"heat-operator-controller-manager-6d9967f8dd-qvv82\" (UID: \"771ce8c4-ac65-4db7-bc56-a8b7cb2f1448\") " pod="openstack-operators/heat-operator-controller-manager-6d9967f8dd-qvv82" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.124785 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rdsv\" (UniqueName: \"kubernetes.io/projected/32b53ed3-af12-4a7d-b371-eab8aa1ab1bb-kube-api-access-5rdsv\") pod \"keystone-operator-controller-manager-ddb98f99b-qqmcw\" (UID: \"32b53ed3-af12-4a7d-b371-eab8aa1ab1bb\") " pod="openstack-operators/keystone-operator-controller-manager-ddb98f99b-qqmcw" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.129538 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fzzh\" (UniqueName: \"kubernetes.io/projected/284cd6cf-5985-4cad-a31c-f91f3c2098c6-kube-api-access-7fzzh\") pod \"manila-operator-controller-manager-59578bc799-mhlvn\" (UID: \"284cd6cf-5985-4cad-a31c-f91f3c2098c6\") " pod="openstack-operators/manila-operator-controller-manager-59578bc799-mhlvn" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.129843 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jv852\" (UniqueName: \"kubernetes.io/projected/5a86eb76-3453-4f0e-8529-c877f739d822-kube-api-access-jv852\") pod \"ironic-operator-controller-manager-74cb5cbc49-bxtwp\" (UID: \"5a86eb76-3453-4f0e-8529-c877f739d822\") " pod="openstack-operators/ironic-operator-controller-manager-74cb5cbc49-bxtwp" Oct 13 17:37:13 crc 
kubenswrapper[4720]: I1013 17:37:13.130306 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-64f84fcdbb-bk68v" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.161768 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-5777b4f897-mpdxw"] Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.161871 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-5777b4f897-mpdxw" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.162025 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-687df44cdb-ldlvt" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.167760 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-xpj2d" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.167938 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-57bb74c7bf-dkbrd"] Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.169257 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-57bb74c7bf-dkbrd" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.175375 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-n7psl" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.181937 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-7bb46cd7d-v7vx2" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.195524 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-6d9967f8dd-qvv82" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.199536 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-797d478b46-m44cs"] Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.208453 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6d7c7ddf95-4jfq5"] Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.209669 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6d7c7ddf95-4jfq5" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.212514 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-j5487" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.215666 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757dd6zq7"] Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.217606 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757dd6zq7" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.221326 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-57bb74c7bf-dkbrd"] Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.227732 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.227980 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-88z5k" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.229699 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757dd6zq7"] Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.230857 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsqf2\" (UniqueName: \"kubernetes.io/projected/30a28fe6-1905-48df-ab2d-b9d92eaf940e-kube-api-access-zsqf2\") pod \"mariadb-operator-controller-manager-5777b4f897-mpdxw\" (UID: \"30a28fe6-1905-48df-ab2d-b9d92eaf940e\") " pod="openstack-operators/mariadb-operator-controller-manager-5777b4f897-mpdxw" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.230892 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jv852\" (UniqueName: \"kubernetes.io/projected/5a86eb76-3453-4f0e-8529-c877f739d822-kube-api-access-jv852\") pod \"ironic-operator-controller-manager-74cb5cbc49-bxtwp\" (UID: \"5a86eb76-3453-4f0e-8529-c877f739d822\") " pod="openstack-operators/ironic-operator-controller-manager-74cb5cbc49-bxtwp" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.230930 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b7c96c4b-b0c5-4c82-a6ab-3878c394eab0-cert\") pod \"openstack-baremetal-operator-controller-manager-6cc7fb757dd6zq7\" (UID: \"b7c96c4b-b0c5-4c82-a6ab-3878c394eab0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757dd6zq7" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.230953 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqsmc\" (UniqueName: \"kubernetes.io/projected/ccb8b109-9d10-48f5-b8ce-65ad05b5e1a4-kube-api-access-dqsmc\") pod \"neutron-operator-controller-manager-797d478b46-m44cs\" (UID: \"ccb8b109-9d10-48f5-b8ce-65ad05b5e1a4\") " pod="openstack-operators/neutron-operator-controller-manager-797d478b46-m44cs" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.230992 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rdsv\" (UniqueName: \"kubernetes.io/projected/32b53ed3-af12-4a7d-b371-eab8aa1ab1bb-kube-api-access-5rdsv\") pod \"keystone-operator-controller-manager-ddb98f99b-qqmcw\" (UID: \"32b53ed3-af12-4a7d-b371-eab8aa1ab1bb\") " pod="openstack-operators/keystone-operator-controller-manager-ddb98f99b-qqmcw" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.231014 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5b8m4\" (UniqueName: 
\"kubernetes.io/projected/60fc7b7f-c85a-4a6d-8de9-e8e9e8df8ada-kube-api-access-5b8m4\") pod \"octavia-operator-controller-manager-6d7c7ddf95-4jfq5\" (UID: \"60fc7b7f-c85a-4a6d-8de9-e8e9e8df8ada\") " pod="openstack-operators/octavia-operator-controller-manager-6d7c7ddf95-4jfq5" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.231040 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fzzh\" (UniqueName: \"kubernetes.io/projected/284cd6cf-5985-4cad-a31c-f91f3c2098c6-kube-api-access-7fzzh\") pod \"manila-operator-controller-manager-59578bc799-mhlvn\" (UID: \"284cd6cf-5985-4cad-a31c-f91f3c2098c6\") " pod="openstack-operators/manila-operator-controller-manager-59578bc799-mhlvn" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.231086 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2cc4\" (UniqueName: \"kubernetes.io/projected/b7c96c4b-b0c5-4c82-a6ab-3878c394eab0-kube-api-access-d2cc4\") pod \"openstack-baremetal-operator-controller-manager-6cc7fb757dd6zq7\" (UID: \"b7c96c4b-b0c5-4c82-a6ab-3878c394eab0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757dd6zq7" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.231105 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9fmg\" (UniqueName: \"kubernetes.io/projected/246b649b-7481-433e-aaf0-30cebf5543d8-kube-api-access-g9fmg\") pod \"nova-operator-controller-manager-57bb74c7bf-dkbrd\" (UID: \"246b649b-7481-433e-aaf0-30cebf5543d8\") " pod="openstack-operators/nova-operator-controller-manager-57bb74c7bf-dkbrd" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.233735 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6d7c7ddf95-4jfq5"] Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.239765 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-869cc7797f-gzwxn"] Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.240815 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-869cc7797f-gzwxn" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.243684 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-6d74794d9b-hvb85" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.244057 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-zx22k" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.244962 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-869cc7797f-gzwxn"] Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.260495 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jv852\" (UniqueName: \"kubernetes.io/projected/5a86eb76-3453-4f0e-8529-c877f739d822-kube-api-access-jv852\") pod \"ironic-operator-controller-manager-74cb5cbc49-bxtwp\" (UID: \"5a86eb76-3453-4f0e-8529-c877f739d822\") " pod="openstack-operators/ironic-operator-controller-manager-74cb5cbc49-bxtwp" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.262822 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rdsv\" (UniqueName: \"kubernetes.io/projected/32b53ed3-af12-4a7d-b371-eab8aa1ab1bb-kube-api-access-5rdsv\") pod \"keystone-operator-controller-manager-ddb98f99b-qqmcw\" (UID: \"32b53ed3-af12-4a7d-b371-eab8aa1ab1bb\") " pod="openstack-operators/keystone-operator-controller-manager-ddb98f99b-qqmcw" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.271901 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-74cb5cbc49-bxtwp" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.272680 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-664664cb68-xwn8g"] Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.274115 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-664664cb68-xwn8g" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.277415 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-nxc4k" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.278141 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fzzh\" (UniqueName: \"kubernetes.io/projected/284cd6cf-5985-4cad-a31c-f91f3c2098c6-kube-api-access-7fzzh\") pod \"manila-operator-controller-manager-59578bc799-mhlvn\" (UID: \"284cd6cf-5985-4cad-a31c-f91f3c2098c6\") " pod="openstack-operators/manila-operator-controller-manager-59578bc799-mhlvn" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.286401 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-664664cb68-xwn8g"] Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.289738 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-ddb98f99b-qqmcw" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.296571 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f4d5dfdc6-g4th9"] Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.297983 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-5f4d5dfdc6-g4th9" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.300169 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-dhql4" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.312135 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f4d5dfdc6-g4th9"] Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.325306 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-578874c84d-62vnr"] Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.326421 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-578874c84d-62vnr" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.328788 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-swf57" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.331901 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzxpm\" (UniqueName: \"kubernetes.io/projected/20db86ae-f595-4a1d-b000-c97df02b65af-kube-api-access-dzxpm\") pod \"telemetry-operator-controller-manager-578874c84d-62vnr\" (UID: \"20db86ae-f595-4a1d-b000-c97df02b65af\") " pod="openstack-operators/telemetry-operator-controller-manager-578874c84d-62vnr" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.331957 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b7c96c4b-b0c5-4c82-a6ab-3878c394eab0-cert\") pod \"openstack-baremetal-operator-controller-manager-6cc7fb757dd6zq7\" (UID: \"b7c96c4b-b0c5-4c82-a6ab-3878c394eab0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757dd6zq7" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.331985 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqsmc\" (UniqueName: \"kubernetes.io/projected/ccb8b109-9d10-48f5-b8ce-65ad05b5e1a4-kube-api-access-dqsmc\") pod \"neutron-operator-controller-manager-797d478b46-m44cs\" (UID: \"ccb8b109-9d10-48f5-b8ce-65ad05b5e1a4\") " pod="openstack-operators/neutron-operator-controller-manager-797d478b46-m44cs" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.332010 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hbqn\" (UniqueName: \"kubernetes.io/projected/487124d6-9dcd-4173-8f78-2dbf29cafe87-kube-api-access-6hbqn\") pod \"ovn-operator-controller-manager-869cc7797f-gzwxn\" (UID: \"487124d6-9dcd-4173-8f78-2dbf29cafe87\") " pod="openstack-operators/ovn-operator-controller-manager-869cc7797f-gzwxn" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.332044 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5b8m4\" (UniqueName: \"kubernetes.io/projected/60fc7b7f-c85a-4a6d-8de9-e8e9e8df8ada-kube-api-access-5b8m4\") pod \"octavia-operator-controller-manager-6d7c7ddf95-4jfq5\" (UID: \"60fc7b7f-c85a-4a6d-8de9-e8e9e8df8ada\") " pod="openstack-operators/octavia-operator-controller-manager-6d7c7ddf95-4jfq5" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.332068 4720 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89c62\" (UniqueName: \"kubernetes.io/projected/2eab29c4-2ebe-4f71-af0d-df5f0d113f66-kube-api-access-89c62\") pod \"placement-operator-controller-manager-664664cb68-xwn8g\" (UID: \"2eab29c4-2ebe-4f71-af0d-df5f0d113f66\") " pod="openstack-operators/placement-operator-controller-manager-664664cb68-xwn8g" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.332107 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2cc4\" (UniqueName: \"kubernetes.io/projected/b7c96c4b-b0c5-4c82-a6ab-3878c394eab0-kube-api-access-d2cc4\") pod \"openstack-baremetal-operator-controller-manager-6cc7fb757dd6zq7\" (UID: \"b7c96c4b-b0c5-4c82-a6ab-3878c394eab0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757dd6zq7" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.332128 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9fmg\" (UniqueName: \"kubernetes.io/projected/246b649b-7481-433e-aaf0-30cebf5543d8-kube-api-access-g9fmg\") pod \"nova-operator-controller-manager-57bb74c7bf-dkbrd\" (UID: \"246b649b-7481-433e-aaf0-30cebf5543d8\") " pod="openstack-operators/nova-operator-controller-manager-57bb74c7bf-dkbrd" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.332150 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xs4t6\" (UniqueName: \"kubernetes.io/projected/935d79c8-281f-4ad8-8c6d-404c0e89653e-kube-api-access-xs4t6\") pod \"swift-operator-controller-manager-5f4d5dfdc6-g4th9\" (UID: \"935d79c8-281f-4ad8-8c6d-404c0e89653e\") " pod="openstack-operators/swift-operator-controller-manager-5f4d5dfdc6-g4th9" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.332205 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsqf2\" (UniqueName: \"kubernetes.io/projected/30a28fe6-1905-48df-ab2d-b9d92eaf940e-kube-api-access-zsqf2\") pod \"mariadb-operator-controller-manager-5777b4f897-mpdxw\" (UID: \"30a28fe6-1905-48df-ab2d-b9d92eaf940e\") " pod="openstack-operators/mariadb-operator-controller-manager-5777b4f897-mpdxw" Oct 13 17:37:13 crc kubenswrapper[4720]: E1013 17:37:13.332713 4720 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Oct 13 17:37:13 crc kubenswrapper[4720]: E1013 17:37:13.332757 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b7c96c4b-b0c5-4c82-a6ab-3878c394eab0-cert podName:b7c96c4b-b0c5-4c82-a6ab-3878c394eab0 nodeName:}" failed. No retries permitted until 2025-10-13 17:37:13.832743605 +0000 UTC m=+779.289993737 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b7c96c4b-b0c5-4c82-a6ab-3878c394eab0-cert") pod "openstack-baremetal-operator-controller-manager-6cc7fb757dd6zq7" (UID: "b7c96c4b-b0c5-4c82-a6ab-3878c394eab0") : secret "openstack-baremetal-operator-webhook-server-cert" not found Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.337375 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-578874c84d-62vnr"] Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.358271 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5b8m4\" (UniqueName: \"kubernetes.io/projected/60fc7b7f-c85a-4a6d-8de9-e8e9e8df8ada-kube-api-access-5b8m4\") pod \"octavia-operator-controller-manager-6d7c7ddf95-4jfq5\" (UID: \"60fc7b7f-c85a-4a6d-8de9-e8e9e8df8ada\") " pod="openstack-operators/octavia-operator-controller-manager-6d7c7ddf95-4jfq5" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.369838 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsqf2\" (UniqueName: \"kubernetes.io/projected/30a28fe6-1905-48df-ab2d-b9d92eaf940e-kube-api-access-zsqf2\") pod \"mariadb-operator-controller-manager-5777b4f897-mpdxw\" (UID: \"30a28fe6-1905-48df-ab2d-b9d92eaf940e\") " pod="openstack-operators/mariadb-operator-controller-manager-5777b4f897-mpdxw" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.371055 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqsmc\" (UniqueName: \"kubernetes.io/projected/ccb8b109-9d10-48f5-b8ce-65ad05b5e1a4-kube-api-access-dqsmc\") pod \"neutron-operator-controller-manager-797d478b46-m44cs\" (UID: \"ccb8b109-9d10-48f5-b8ce-65ad05b5e1a4\") " pod="openstack-operators/neutron-operator-controller-manager-797d478b46-m44cs" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.371255 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9fmg\" (UniqueName: \"kubernetes.io/projected/246b649b-7481-433e-aaf0-30cebf5543d8-kube-api-access-g9fmg\") pod \"nova-operator-controller-manager-57bb74c7bf-dkbrd\" (UID: \"246b649b-7481-433e-aaf0-30cebf5543d8\") " pod="openstack-operators/nova-operator-controller-manager-57bb74c7bf-dkbrd" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.379207 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2cc4\" (UniqueName: \"kubernetes.io/projected/b7c96c4b-b0c5-4c82-a6ab-3878c394eab0-kube-api-access-d2cc4\") pod \"openstack-baremetal-operator-controller-manager-6cc7fb757dd6zq7\" (UID: \"b7c96c4b-b0c5-4c82-a6ab-3878c394eab0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757dd6zq7" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.403704 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-59578bc799-mhlvn" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.406768 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-ffcdd6c94-xm8vk"] Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.407825 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-ffcdd6c94-xm8vk" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.410147 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-9p7hd" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.425266 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-ffcdd6c94-xm8vk"] Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.433260 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzxpm\" (UniqueName: \"kubernetes.io/projected/20db86ae-f595-4a1d-b000-c97df02b65af-kube-api-access-dzxpm\") pod \"telemetry-operator-controller-manager-578874c84d-62vnr\" (UID: \"20db86ae-f595-4a1d-b000-c97df02b65af\") " pod="openstack-operators/telemetry-operator-controller-manager-578874c84d-62vnr" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.433322 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hbqn\" (UniqueName: \"kubernetes.io/projected/487124d6-9dcd-4173-8f78-2dbf29cafe87-kube-api-access-6hbqn\") pod \"ovn-operator-controller-manager-869cc7797f-gzwxn\" (UID: \"487124d6-9dcd-4173-8f78-2dbf29cafe87\") " pod="openstack-operators/ovn-operator-controller-manager-869cc7797f-gzwxn" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.433362 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbc76\" (UniqueName: \"kubernetes.io/projected/5c9d42bc-4b65-42f2-beda-164c7c5ba3e2-kube-api-access-zbc76\") pod \"test-operator-controller-manager-ffcdd6c94-xm8vk\" (UID: \"5c9d42bc-4b65-42f2-beda-164c7c5ba3e2\") " pod="openstack-operators/test-operator-controller-manager-ffcdd6c94-xm8vk" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.433382 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89c62\" (UniqueName: \"kubernetes.io/projected/2eab29c4-2ebe-4f71-af0d-df5f0d113f66-kube-api-access-89c62\") pod \"placement-operator-controller-manager-664664cb68-xwn8g\" (UID: \"2eab29c4-2ebe-4f71-af0d-df5f0d113f66\") " pod="openstack-operators/placement-operator-controller-manager-664664cb68-xwn8g" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.433424 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xs4t6\" (UniqueName: \"kubernetes.io/projected/935d79c8-281f-4ad8-8c6d-404c0e89653e-kube-api-access-xs4t6\") pod \"swift-operator-controller-manager-5f4d5dfdc6-g4th9\" (UID: \"935d79c8-281f-4ad8-8c6d-404c0e89653e\") " pod="openstack-operators/swift-operator-controller-manager-5f4d5dfdc6-g4th9" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.458065 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzxpm\" (UniqueName: \"kubernetes.io/projected/20db86ae-f595-4a1d-b000-c97df02b65af-kube-api-access-dzxpm\") pod \"telemetry-operator-controller-manager-578874c84d-62vnr\" (UID: \"20db86ae-f595-4a1d-b000-c97df02b65af\") " pod="openstack-operators/telemetry-operator-controller-manager-578874c84d-62vnr" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.460805 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hbqn\" (UniqueName: \"kubernetes.io/projected/487124d6-9dcd-4173-8f78-2dbf29cafe87-kube-api-access-6hbqn\") pod 
\"ovn-operator-controller-manager-869cc7797f-gzwxn\" (UID: \"487124d6-9dcd-4173-8f78-2dbf29cafe87\") " pod="openstack-operators/ovn-operator-controller-manager-869cc7797f-gzwxn" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.469969 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xs4t6\" (UniqueName: \"kubernetes.io/projected/935d79c8-281f-4ad8-8c6d-404c0e89653e-kube-api-access-xs4t6\") pod \"swift-operator-controller-manager-5f4d5dfdc6-g4th9\" (UID: \"935d79c8-281f-4ad8-8c6d-404c0e89653e\") " pod="openstack-operators/swift-operator-controller-manager-5f4d5dfdc6-g4th9" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.469979 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89c62\" (UniqueName: \"kubernetes.io/projected/2eab29c4-2ebe-4f71-af0d-df5f0d113f66-kube-api-access-89c62\") pod \"placement-operator-controller-manager-664664cb68-xwn8g\" (UID: \"2eab29c4-2ebe-4f71-af0d-df5f0d113f66\") " pod="openstack-operators/placement-operator-controller-manager-664664cb68-xwn8g" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.488549 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-646675d848-c8bnc"] Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.490238 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-646675d848-c8bnc" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.494742 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-mh67l" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.496779 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-646675d848-c8bnc"] Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.497052 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-797d478b46-m44cs" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.522610 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-5777b4f897-mpdxw" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.535796 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/45c8f080-0f28-47b5-80df-e1877c3f77bb-cert\") pod \"infra-operator-controller-manager-585fc5b659-2f6bd\" (UID: \"45c8f080-0f28-47b5-80df-e1877c3f77bb\") " pod="openstack-operators/infra-operator-controller-manager-585fc5b659-2f6bd" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.535902 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbc76\" (UniqueName: \"kubernetes.io/projected/5c9d42bc-4b65-42f2-beda-164c7c5ba3e2-kube-api-access-zbc76\") pod \"test-operator-controller-manager-ffcdd6c94-xm8vk\" (UID: \"5c9d42bc-4b65-42f2-beda-164c7c5ba3e2\") " pod="openstack-operators/test-operator-controller-manager-ffcdd6c94-xm8vk" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.542682 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/45c8f080-0f28-47b5-80df-e1877c3f77bb-cert\") pod \"infra-operator-controller-manager-585fc5b659-2f6bd\" (UID: \"45c8f080-0f28-47b5-80df-e1877c3f77bb\") " pod="openstack-operators/infra-operator-controller-manager-585fc5b659-2f6bd" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.545706 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-57bb74c7bf-dkbrd" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.563684 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6d7c7ddf95-4jfq5" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.566440 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbc76\" (UniqueName: \"kubernetes.io/projected/5c9d42bc-4b65-42f2-beda-164c7c5ba3e2-kube-api-access-zbc76\") pod \"test-operator-controller-manager-ffcdd6c94-xm8vk\" (UID: \"5c9d42bc-4b65-42f2-beda-164c7c5ba3e2\") " pod="openstack-operators/test-operator-controller-manager-ffcdd6c94-xm8vk" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.578850 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-bb4f97fd9-d7cs5"] Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.582049 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-bb4f97fd9-d7cs5" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.589413 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-869cc7797f-gzwxn" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.604454 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-664664cb68-xwn8g" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.607585 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-bb4f97fd9-d7cs5"] Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.614126 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-rmpw6" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.614411 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.618017 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-5f4d5dfdc6-g4th9" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.645964 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sf87r\" (UniqueName: \"kubernetes.io/projected/4b641160-215b-4547-a820-d613c04d9348-kube-api-access-sf87r\") pod \"openstack-operator-controller-manager-bb4f97fd9-d7cs5\" (UID: \"4b641160-215b-4547-a820-d613c04d9348\") " pod="openstack-operators/openstack-operator-controller-manager-bb4f97fd9-d7cs5" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.646014 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4b641160-215b-4547-a820-d613c04d9348-cert\") pod \"openstack-operator-controller-manager-bb4f97fd9-d7cs5\" (UID: \"4b641160-215b-4547-a820-d613c04d9348\") " pod="openstack-operators/openstack-operator-controller-manager-bb4f97fd9-d7cs5" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.646047 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgtwz\" (UniqueName: \"kubernetes.io/projected/492905c0-fe64-45b8-af6b-5d7373c3f71a-kube-api-access-fgtwz\") pod \"watcher-operator-controller-manager-646675d848-c8bnc\" (UID: \"492905c0-fe64-45b8-af6b-5d7373c3f71a\") " pod="openstack-operators/watcher-operator-controller-manager-646675d848-c8bnc" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.692036 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-578874c84d-62vnr" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.692482 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-h959b"] Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.693459 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-h959b" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.695997 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-zl2cz" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.722276 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-h959b"] Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.733576 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-ffcdd6c94-xm8vk" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.747213 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgtwz\" (UniqueName: \"kubernetes.io/projected/492905c0-fe64-45b8-af6b-5d7373c3f71a-kube-api-access-fgtwz\") pod \"watcher-operator-controller-manager-646675d848-c8bnc\" (UID: \"492905c0-fe64-45b8-af6b-5d7373c3f71a\") " pod="openstack-operators/watcher-operator-controller-manager-646675d848-c8bnc" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.748092 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sf87r\" (UniqueName: \"kubernetes.io/projected/4b641160-215b-4547-a820-d613c04d9348-kube-api-access-sf87r\") pod \"openstack-operator-controller-manager-bb4f97fd9-d7cs5\" (UID: \"4b641160-215b-4547-a820-d613c04d9348\") " pod="openstack-operators/openstack-operator-controller-manager-bb4f97fd9-d7cs5" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.748119 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4b641160-215b-4547-a820-d613c04d9348-cert\") pod \"openstack-operator-controller-manager-bb4f97fd9-d7cs5\" (UID: \"4b641160-215b-4547-a820-d613c04d9348\") " pod="openstack-operators/openstack-operator-controller-manager-bb4f97fd9-d7cs5" Oct 13 17:37:13 crc kubenswrapper[4720]: E1013 17:37:13.748207 4720 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Oct 13 17:37:13 crc kubenswrapper[4720]: E1013 17:37:13.748244 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4b641160-215b-4547-a820-d613c04d9348-cert podName:4b641160-215b-4547-a820-d613c04d9348 nodeName:}" failed. No retries permitted until 2025-10-13 17:37:14.248232089 +0000 UTC m=+779.705482221 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4b641160-215b-4547-a820-d613c04d9348-cert") pod "openstack-operator-controller-manager-bb4f97fd9-d7cs5" (UID: "4b641160-215b-4547-a820-d613c04d9348") : secret "webhook-server-cert" not found Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.783559 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-64f84fcdbb-bk68v"] Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.792499 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sf87r\" (UniqueName: \"kubernetes.io/projected/4b641160-215b-4547-a820-d613c04d9348-kube-api-access-sf87r\") pod \"openstack-operator-controller-manager-bb4f97fd9-d7cs5\" (UID: \"4b641160-215b-4547-a820-d613c04d9348\") " pod="openstack-operators/openstack-operator-controller-manager-bb4f97fd9-d7cs5" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.818555 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgtwz\" (UniqueName: \"kubernetes.io/projected/492905c0-fe64-45b8-af6b-5d7373c3f71a-kube-api-access-fgtwz\") pod \"watcher-operator-controller-manager-646675d848-c8bnc\" (UID: \"492905c0-fe64-45b8-af6b-5d7373c3f71a\") " pod="openstack-operators/watcher-operator-controller-manager-646675d848-c8bnc" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.826062 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-585fc5b659-2f6bd" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.848856 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b7c96c4b-b0c5-4c82-a6ab-3878c394eab0-cert\") pod \"openstack-baremetal-operator-controller-manager-6cc7fb757dd6zq7\" (UID: \"b7c96c4b-b0c5-4c82-a6ab-3878c394eab0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757dd6zq7" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.848943 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwrdm\" (UniqueName: \"kubernetes.io/projected/f7d32fd1-190f-46ec-a313-b3c0b2c58556-kube-api-access-cwrdm\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-h959b\" (UID: \"f7d32fd1-190f-46ec-a313-b3c0b2c58556\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-h959b" Oct 13 17:37:13 crc kubenswrapper[4720]: E1013 17:37:13.849505 4720 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Oct 13 17:37:13 crc kubenswrapper[4720]: E1013 17:37:13.849557 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b7c96c4b-b0c5-4c82-a6ab-3878c394eab0-cert podName:b7c96c4b-b0c5-4c82-a6ab-3878c394eab0 nodeName:}" failed. No retries permitted until 2025-10-13 17:37:14.849538239 +0000 UTC m=+780.306788371 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b7c96c4b-b0c5-4c82-a6ab-3878c394eab0-cert") pod "openstack-baremetal-operator-controller-manager-6cc7fb757dd6zq7" (UID: "b7c96c4b-b0c5-4c82-a6ab-3878c394eab0") : secret "openstack-baremetal-operator-webhook-server-cert" not found Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.851602 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-646675d848-c8bnc" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.951138 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwrdm\" (UniqueName: \"kubernetes.io/projected/f7d32fd1-190f-46ec-a313-b3c0b2c58556-kube-api-access-cwrdm\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-h959b\" (UID: \"f7d32fd1-190f-46ec-a313-b3c0b2c58556\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-h959b" Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.986123 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-59cdc64769-nlnsl"] Oct 13 17:37:13 crc kubenswrapper[4720]: I1013 17:37:13.995955 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwrdm\" (UniqueName: \"kubernetes.io/projected/f7d32fd1-190f-46ec-a313-b3c0b2c58556-kube-api-access-cwrdm\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-h959b\" (UID: \"f7d32fd1-190f-46ec-a313-b3c0b2c58556\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-h959b" Oct 13 17:37:14 crc kubenswrapper[4720]: W1013 17:37:14.050312 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd81c88e6_1b2a_405d_861a_ca4b3baed83d.slice/crio-9e6ef21d7bde930849a3fe685ca505ea3b8717b94eb4341f65bc5f85034a3257 WatchSource:0}: Error finding container 9e6ef21d7bde930849a3fe685ca505ea3b8717b94eb4341f65bc5f85034a3257: Status 404 returned error can't find the container with id 9e6ef21d7bde930849a3fe685ca505ea3b8717b94eb4341f65bc5f85034a3257 Oct 13 17:37:14 crc kubenswrapper[4720]: I1013 17:37:14.081268 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-h959b" Oct 13 17:37:14 crc kubenswrapper[4720]: I1013 17:37:14.254776 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4b641160-215b-4547-a820-d613c04d9348-cert\") pod \"openstack-operator-controller-manager-bb4f97fd9-d7cs5\" (UID: \"4b641160-215b-4547-a820-d613c04d9348\") " pod="openstack-operators/openstack-operator-controller-manager-bb4f97fd9-d7cs5" Oct 13 17:37:14 crc kubenswrapper[4720]: E1013 17:37:14.254948 4720 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Oct 13 17:37:14 crc kubenswrapper[4720]: E1013 17:37:14.255004 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4b641160-215b-4547-a820-d613c04d9348-cert podName:4b641160-215b-4547-a820-d613c04d9348 nodeName:}" failed. No retries permitted until 2025-10-13 17:37:15.254987585 +0000 UTC m=+780.712237727 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4b641160-215b-4547-a820-d613c04d9348-cert") pod "openstack-operator-controller-manager-bb4f97fd9-d7cs5" (UID: "4b641160-215b-4547-a820-d613c04d9348") : secret "webhook-server-cert" not found Oct 13 17:37:14 crc kubenswrapper[4720]: I1013 17:37:14.397373 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-59cdc64769-nlnsl" event={"ID":"d81c88e6-1b2a-405d-861a-ca4b3baed83d","Type":"ContainerStarted","Data":"9e6ef21d7bde930849a3fe685ca505ea3b8717b94eb4341f65bc5f85034a3257"} Oct 13 17:37:14 crc kubenswrapper[4720]: I1013 17:37:14.398285 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-64f84fcdbb-bk68v" event={"ID":"d34e7c64-7562-4a1a-8d47-20b3bb785756","Type":"ContainerStarted","Data":"fbe9ebe8bbac51b884aac7f3757a6a78daed3b96c4ef713c6a23b4be5a736419"} Oct 13 17:37:14 crc kubenswrapper[4720]: I1013 17:37:14.427646 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-6d74794d9b-hvb85"] Oct 13 17:37:14 crc kubenswrapper[4720]: I1013 17:37:14.433474 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-6d9967f8dd-qvv82"] Oct 13 17:37:14 crc kubenswrapper[4720]: I1013 17:37:14.443863 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-687df44cdb-ldlvt"] Oct 13 17:37:14 crc kubenswrapper[4720]: W1013 17:37:14.448662 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc25871a6_cdf1_49c1_8d51_ab4fb186fa83.slice/crio-63e55423d764c786bf725b99f63f8034753c628d0eff13f27d05f1b8345fa074 WatchSource:0}: Error finding container 63e55423d764c786bf725b99f63f8034753c628d0eff13f27d05f1b8345fa074: Status 404 returned error can't find the container with id 63e55423d764c786bf725b99f63f8034753c628d0eff13f27d05f1b8345fa074 Oct 13 17:37:14 crc kubenswrapper[4720]: I1013 17:37:14.449562 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-7bb46cd7d-v7vx2"] Oct 13 17:37:14 crc kubenswrapper[4720]: I1013 17:37:14.607424 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-ddb98f99b-qqmcw"] Oct 13 17:37:14 crc kubenswrapper[4720]: I1013 17:37:14.611376 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-59578bc799-mhlvn"] Oct 13 17:37:14 crc kubenswrapper[4720]: I1013 17:37:14.625692 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-74cb5cbc49-bxtwp"] Oct 13 17:37:14 crc kubenswrapper[4720]: W1013 17:37:14.629803 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod32b53ed3_af12_4a7d_b371_eab8aa1ab1bb.slice/crio-9b78edd98529092d8d1ff1c27772d599dd25ee3976a101ec835a74e62fc711df WatchSource:0}: Error finding container 9b78edd98529092d8d1ff1c27772d599dd25ee3976a101ec835a74e62fc711df: Status 404 returned error can't find the container with id 9b78edd98529092d8d1ff1c27772d599dd25ee3976a101ec835a74e62fc711df Oct 13 17:37:14 crc kubenswrapper[4720]: W1013 17:37:14.645787 4720 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5a86eb76_3453_4f0e_8529_c877f739d822.slice/crio-dff28b34e1fe020fc173d9fc56be16996a145ff376e7e69278419edffc25a572 WatchSource:0}: Error finding container dff28b34e1fe020fc173d9fc56be16996a145ff376e7e69278419edffc25a572: Status 404 returned error can't find the container with id dff28b34e1fe020fc173d9fc56be16996a145ff376e7e69278419edffc25a572 Oct 13 17:37:14 crc kubenswrapper[4720]: I1013 17:37:14.864459 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b7c96c4b-b0c5-4c82-a6ab-3878c394eab0-cert\") pod \"openstack-baremetal-operator-controller-manager-6cc7fb757dd6zq7\" (UID: \"b7c96c4b-b0c5-4c82-a6ab-3878c394eab0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757dd6zq7" Oct 13 17:37:14 crc kubenswrapper[4720]: I1013 17:37:14.871109 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b7c96c4b-b0c5-4c82-a6ab-3878c394eab0-cert\") pod \"openstack-baremetal-operator-controller-manager-6cc7fb757dd6zq7\" (UID: \"b7c96c4b-b0c5-4c82-a6ab-3878c394eab0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757dd6zq7" Oct 13 17:37:15 crc kubenswrapper[4720]: I1013 17:37:15.078718 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757dd6zq7" Oct 13 17:37:15 crc kubenswrapper[4720]: I1013 17:37:15.269404 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4b641160-215b-4547-a820-d613c04d9348-cert\") pod \"openstack-operator-controller-manager-bb4f97fd9-d7cs5\" (UID: \"4b641160-215b-4547-a820-d613c04d9348\") " pod="openstack-operators/openstack-operator-controller-manager-bb4f97fd9-d7cs5" Oct 13 17:37:15 crc kubenswrapper[4720]: I1013 17:37:15.286630 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4b641160-215b-4547-a820-d613c04d9348-cert\") pod \"openstack-operator-controller-manager-bb4f97fd9-d7cs5\" (UID: \"4b641160-215b-4547-a820-d613c04d9348\") " pod="openstack-operators/openstack-operator-controller-manager-bb4f97fd9-d7cs5" Oct 13 17:37:15 crc kubenswrapper[4720]: W1013 17:37:15.331886 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20db86ae_f595_4a1d_b000_c97df02b65af.slice/crio-c63158969c44adab972dad4ba8528b4826172b21a3afaef4603b3d876542a5aa WatchSource:0}: Error finding container c63158969c44adab972dad4ba8528b4826172b21a3afaef4603b3d876542a5aa: Status 404 returned error can't find the container with id c63158969c44adab972dad4ba8528b4826172b21a3afaef4603b3d876542a5aa Oct 13 17:37:15 crc kubenswrapper[4720]: I1013 17:37:15.372153 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6d7c7ddf95-4jfq5"] Oct 13 17:37:15 crc kubenswrapper[4720]: I1013 17:37:15.393916 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-h959b"] Oct 13 17:37:15 crc kubenswrapper[4720]: I1013 17:37:15.403019 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-578874c84d-62vnr"] Oct 13 17:37:15 crc kubenswrapper[4720]: 
I1013 17:37:15.407646 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-664664cb68-xwn8g"] Oct 13 17:37:15 crc kubenswrapper[4720]: W1013 17:37:15.410646 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podccb8b109_9d10_48f5_b8ce_65ad05b5e1a4.slice/crio-030cf229fc0d3f25ee5641a7ab4192beb2dd0a90b0f82a8693d88191316277fa WatchSource:0}: Error finding container 030cf229fc0d3f25ee5641a7ab4192beb2dd0a90b0f82a8693d88191316277fa: Status 404 returned error can't find the container with id 030cf229fc0d3f25ee5641a7ab4192beb2dd0a90b0f82a8693d88191316277fa Oct 13 17:37:15 crc kubenswrapper[4720]: E1013 17:37:15.410837 4720 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:b2e9acf568a48c28cf2aed6012e432eeeb7d5f0eb11878fc91b62bc34cba10cd,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-g9fmg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-57bb74c7bf-dkbrd_openstack-operators(246b649b-7481-433e-aaf0-30cebf5543d8): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 13 17:37:15 crc kubenswrapper[4720]: I1013 17:37:15.418818 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-646675d848-c8bnc"] Oct 13 17:37:15 crc kubenswrapper[4720]: E1013 17:37:15.420284 4720 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:33652e75a03a058769019fe8d8c51585a6eeefef5e1ecb96f9965434117954f2,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dqsmc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-797d478b46-m44cs_openstack-operators(ccb8b109-9d10-48f5-b8ce-65ad05b5e1a4): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 13 17:37:15 crc kubenswrapper[4720]: I1013 17:37:15.428978 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-687df44cdb-ldlvt" event={"ID":"c25871a6-cdf1-49c1-8d51-ab4fb186fa83","Type":"ContainerStarted","Data":"63e55423d764c786bf725b99f63f8034753c628d0eff13f27d05f1b8345fa074"} Oct 13 17:37:15 crc kubenswrapper[4720]: I1013 17:37:15.431662 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-57bb74c7bf-dkbrd"] Oct 13 17:37:15 crc kubenswrapper[4720]: I1013 17:37:15.432164 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-h959b" event={"ID":"f7d32fd1-190f-46ec-a313-b3c0b2c58556","Type":"ContainerStarted","Data":"38a0b1638ea06f7d435ce476fbe075ed961c5b734e0b2ff3b6027146681e6050"} Oct 13 17:37:15 crc kubenswrapper[4720]: I1013 17:37:15.434707 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-7bb46cd7d-v7vx2" 
event={"ID":"a9b388de-4993-46d1-86db-ac92a9df4f2f","Type":"ContainerStarted","Data":"a97eb5b0f94b3edff0a57c5d7cf79c75f63676fcdebe4605ac612a285b1caf13"} Oct 13 17:37:15 crc kubenswrapper[4720]: I1013 17:37:15.435273 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-869cc7797f-gzwxn"] Oct 13 17:37:15 crc kubenswrapper[4720]: I1013 17:37:15.437075 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-59578bc799-mhlvn" event={"ID":"284cd6cf-5985-4cad-a31c-f91f3c2098c6","Type":"ContainerStarted","Data":"e4ac0d5d2f75f25afecb237bf8de107217753081078c29807e5273b0b391e9dc"} Oct 13 17:37:15 crc kubenswrapper[4720]: I1013 17:37:15.439680 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-74cb5cbc49-bxtwp" event={"ID":"5a86eb76-3453-4f0e-8529-c877f739d822","Type":"ContainerStarted","Data":"dff28b34e1fe020fc173d9fc56be16996a145ff376e7e69278419edffc25a572"} Oct 13 17:37:15 crc kubenswrapper[4720]: I1013 17:37:15.440046 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-797d478b46-m44cs"] Oct 13 17:37:15 crc kubenswrapper[4720]: I1013 17:37:15.441419 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-578874c84d-62vnr" event={"ID":"20db86ae-f595-4a1d-b000-c97df02b65af","Type":"ContainerStarted","Data":"c63158969c44adab972dad4ba8528b4826172b21a3afaef4603b3d876542a5aa"} Oct 13 17:37:15 crc kubenswrapper[4720]: I1013 17:37:15.443719 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-5777b4f897-mpdxw"] Oct 13 17:37:15 crc kubenswrapper[4720]: I1013 17:37:15.443862 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-869cc7797f-gzwxn" event={"ID":"487124d6-9dcd-4173-8f78-2dbf29cafe87","Type":"ContainerStarted","Data":"8214a4d08a3e6e8434dc84a28d9d3a1a70f793cd71ae43fcca22b1ecbd8139ce"} Oct 13 17:37:15 crc kubenswrapper[4720]: I1013 17:37:15.445445 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-rmpw6" Oct 13 17:37:15 crc kubenswrapper[4720]: I1013 17:37:15.447220 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-646675d848-c8bnc" event={"ID":"492905c0-fe64-45b8-af6b-5d7373c3f71a","Type":"ContainerStarted","Data":"e5731d60fc72b7ee2f2fa54f8f639e364158e4db19a629aa60693da28627f374"} Oct 13 17:37:15 crc kubenswrapper[4720]: I1013 17:37:15.449211 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-ddb98f99b-qqmcw" event={"ID":"32b53ed3-af12-4a7d-b371-eab8aa1ab1bb","Type":"ContainerStarted","Data":"9b78edd98529092d8d1ff1c27772d599dd25ee3976a101ec835a74e62fc711df"} Oct 13 17:37:15 crc kubenswrapper[4720]: I1013 17:37:15.452084 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-6d9967f8dd-qvv82" event={"ID":"771ce8c4-ac65-4db7-bc56-a8b7cb2f1448","Type":"ContainerStarted","Data":"bf5ec3cb63a096aeb823cd0ab950c35c13578d79363aad7b293b80f63a07397e"} Oct 13 17:37:15 crc kubenswrapper[4720]: I1013 17:37:15.454453 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-bb4f97fd9-d7cs5" Oct 13 17:37:15 crc kubenswrapper[4720]: W1013 17:37:15.455595 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45c8f080_0f28_47b5_80df_e1877c3f77bb.slice/crio-c2003bd092975cc58319a2456e803df59362c274f5801c3c8d6f06642913e12a WatchSource:0}: Error finding container c2003bd092975cc58319a2456e803df59362c274f5801c3c8d6f06642913e12a: Status 404 returned error can't find the container with id c2003bd092975cc58319a2456e803df59362c274f5801c3c8d6f06642913e12a Oct 13 17:37:15 crc kubenswrapper[4720]: I1013 17:37:15.458850 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-ffcdd6c94-xm8vk"] Oct 13 17:37:15 crc kubenswrapper[4720]: I1013 17:37:15.462933 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6d7c7ddf95-4jfq5" event={"ID":"60fc7b7f-c85a-4a6d-8de9-e8e9e8df8ada","Type":"ContainerStarted","Data":"07cf87b96fa696701add7a36445097904d47666bcf49dd898e7af04347e0a919"} Oct 13 17:37:15 crc kubenswrapper[4720]: W1013 17:37:15.463213 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod935d79c8_281f_4ad8_8c6d_404c0e89653e.slice/crio-0669afd674cb085ca3417940cced630fa32f08635a2317f8baa96cea3b79cfe8 WatchSource:0}: Error finding container 0669afd674cb085ca3417940cced630fa32f08635a2317f8baa96cea3b79cfe8: Status 404 returned error can't find the container with id 0669afd674cb085ca3417940cced630fa32f08635a2317f8baa96cea3b79cfe8 Oct 13 17:37:15 crc kubenswrapper[4720]: I1013 17:37:15.465083 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-6d74794d9b-hvb85" event={"ID":"d266783d-75ba-4864-af5d-4f2b8702c6a9","Type":"ContainerStarted","Data":"e5da7dc0cfe1d54d6a0ad499ccb279eb709e8d8cb69f839adf50032c3b16958e"} Oct 13 17:37:15 crc kubenswrapper[4720]: I1013 17:37:15.467044 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-664664cb68-xwn8g" event={"ID":"2eab29c4-2ebe-4f71-af0d-df5f0d113f66","Type":"ContainerStarted","Data":"b357981a51cfd8e3cdb86f5eda4ae5d742ba7d2e95b2b77d88cd4a2486ad77b2"} Oct 13 17:37:15 crc kubenswrapper[4720]: I1013 17:37:15.485103 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-585fc5b659-2f6bd"] Oct 13 17:37:15 crc kubenswrapper[4720]: I1013 17:37:15.495753 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f4d5dfdc6-g4th9"] Oct 13 17:37:15 crc kubenswrapper[4720]: E1013 17:37:15.497164 4720 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/infra-operator@sha256:5cfb2ae1092445950b39dd59caa9a8c9367f42fb8353a8c3848d3bc729f24492,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{600 -3} {} 600m 
DecimalSI},memory: {{2147483648 0} {} 2Gi BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{536870912 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tlfjr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infra-operator-controller-manager-585fc5b659-2f6bd_openstack-operators(45c8f080-0f28-47b5-80df-e1877c3f77bb): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 13 17:37:15 crc kubenswrapper[4720]: E1013 17:37:15.497303 4720 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:7e584b1c430441c8b6591dadeff32e065de8a185ad37ef90d2e08d37e59aab4a,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zbc76,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-ffcdd6c94-xm8vk_openstack-operators(5c9d42bc-4b65-42f2-beda-164c7c5ba3e2): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 13 17:37:15 crc kubenswrapper[4720]: I1013 17:37:15.637014 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757dd6zq7"] Oct 13 17:37:15 crc kubenswrapper[4720]: E1013 17:37:15.902392 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/nova-operator-controller-manager-57bb74c7bf-dkbrd" podUID="246b649b-7481-433e-aaf0-30cebf5543d8" Oct 13 17:37:15 crc kubenswrapper[4720]: E1013 17:37:15.936143 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/infra-operator-controller-manager-585fc5b659-2f6bd" podUID="45c8f080-0f28-47b5-80df-e1877c3f77bb" Oct 13 17:37:15 crc kubenswrapper[4720]: E1013 17:37:15.958565 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-ffcdd6c94-xm8vk" podUID="5c9d42bc-4b65-42f2-beda-164c7c5ba3e2" Oct 13 17:37:16 crc kubenswrapper[4720]: E1013 17:37:16.022739 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/neutron-operator-controller-manager-797d478b46-m44cs" podUID="ccb8b109-9d10-48f5-b8ce-65ad05b5e1a4" Oct 13 17:37:16 crc kubenswrapper[4720]: I1013 17:37:16.158032 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-bb4f97fd9-d7cs5"] Oct 13 17:37:16 crc kubenswrapper[4720]: I1013 17:37:16.496578 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-bb4f97fd9-d7cs5" event={"ID":"4b641160-215b-4547-a820-d613c04d9348","Type":"ContainerStarted","Data":"606b6c96834122b73032c6a7eda13e42454e8053c86ab961a07fbe7172d06818"} Oct 13 17:37:16 crc kubenswrapper[4720]: I1013 17:37:16.496621 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-bb4f97fd9-d7cs5" event={"ID":"4b641160-215b-4547-a820-d613c04d9348","Type":"ContainerStarted","Data":"54fc3c61ab2899007da341eba8ceeced1f6088b4ab3e6f79933780809ffe05b8"} Oct 13 17:37:16 crc kubenswrapper[4720]: I1013 17:37:16.512132 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
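
The ErrImagePull: "pull QPS exceeded" failures above are not registry errors: the kubelet rate-limits its own image pulls with a token bucket configured by the KubeletConfiguration fields registryPullQPS (default 5) and registryBurst (default 10). When a whole batch of operator pods lands at once, as here, the burst is spent and further CRI PullImage calls are rejected locally before any network traffic happens. A sketch of the throttling pattern, assuming client-go's flowcontrol package (the helper name throttledPull and the image names are mine):

    package main

    import (
        "errors"
        "fmt"

        "k8s.io/client-go/util/flowcontrol"
    )

    var errPullQPSExceeded = errors.New("pull QPS exceeded")

    // throttledPull rejects a pull immediately when the token bucket is
    // empty, which is the behavior behind the ErrImagePull entries above.
    func throttledPull(limiter flowcontrol.RateLimiter, image string) error {
        if !limiter.TryAccept() { // non-blocking: no token, no pull
            return errPullQPSExceeded
        }
        fmt.Println("pulling", image) // a real kubelet issues the CRI PullImage RPC here
        return nil
    }

    func main() {
        // kubelet defaults: registryPullQPS=5, registryBurst=10
        limiter := flowcontrol.NewTokenBucketRateLimiter(5, 10)
        for i := 0; i < 12; i++ {
            if err := throttledPull(limiter, fmt.Sprintf("quay.io/example/operator-%d", i)); err != nil {
                fmt.Println("image", i, "failed:", err)
            }
        }
    }
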
pod="openstack-operators/test-operator-controller-manager-ffcdd6c94-xm8vk" event={"ID":"5c9d42bc-4b65-42f2-beda-164c7c5ba3e2","Type":"ContainerStarted","Data":"f56b0634ad394a745d5103ef5bf2d3f00eae86bd58456b309149a03c92da277d"} Oct 13 17:37:16 crc kubenswrapper[4720]: I1013 17:37:16.512165 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-ffcdd6c94-xm8vk" event={"ID":"5c9d42bc-4b65-42f2-beda-164c7c5ba3e2","Type":"ContainerStarted","Data":"b3043785ab409c99b524a35db40b33fb7801c110f7b9e7c352adb54dddb4c59c"} Oct 13 17:37:16 crc kubenswrapper[4720]: E1013 17:37:16.514020 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:7e584b1c430441c8b6591dadeff32e065de8a185ad37ef90d2e08d37e59aab4a\\\"\"" pod="openstack-operators/test-operator-controller-manager-ffcdd6c94-xm8vk" podUID="5c9d42bc-4b65-42f2-beda-164c7c5ba3e2" Oct 13 17:37:16 crc kubenswrapper[4720]: I1013 17:37:16.530372 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-585fc5b659-2f6bd" event={"ID":"45c8f080-0f28-47b5-80df-e1877c3f77bb","Type":"ContainerStarted","Data":"ebafd5e8796a8b0d073ed5df392da5d310508f7f7f55ae01968e0d13a717ff8c"} Oct 13 17:37:16 crc kubenswrapper[4720]: I1013 17:37:16.530420 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-585fc5b659-2f6bd" event={"ID":"45c8f080-0f28-47b5-80df-e1877c3f77bb","Type":"ContainerStarted","Data":"c2003bd092975cc58319a2456e803df59362c274f5801c3c8d6f06642913e12a"} Oct 13 17:37:16 crc kubenswrapper[4720]: E1013 17:37:16.536067 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/infra-operator@sha256:5cfb2ae1092445950b39dd59caa9a8c9367f42fb8353a8c3848d3bc729f24492\\\"\"" pod="openstack-operators/infra-operator-controller-manager-585fc5b659-2f6bd" podUID="45c8f080-0f28-47b5-80df-e1877c3f77bb" Oct 13 17:37:16 crc kubenswrapper[4720]: I1013 17:37:16.537854 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f4d5dfdc6-g4th9" event={"ID":"935d79c8-281f-4ad8-8c6d-404c0e89653e","Type":"ContainerStarted","Data":"0669afd674cb085ca3417940cced630fa32f08635a2317f8baa96cea3b79cfe8"} Oct 13 17:37:16 crc kubenswrapper[4720]: I1013 17:37:16.548803 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757dd6zq7" event={"ID":"b7c96c4b-b0c5-4c82-a6ab-3878c394eab0","Type":"ContainerStarted","Data":"22a6f9ffeeb90cdc297f9304d49a39b0b23428523538b837c5bb1da88b532899"} Oct 13 17:37:16 crc kubenswrapper[4720]: I1013 17:37:16.555364 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-797d478b46-m44cs" event={"ID":"ccb8b109-9d10-48f5-b8ce-65ad05b5e1a4","Type":"ContainerStarted","Data":"ae19736df48d2d66357bda04a81f87a12f22cd3bd0ebf620fd081b88ac40bd69"} Oct 13 17:37:16 crc kubenswrapper[4720]: I1013 17:37:16.555407 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-797d478b46-m44cs" 
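
Once a pull fails, the kubelet places that image/pod pair into an exponential back-off, and subsequent sync attempts report ImagePullBackOff ("Back-off pulling image ...") until the window expires; in the kubelet's defaults the delay starts at 10s and doubles up to a 5-minute cap. A sketch of that bookkeeping with client-go's flowcontrol.Backoff (the key format here is illustrative; the kubelet keys on the container, image, and pod):

    package main

    import (
        "fmt"
        "time"

        "k8s.io/client-go/util/flowcontrol"
    )

    func main() {
        // 10s initial delay doubling to a 300s cap, matching the kubelet's
        // image pull back-off parameters.
        backoff := flowcontrol.NewBackOff(10*time.Second, 300*time.Second)
        key := "neutron-operator-controller-manager-797d478b46-m44cs/manager" // illustrative key

        for attempt := 1; attempt <= 4; attempt++ {
            now := time.Now()
            if backoff.IsInBackOffSinceUpdate(key, now) {
                fmt.Printf("attempt %d: ImagePullBackOff, current delay %s\n", attempt, backoff.Get(key))
                continue
            }
            backoff.Next(key, now) // record another failure: doubles the stored delay
            fmt.Printf("attempt %d: pull tried and failed, next delay %s\n", attempt, backoff.Get(key))
        }
    }
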
event={"ID":"ccb8b109-9d10-48f5-b8ce-65ad05b5e1a4","Type":"ContainerStarted","Data":"030cf229fc0d3f25ee5641a7ab4192beb2dd0a90b0f82a8693d88191316277fa"} Oct 13 17:37:16 crc kubenswrapper[4720]: E1013 17:37:16.556715 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:33652e75a03a058769019fe8d8c51585a6eeefef5e1ecb96f9965434117954f2\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-797d478b46-m44cs" podUID="ccb8b109-9d10-48f5-b8ce-65ad05b5e1a4" Oct 13 17:37:16 crc kubenswrapper[4720]: I1013 17:37:16.560827 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-5777b4f897-mpdxw" event={"ID":"30a28fe6-1905-48df-ab2d-b9d92eaf940e","Type":"ContainerStarted","Data":"b7ce769724ba2348858e416e48227fea3b29a8f58ba5824b127f84c2a767a8d1"} Oct 13 17:37:16 crc kubenswrapper[4720]: I1013 17:37:16.584299 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-57bb74c7bf-dkbrd" event={"ID":"246b649b-7481-433e-aaf0-30cebf5543d8","Type":"ContainerStarted","Data":"15ef84ddcc1a30b2aa8ad65dd9d17e2dec239852397e3be92963c8c7597378f5"} Oct 13 17:37:16 crc kubenswrapper[4720]: I1013 17:37:16.584338 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-57bb74c7bf-dkbrd" event={"ID":"246b649b-7481-433e-aaf0-30cebf5543d8","Type":"ContainerStarted","Data":"aa4b10191991accd95d8f6ea3a3640cfb2e313b84ec7766e7ecd9bbb8d6201b0"} Oct 13 17:37:16 crc kubenswrapper[4720]: E1013 17:37:16.594869 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:b2e9acf568a48c28cf2aed6012e432eeeb7d5f0eb11878fc91b62bc34cba10cd\\\"\"" pod="openstack-operators/nova-operator-controller-manager-57bb74c7bf-dkbrd" podUID="246b649b-7481-433e-aaf0-30cebf5543d8" Oct 13 17:37:17 crc kubenswrapper[4720]: E1013 17:37:17.603879 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:7e584b1c430441c8b6591dadeff32e065de8a185ad37ef90d2e08d37e59aab4a\\\"\"" pod="openstack-operators/test-operator-controller-manager-ffcdd6c94-xm8vk" podUID="5c9d42bc-4b65-42f2-beda-164c7c5ba3e2" Oct 13 17:37:17 crc kubenswrapper[4720]: E1013 17:37:17.604556 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:b2e9acf568a48c28cf2aed6012e432eeeb7d5f0eb11878fc91b62bc34cba10cd\\\"\"" pod="openstack-operators/nova-operator-controller-manager-57bb74c7bf-dkbrd" podUID="246b649b-7481-433e-aaf0-30cebf5543d8" Oct 13 17:37:17 crc kubenswrapper[4720]: E1013 17:37:17.609657 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/infra-operator@sha256:5cfb2ae1092445950b39dd59caa9a8c9367f42fb8353a8c3848d3bc729f24492\\\"\"" pod="openstack-operators/infra-operator-controller-manager-585fc5b659-2f6bd" podUID="45c8f080-0f28-47b5-80df-e1877c3f77bb" Oct 13 
17:37:17 crc kubenswrapper[4720]: E1013 17:37:17.615408 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:33652e75a03a058769019fe8d8c51585a6eeefef5e1ecb96f9965434117954f2\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-797d478b46-m44cs" podUID="ccb8b109-9d10-48f5-b8ce-65ad05b5e1a4" Oct 13 17:37:19 crc kubenswrapper[4720]: I1013 17:37:19.632451 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-bb4f97fd9-d7cs5" event={"ID":"4b641160-215b-4547-a820-d613c04d9348","Type":"ContainerStarted","Data":"d8e2aebe47f8941341e7c5f72e52b0b2857e12dfb04462ba8a6910af2eec332d"} Oct 13 17:37:19 crc kubenswrapper[4720]: I1013 17:37:19.632860 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-bb4f97fd9-d7cs5" Oct 13 17:37:19 crc kubenswrapper[4720]: I1013 17:37:19.674758 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-bb4f97fd9-d7cs5" podStartSLOduration=6.674730747 podStartE2EDuration="6.674730747s" podCreationTimestamp="2025-10-13 17:37:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:37:19.668059196 +0000 UTC m=+785.125309328" watchObservedRunningTime="2025-10-13 17:37:19.674730747 +0000 UTC m=+785.131980919" Oct 13 17:37:20 crc kubenswrapper[4720]: I1013 17:37:20.937952 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-kkvnm"] Oct 13 17:37:20 crc kubenswrapper[4720]: I1013 17:37:20.939878 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kkvnm" Oct 13 17:37:20 crc kubenswrapper[4720]: I1013 17:37:20.961251 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kkvnm"] Oct 13 17:37:21 crc kubenswrapper[4720]: I1013 17:37:21.088841 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77892\" (UniqueName: \"kubernetes.io/projected/ddda9c4b-88a0-4d21-9edd-6ca1ac64ab1a-kube-api-access-77892\") pod \"redhat-marketplace-kkvnm\" (UID: \"ddda9c4b-88a0-4d21-9edd-6ca1ac64ab1a\") " pod="openshift-marketplace/redhat-marketplace-kkvnm" Oct 13 17:37:21 crc kubenswrapper[4720]: I1013 17:37:21.088974 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddda9c4b-88a0-4d21-9edd-6ca1ac64ab1a-utilities\") pod \"redhat-marketplace-kkvnm\" (UID: \"ddda9c4b-88a0-4d21-9edd-6ca1ac64ab1a\") " pod="openshift-marketplace/redhat-marketplace-kkvnm" Oct 13 17:37:21 crc kubenswrapper[4720]: I1013 17:37:21.089037 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddda9c4b-88a0-4d21-9edd-6ca1ac64ab1a-catalog-content\") pod \"redhat-marketplace-kkvnm\" (UID: \"ddda9c4b-88a0-4d21-9edd-6ca1ac64ab1a\") " pod="openshift-marketplace/redhat-marketplace-kkvnm" Oct 13 17:37:21 crc kubenswrapper[4720]: I1013 17:37:21.190247 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77892\" (UniqueName: \"kubernetes.io/projected/ddda9c4b-88a0-4d21-9edd-6ca1ac64ab1a-kube-api-access-77892\") pod \"redhat-marketplace-kkvnm\" (UID: \"ddda9c4b-88a0-4d21-9edd-6ca1ac64ab1a\") " pod="openshift-marketplace/redhat-marketplace-kkvnm" Oct 13 17:37:21 crc kubenswrapper[4720]: I1013 17:37:21.190299 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddda9c4b-88a0-4d21-9edd-6ca1ac64ab1a-utilities\") pod \"redhat-marketplace-kkvnm\" (UID: \"ddda9c4b-88a0-4d21-9edd-6ca1ac64ab1a\") " pod="openshift-marketplace/redhat-marketplace-kkvnm" Oct 13 17:37:21 crc kubenswrapper[4720]: I1013 17:37:21.190351 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddda9c4b-88a0-4d21-9edd-6ca1ac64ab1a-catalog-content\") pod \"redhat-marketplace-kkvnm\" (UID: \"ddda9c4b-88a0-4d21-9edd-6ca1ac64ab1a\") " pod="openshift-marketplace/redhat-marketplace-kkvnm" Oct 13 17:37:21 crc kubenswrapper[4720]: I1013 17:37:21.190820 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddda9c4b-88a0-4d21-9edd-6ca1ac64ab1a-catalog-content\") pod \"redhat-marketplace-kkvnm\" (UID: \"ddda9c4b-88a0-4d21-9edd-6ca1ac64ab1a\") " pod="openshift-marketplace/redhat-marketplace-kkvnm" Oct 13 17:37:21 crc kubenswrapper[4720]: I1013 17:37:21.191357 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddda9c4b-88a0-4d21-9edd-6ca1ac64ab1a-utilities\") pod \"redhat-marketplace-kkvnm\" (UID: \"ddda9c4b-88a0-4d21-9edd-6ca1ac64ab1a\") " pod="openshift-marketplace/redhat-marketplace-kkvnm" Oct 13 17:37:21 crc kubenswrapper[4720]: I1013 17:37:21.212011 4720 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-77892\" (UniqueName: \"kubernetes.io/projected/ddda9c4b-88a0-4d21-9edd-6ca1ac64ab1a-kube-api-access-77892\") pod \"redhat-marketplace-kkvnm\" (UID: \"ddda9c4b-88a0-4d21-9edd-6ca1ac64ab1a\") " pod="openshift-marketplace/redhat-marketplace-kkvnm" Oct 13 17:37:21 crc kubenswrapper[4720]: I1013 17:37:21.270018 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kkvnm" Oct 13 17:37:25 crc kubenswrapper[4720]: I1013 17:37:25.464744 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-bb4f97fd9-d7cs5" Oct 13 17:37:27 crc kubenswrapper[4720]: E1013 17:37:27.611766 4720 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:a17fc270857869fd1efe5020b2a1cb8c2abbd838f08de88f3a6a59e8754ec351" Oct 13 17:37:27 crc kubenswrapper[4720]: E1013 17:37:27.612488 4720 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:a17fc270857869fd1efe5020b2a1cb8c2abbd838f08de88f3a6a59e8754ec351,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-baremetal-operator-agent:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_ANSIBLEEE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_EVALUATOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-evaluator:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-listener:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_NOTIFIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-notifier:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_APACHE_IMAGE_URL_DEFAULT,Value:registry.redhat.io/ubi9/httpd-24:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_KEYSTONE_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-keystone-listener:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOM
ETER_IPMI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_MYSQLD_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/mysqld-exporter:v0.15.1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_NOTIFICATION_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-notification:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_SGCORE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/sg-core:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_BACKUP_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-backup:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_VOLUME_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-volume:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_BACKENDBIND9_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-backend-bind9:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-central:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_MDNS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-mdns:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_PRODUCER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-producer:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_UNBOUND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-unbound:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_FRR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-frr:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_ISCSID_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-iscsid:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_KEPLER_IMAGE_URL_DEFAULT,Value:quay.io/sustainable_computing_io/kepler:release-0.7.12,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_LOGROTATE_CROND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cron:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_MULTIPATHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-multipathd:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_DHCP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_METADATA_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_OVN_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-ovn-agent:current-podified,ValueFro
m:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_SRIOV_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NODE_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/node-exporter:v1.5.0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_OVN_BGP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-bgp-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_PODMAN_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/navidys/prometheus-podman-exporter:v1.10.1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_GLANCE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_CFNAPI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api-cfn:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HORIZON_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_MEMCACHED_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_REDIS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-redis:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-conductor:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_INSPECTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-inspector:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_NEUTRON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-neutron-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PXE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-pxe:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PYTHON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/ironic-python-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KEYSTONE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-keystone:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KSM_IMAGE_URL_DEFAULT,Value:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SHARE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-share:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MARIADB_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NET_UTILS_IMAGE_URL_DEFAULT,Value:quay.io/podified-ante
lope-centos9/openstack-netutils:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NEUTRON_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_NOVNC_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-novncproxy:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HEALTHMANAGER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-health-manager:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HOUSEKEEPING_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-housekeeping:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_RSYSLOG_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rsyslog:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_CLIENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_MUST_GATHER_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-must-gather:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_NETWORK_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OS_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/edpm-hardened-uefi:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_OVS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NORTHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-northd:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_SB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PLACEMENT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_RABBITMQ_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_ACCOUNT_IMAGE_URL_DEFAULT,Value:quay.i
o/podified-antelope-centos9/openstack-swift-account:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-container:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_OBJECT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-object:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_PROXY_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-proxy-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_TEST_TEMPEST_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_APPLIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-applier:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_DECISION_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-decision-engine:current-podified,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d2cc4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-baremetal-operator-controller-manager-6cc7fb757dd6zq7_openstack-operators(b7c96c4b-b0c5-4c82-a6ab-3878c394eab0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 13 17:37:28 crc kubenswrapper[4720]: E1013 17:37:28.156305 4720 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
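
The rpc error: code = Canceled desc = copying config: context canceled pulls above are a different failure mode from the QPS rejections earlier: here the CRI PullImage RPC did start, and the runtime was mid-copy when the request context was canceled on the client side (typically the kubelet giving up on that sync attempt), so gRPC surfaces codes.Canceled. The pattern, sketched with a stand-in puller (pullImage and the timings are hypothetical):

    package main

    import (
        "context"
        "errors"
        "fmt"
        "time"
    )

    // pullImage stands in for the CRI PullImage RPC: a long-running call
    // that honors context cancellation, as a gRPC client call would.
    func pullImage(ctx context.Context, image string) error {
        select {
        case <-time.After(5 * time.Second): // simulated copy of layers and config
            return nil
        case <-ctx.Done():
            return fmt.Errorf("copying config: %w", ctx.Err())
        }
    }

    func main() {
        ctx, cancel := context.WithCancel(context.Background())
        go func() {
            time.Sleep(100 * time.Millisecond)
            cancel() // e.g. the caller abandons this sync attempt mid-pull
        }()
        err := pullImage(ctx, "quay.io/example/operator:latest")
        fmt.Println(err, "| canceled:", errors.Is(err, context.Canceled))
    }
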
image="quay.io/openstack-k8s-operators/swift-operator@sha256:4b4a17fe08ce00e375afaaec6a28835f5c1784f03d11c4558376ac04130f3a9e" Oct 13 17:37:28 crc kubenswrapper[4720]: E1013 17:37:28.156496 4720 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:4b4a17fe08ce00e375afaaec6a28835f5c1784f03d11c4558376ac04130f3a9e,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xs4t6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-5f4d5dfdc6-g4th9_openstack-operators(935d79c8-281f-4ad8-8c6d-404c0e89653e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 13 17:37:28 crc kubenswrapper[4720]: E1013 17:37:28.685042 4720 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Oct 13 17:37:28 crc kubenswrapper[4720]: E1013 17:37:28.685260 4720 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cwrdm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-5f97d8c699-h959b_openstack-operators(f7d32fd1-190f-46ec-a313-b3c0b2c58556): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 13 17:37:28 crc kubenswrapper[4720]: E1013 17:37:28.686514 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-h959b" podUID="f7d32fd1-190f-46ec-a313-b3c0b2c58556" Oct 13 17:37:28 crc kubenswrapper[4720]: E1013 17:37:28.707512 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-h959b" podUID="f7d32fd1-190f-46ec-a313-b3c0b2c58556" Oct 13 17:37:28 crc kubenswrapper[4720]: E1013 17:37:28.957208 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757dd6zq7" podUID="b7c96c4b-b0c5-4c82-a6ab-3878c394eab0" Oct 13 17:37:29 crc kubenswrapper[4720]: E1013 17:37:29.000164 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context 
canceled\"" pod="openstack-operators/swift-operator-controller-manager-5f4d5dfdc6-g4th9" podUID="935d79c8-281f-4ad8-8c6d-404c0e89653e" Oct 13 17:37:29 crc kubenswrapper[4720]: I1013 17:37:29.273542 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kkvnm"] Oct 13 17:37:29 crc kubenswrapper[4720]: W1013 17:37:29.287691 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podddda9c4b_88a0_4d21_9edd_6ca1ac64ab1a.slice/crio-48a4406537690bc54dff92da548c54b9f928f2903f73570a355b43a15c21dadd WatchSource:0}: Error finding container 48a4406537690bc54dff92da548c54b9f928f2903f73570a355b43a15c21dadd: Status 404 returned error can't find the container with id 48a4406537690bc54dff92da548c54b9f928f2903f73570a355b43a15c21dadd Oct 13 17:37:29 crc kubenswrapper[4720]: I1013 17:37:29.729837 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-578874c84d-62vnr" event={"ID":"20db86ae-f595-4a1d-b000-c97df02b65af","Type":"ContainerStarted","Data":"a911f0d2bbecd4fc91e294864b185e802f6e73b3538783fa48755ca3198ed1d7"} Oct 13 17:37:29 crc kubenswrapper[4720]: I1013 17:37:29.752715 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-6d74794d9b-hvb85" event={"ID":"d266783d-75ba-4864-af5d-4f2b8702c6a9","Type":"ContainerStarted","Data":"0b7d512eb353d77f2545e2ef2ecaba332f73053b5bf07b679ba6e25c1be122f6"} Oct 13 17:37:29 crc kubenswrapper[4720]: I1013 17:37:29.774806 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-687df44cdb-ldlvt" event={"ID":"c25871a6-cdf1-49c1-8d51-ab4fb186fa83","Type":"ContainerStarted","Data":"2165b4dfc77a4e04fac7e792a09be09751019848d365eb7de5fa1180333ad333"} Oct 13 17:37:29 crc kubenswrapper[4720]: I1013 17:37:29.787873 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-74cb5cbc49-bxtwp" event={"ID":"5a86eb76-3453-4f0e-8529-c877f739d822","Type":"ContainerStarted","Data":"53e8b5b9c48f284c4d853d15e1235d286aca76f05a6a5c3850bf37419318fe09"} Oct 13 17:37:29 crc kubenswrapper[4720]: I1013 17:37:29.803525 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6d7c7ddf95-4jfq5" event={"ID":"60fc7b7f-c85a-4a6d-8de9-e8e9e8df8ada","Type":"ContainerStarted","Data":"ee5c56c413bea3dfc2e71f9b0a9459bfca236dfb36a18d87b92731cf3648bb4e"} Oct 13 17:37:29 crc kubenswrapper[4720]: I1013 17:37:29.822899 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-ddb98f99b-qqmcw" event={"ID":"32b53ed3-af12-4a7d-b371-eab8aa1ab1bb","Type":"ContainerStarted","Data":"24c6e2e9197832ce6f88d9fe9036343580508bd4015c6b1457e179f87a4a8870"} Oct 13 17:37:29 crc kubenswrapper[4720]: I1013 17:37:29.825210 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-6d9967f8dd-qvv82" event={"ID":"771ce8c4-ac65-4db7-bc56-a8b7cb2f1448","Type":"ContainerStarted","Data":"b20c510fa84e8cc86e8ee5f16e6295e473bfda9c4481e3fdd145774cddd67bbe"} Oct 13 17:37:29 crc kubenswrapper[4720]: I1013 17:37:29.833519 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-5777b4f897-mpdxw" 
event={"ID":"30a28fe6-1905-48df-ab2d-b9d92eaf940e","Type":"ContainerStarted","Data":"3ad11ae961895ebc4219ac8c902c015282108a40c14a920275ef76c0986dd044"} Oct 13 17:37:29 crc kubenswrapper[4720]: I1013 17:37:29.833597 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-5777b4f897-mpdxw" Oct 13 17:37:29 crc kubenswrapper[4720]: I1013 17:37:29.837500 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757dd6zq7" event={"ID":"b7c96c4b-b0c5-4c82-a6ab-3878c394eab0","Type":"ContainerStarted","Data":"79d2ca55e9053dfa7918577e3cea25a3b4b03dddcdddf33c90d09cf4c27415ef"} Oct 13 17:37:29 crc kubenswrapper[4720]: E1013 17:37:29.839328 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:a17fc270857869fd1efe5020b2a1cb8c2abbd838f08de88f3a6a59e8754ec351\\\"\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757dd6zq7" podUID="b7c96c4b-b0c5-4c82-a6ab-3878c394eab0" Oct 13 17:37:29 crc kubenswrapper[4720]: I1013 17:37:29.851947 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-664664cb68-xwn8g" event={"ID":"2eab29c4-2ebe-4f71-af0d-df5f0d113f66","Type":"ContainerStarted","Data":"8459359ebe0fe453961d3c0fb404ffbd963672865dc6d0943b181ae84478d4e9"} Oct 13 17:37:29 crc kubenswrapper[4720]: I1013 17:37:29.866098 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kkvnm" event={"ID":"ddda9c4b-88a0-4d21-9edd-6ca1ac64ab1a","Type":"ContainerStarted","Data":"48a4406537690bc54dff92da548c54b9f928f2903f73570a355b43a15c21dadd"} Oct 13 17:37:29 crc kubenswrapper[4720]: I1013 17:37:29.894080 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-646675d848-c8bnc" event={"ID":"492905c0-fe64-45b8-af6b-5d7373c3f71a","Type":"ContainerStarted","Data":"a4234bf9ab8106b65ddd5824430012d76757b045dbc6a98b7e2ab64d88b3bcad"} Oct 13 17:37:29 crc kubenswrapper[4720]: I1013 17:37:29.895768 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-869cc7797f-gzwxn" event={"ID":"487124d6-9dcd-4173-8f78-2dbf29cafe87","Type":"ContainerStarted","Data":"391ec98172d96d56b25f7baa872fe918b287acfb94f5d8585f1ac593f89f89be"} Oct 13 17:37:29 crc kubenswrapper[4720]: I1013 17:37:29.904326 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-59cdc64769-nlnsl" event={"ID":"d81c88e6-1b2a-405d-861a-ca4b3baed83d","Type":"ContainerStarted","Data":"1d0d4441ac9c5f4d0842748f9779fe33d40d7b583afd56a7b8e894271ed7e5ff"} Oct 13 17:37:29 crc kubenswrapper[4720]: I1013 17:37:29.910915 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-59578bc799-mhlvn" event={"ID":"284cd6cf-5985-4cad-a31c-f91f3c2098c6","Type":"ContainerStarted","Data":"f5f4527fc9414c0b197255a2c1600fdf433ebc770bb1f0f151039880f85918fa"} Oct 13 17:37:29 crc kubenswrapper[4720]: I1013 17:37:29.927525 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-64f84fcdbb-bk68v" 
event={"ID":"d34e7c64-7562-4a1a-8d47-20b3bb785756","Type":"ContainerStarted","Data":"1d13dfbc88cab4c5e472d29fe0bcbb7602da974bdbb38c2d2921b30b806089a7"} Oct 13 17:37:29 crc kubenswrapper[4720]: I1013 17:37:29.967365 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-5777b4f897-mpdxw" podStartSLOduration=4.561162148 podStartE2EDuration="17.967348795s" podCreationTimestamp="2025-10-13 17:37:12 +0000 UTC" firstStartedPulling="2025-10-13 17:37:15.407151327 +0000 UTC m=+780.864401459" lastFinishedPulling="2025-10-13 17:37:28.813337964 +0000 UTC m=+794.270588106" observedRunningTime="2025-10-13 17:37:29.966642727 +0000 UTC m=+795.423892859" watchObservedRunningTime="2025-10-13 17:37:29.967348795 +0000 UTC m=+795.424598927" Oct 13 17:37:29 crc kubenswrapper[4720]: I1013 17:37:29.984811 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-7bb46cd7d-v7vx2" event={"ID":"a9b388de-4993-46d1-86db-ac92a9df4f2f","Type":"ContainerStarted","Data":"64f15a60b29f3bfceb453fbe8081e0c5bba175c309207e916cf86b5b6c17f0b3"} Oct 13 17:37:29 crc kubenswrapper[4720]: I1013 17:37:29.997517 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f4d5dfdc6-g4th9" event={"ID":"935d79c8-281f-4ad8-8c6d-404c0e89653e","Type":"ContainerStarted","Data":"be55db867cd3cf03b8f813d00927d64d05fe658c980aef6c2d21e251bf516878"} Oct 13 17:37:30 crc kubenswrapper[4720]: E1013 17:37:30.013067 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:4b4a17fe08ce00e375afaaec6a28835f5c1784f03d11c4558376ac04130f3a9e\\\"\"" pod="openstack-operators/swift-operator-controller-manager-5f4d5dfdc6-g4th9" podUID="935d79c8-281f-4ad8-8c6d-404c0e89653e" Oct 13 17:37:31 crc kubenswrapper[4720]: I1013 17:37:31.006686 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-59578bc799-mhlvn" event={"ID":"284cd6cf-5985-4cad-a31c-f91f3c2098c6","Type":"ContainerStarted","Data":"50c7bf26ad2cdc9e7c45a919d3f28c5fad4893d28227a796c90f292e12781056"} Oct 13 17:37:31 crc kubenswrapper[4720]: I1013 17:37:31.006812 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-59578bc799-mhlvn" Oct 13 17:37:31 crc kubenswrapper[4720]: I1013 17:37:31.008068 4720 generic.go:334] "Generic (PLEG): container finished" podID="ddda9c4b-88a0-4d21-9edd-6ca1ac64ab1a" containerID="5af2586a2f5f7d31cf2b02ff6f4be96bb97455dae4e71ff1897b2a08be29e2ac" exitCode=0 Oct 13 17:37:31 crc kubenswrapper[4720]: I1013 17:37:31.008115 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kkvnm" event={"ID":"ddda9c4b-88a0-4d21-9edd-6ca1ac64ab1a","Type":"ContainerDied","Data":"5af2586a2f5f7d31cf2b02ff6f4be96bb97455dae4e71ff1897b2a08be29e2ac"} Oct 13 17:37:31 crc kubenswrapper[4720]: I1013 17:37:31.011277 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-6d74794d9b-hvb85" event={"ID":"d266783d-75ba-4864-af5d-4f2b8702c6a9","Type":"ContainerStarted","Data":"6262accdd6aa450f9237af6846b47a8b5af8033ccecbae5dfb13dfce3823aee8"} Oct 13 17:37:31 crc kubenswrapper[4720]: I1013 17:37:31.011334 4720 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-6d74794d9b-hvb85" Oct 13 17:37:31 crc kubenswrapper[4720]: I1013 17:37:31.017869 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-869cc7797f-gzwxn" event={"ID":"487124d6-9dcd-4173-8f78-2dbf29cafe87","Type":"ContainerStarted","Data":"8f80ce45cd46224614e4555524352fb0dad44655f63f17a6e239fe75d8305383"} Oct 13 17:37:31 crc kubenswrapper[4720]: I1013 17:37:31.017988 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-869cc7797f-gzwxn" Oct 13 17:37:31 crc kubenswrapper[4720]: I1013 17:37:31.024674 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-59578bc799-mhlvn" podStartSLOduration=4.8732893 podStartE2EDuration="19.024660849s" podCreationTimestamp="2025-10-13 17:37:12 +0000 UTC" firstStartedPulling="2025-10-13 17:37:14.622785767 +0000 UTC m=+780.080035899" lastFinishedPulling="2025-10-13 17:37:28.774157306 +0000 UTC m=+794.231407448" observedRunningTime="2025-10-13 17:37:31.023429467 +0000 UTC m=+796.480679599" watchObservedRunningTime="2025-10-13 17:37:31.024660849 +0000 UTC m=+796.481910981" Oct 13 17:37:31 crc kubenswrapper[4720]: I1013 17:37:31.028303 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-646675d848-c8bnc" event={"ID":"492905c0-fe64-45b8-af6b-5d7373c3f71a","Type":"ContainerStarted","Data":"8dacb26ec7581d3a66c47ae84f39b87ea799b23e8656015d7d11226a0a39b71f"} Oct 13 17:37:31 crc kubenswrapper[4720]: I1013 17:37:31.028904 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-646675d848-c8bnc" Oct 13 17:37:31 crc kubenswrapper[4720]: I1013 17:37:31.030879 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-ddb98f99b-qqmcw" event={"ID":"32b53ed3-af12-4a7d-b371-eab8aa1ab1bb","Type":"ContainerStarted","Data":"ebaf5ced042c8d80c6a09389282285795a9a3fff825010ab50931c7808521bdd"} Oct 13 17:37:31 crc kubenswrapper[4720]: I1013 17:37:31.031294 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-ddb98f99b-qqmcw" Oct 13 17:37:31 crc kubenswrapper[4720]: I1013 17:37:31.032927 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-6d9967f8dd-qvv82" event={"ID":"771ce8c4-ac65-4db7-bc56-a8b7cb2f1448","Type":"ContainerStarted","Data":"b9ad3afa8f0ed05cf6123710924b4f46dcf9d0f880f219400da822557691c91e"} Oct 13 17:37:31 crc kubenswrapper[4720]: I1013 17:37:31.032998 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-6d9967f8dd-qvv82" Oct 13 17:37:31 crc kubenswrapper[4720]: I1013 17:37:31.038851 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-578874c84d-62vnr" event={"ID":"20db86ae-f595-4a1d-b000-c97df02b65af","Type":"ContainerStarted","Data":"1e16b03c6f09d89bd63c0195edf329215f9f206e0d0a2cd4538c4d8a31660dbb"} Oct 13 17:37:31 crc kubenswrapper[4720]: I1013 17:37:31.039645 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/telemetry-operator-controller-manager-578874c84d-62vnr" Oct 13 17:37:31 crc kubenswrapper[4720]: I1013 17:37:31.044961 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-6d74794d9b-hvb85" podStartSLOduration=4.726848012 podStartE2EDuration="19.044947831s" podCreationTimestamp="2025-10-13 17:37:12 +0000 UTC" firstStartedPulling="2025-10-13 17:37:14.458215413 +0000 UTC m=+779.915465535" lastFinishedPulling="2025-10-13 17:37:28.776315222 +0000 UTC m=+794.233565354" observedRunningTime="2025-10-13 17:37:31.040355543 +0000 UTC m=+796.497605675" watchObservedRunningTime="2025-10-13 17:37:31.044947831 +0000 UTC m=+796.502197973" Oct 13 17:37:31 crc kubenswrapper[4720]: I1013 17:37:31.048241 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-687df44cdb-ldlvt" event={"ID":"c25871a6-cdf1-49c1-8d51-ab4fb186fa83","Type":"ContainerStarted","Data":"d9d6d21d33fe3aea7e1c0ee32abbbdca080a5089157fbbfabd26820d502d199e"} Oct 13 17:37:31 crc kubenswrapper[4720]: I1013 17:37:31.048367 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-687df44cdb-ldlvt" Oct 13 17:37:31 crc kubenswrapper[4720]: I1013 17:37:31.055736 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-59cdc64769-nlnsl" event={"ID":"d81c88e6-1b2a-405d-861a-ca4b3baed83d","Type":"ContainerStarted","Data":"324322712a000d130fd6ce2cf25c0eeee4027a9cc9107dcb2612ea5d0f6bb839"} Oct 13 17:37:31 crc kubenswrapper[4720]: I1013 17:37:31.055866 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-59cdc64769-nlnsl" Oct 13 17:37:31 crc kubenswrapper[4720]: I1013 17:37:31.059169 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-869cc7797f-gzwxn" podStartSLOduration=4.665547393 podStartE2EDuration="18.059157616s" podCreationTimestamp="2025-10-13 17:37:13 +0000 UTC" firstStartedPulling="2025-10-13 17:37:15.383348895 +0000 UTC m=+780.840599027" lastFinishedPulling="2025-10-13 17:37:28.776959068 +0000 UTC m=+794.234209250" observedRunningTime="2025-10-13 17:37:31.058424698 +0000 UTC m=+796.515674840" watchObservedRunningTime="2025-10-13 17:37:31.059157616 +0000 UTC m=+796.516407748" Oct 13 17:37:31 crc kubenswrapper[4720]: I1013 17:37:31.061565 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-5777b4f897-mpdxw" event={"ID":"30a28fe6-1905-48df-ab2d-b9d92eaf940e","Type":"ContainerStarted","Data":"1ce3a238fa81024ddc7f21e104a6c261ca1eda3ae31885633a0f447cd2d2edef"} Oct 13 17:37:31 crc kubenswrapper[4720]: I1013 17:37:31.065745 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-64f84fcdbb-bk68v" event={"ID":"d34e7c64-7562-4a1a-8d47-20b3bb785756","Type":"ContainerStarted","Data":"008a99c7edf6daf107595ca1d99e0650aafa564381cf364a6e9c86565aefc2c7"} Oct 13 17:37:31 crc kubenswrapper[4720]: I1013 17:37:31.066092 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-64f84fcdbb-bk68v" Oct 13 17:37:31 crc kubenswrapper[4720]: I1013 17:37:31.069526 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/placement-operator-controller-manager-664664cb68-xwn8g" event={"ID":"2eab29c4-2ebe-4f71-af0d-df5f0d113f66","Type":"ContainerStarted","Data":"284a16b6dab9a2fa418e7ccc0dad20c67c4ac911f6693077c03b7a09f3214154"} Oct 13 17:37:31 crc kubenswrapper[4720]: I1013 17:37:31.069852 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-664664cb68-xwn8g" Oct 13 17:37:31 crc kubenswrapper[4720]: I1013 17:37:31.070950 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-7bb46cd7d-v7vx2" event={"ID":"a9b388de-4993-46d1-86db-ac92a9df4f2f","Type":"ContainerStarted","Data":"f2b5f9d3ec0da96ef7d23a567a01b069efc50318fd5f169598e4453a1ae75f7a"} Oct 13 17:37:31 crc kubenswrapper[4720]: I1013 17:37:31.071530 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-7bb46cd7d-v7vx2" Oct 13 17:37:31 crc kubenswrapper[4720]: I1013 17:37:31.072749 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-74cb5cbc49-bxtwp" event={"ID":"5a86eb76-3453-4f0e-8529-c877f739d822","Type":"ContainerStarted","Data":"4d7009400abe6e651ffef3d1be17804321b021a62e8fbf7dfd905cc61351a22f"} Oct 13 17:37:31 crc kubenswrapper[4720]: I1013 17:37:31.073106 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-74cb5cbc49-bxtwp" Oct 13 17:37:31 crc kubenswrapper[4720]: I1013 17:37:31.075607 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6d7c7ddf95-4jfq5" event={"ID":"60fc7b7f-c85a-4a6d-8de9-e8e9e8df8ada","Type":"ContainerStarted","Data":"1b412f7cf0f39f06a49ee3d2063ebbd07fb8c05b7fe505a8cec457b5701b346d"} Oct 13 17:37:31 crc kubenswrapper[4720]: I1013 17:37:31.076089 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-6d7c7ddf95-4jfq5" Oct 13 17:37:31 crc kubenswrapper[4720]: E1013 17:37:31.076557 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:a17fc270857869fd1efe5020b2a1cb8c2abbd838f08de88f3a6a59e8754ec351\\\"\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757dd6zq7" podUID="b7c96c4b-b0c5-4c82-a6ab-3878c394eab0" Oct 13 17:37:31 crc kubenswrapper[4720]: E1013 17:37:31.079935 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:4b4a17fe08ce00e375afaaec6a28835f5c1784f03d11c4558376ac04130f3a9e\\\"\"" pod="openstack-operators/swift-operator-controller-manager-5f4d5dfdc6-g4th9" podUID="935d79c8-281f-4ad8-8c6d-404c0e89653e" Oct 13 17:37:31 crc kubenswrapper[4720]: I1013 17:37:31.103823 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-59cdc64769-nlnsl" podStartSLOduration=4.384331463 podStartE2EDuration="19.103760744s" podCreationTimestamp="2025-10-13 17:37:12 +0000 UTC" firstStartedPulling="2025-10-13 17:37:14.05217214 +0000 UTC m=+779.509422272" lastFinishedPulling="2025-10-13 17:37:28.771601421 
+0000 UTC m=+794.228851553" observedRunningTime="2025-10-13 17:37:31.093834659 +0000 UTC m=+796.551084801" watchObservedRunningTime="2025-10-13 17:37:31.103760744 +0000 UTC m=+796.561010896" Oct 13 17:37:31 crc kubenswrapper[4720]: I1013 17:37:31.150055 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-664664cb68-xwn8g" podStartSLOduration=4.732233359 podStartE2EDuration="18.150039385s" podCreationTimestamp="2025-10-13 17:37:13 +0000 UTC" firstStartedPulling="2025-10-13 17:37:15.357987412 +0000 UTC m=+780.815237544" lastFinishedPulling="2025-10-13 17:37:28.775793438 +0000 UTC m=+794.233043570" observedRunningTime="2025-10-13 17:37:31.149113051 +0000 UTC m=+796.606363183" watchObservedRunningTime="2025-10-13 17:37:31.150039385 +0000 UTC m=+796.607289517" Oct 13 17:37:31 crc kubenswrapper[4720]: I1013 17:37:31.168333 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-646675d848-c8bnc" podStartSLOduration=4.742995495 podStartE2EDuration="18.168323055s" podCreationTimestamp="2025-10-13 17:37:13 +0000 UTC" firstStartedPulling="2025-10-13 17:37:15.383439227 +0000 UTC m=+780.840689359" lastFinishedPulling="2025-10-13 17:37:28.808766787 +0000 UTC m=+794.266016919" observedRunningTime="2025-10-13 17:37:31.164750823 +0000 UTC m=+796.622000955" watchObservedRunningTime="2025-10-13 17:37:31.168323055 +0000 UTC m=+796.625573187" Oct 13 17:37:31 crc kubenswrapper[4720]: I1013 17:37:31.209391 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-64f84fcdbb-bk68v" podStartSLOduration=4.3036305089999995 podStartE2EDuration="19.209373311s" podCreationTimestamp="2025-10-13 17:37:12 +0000 UTC" firstStartedPulling="2025-10-13 17:37:13.867601583 +0000 UTC m=+779.324851715" lastFinishedPulling="2025-10-13 17:37:28.773344385 +0000 UTC m=+794.230594517" observedRunningTime="2025-10-13 17:37:31.190712371 +0000 UTC m=+796.647962503" watchObservedRunningTime="2025-10-13 17:37:31.209373311 +0000 UTC m=+796.666623443" Oct 13 17:37:31 crc kubenswrapper[4720]: I1013 17:37:31.210384 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-ddb98f99b-qqmcw" podStartSLOduration=5.067971939 podStartE2EDuration="19.210374867s" podCreationTimestamp="2025-10-13 17:37:12 +0000 UTC" firstStartedPulling="2025-10-13 17:37:14.634275283 +0000 UTC m=+780.091525415" lastFinishedPulling="2025-10-13 17:37:28.776678211 +0000 UTC m=+794.233928343" observedRunningTime="2025-10-13 17:37:31.205158833 +0000 UTC m=+796.662408985" watchObservedRunningTime="2025-10-13 17:37:31.210374867 +0000 UTC m=+796.667624999" Oct 13 17:37:31 crc kubenswrapper[4720]: I1013 17:37:31.230929 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-7bb46cd7d-v7vx2" podStartSLOduration=4.91524787 podStartE2EDuration="19.230912966s" podCreationTimestamp="2025-10-13 17:37:12 +0000 UTC" firstStartedPulling="2025-10-13 17:37:14.459684991 +0000 UTC m=+779.916935123" lastFinishedPulling="2025-10-13 17:37:28.775350087 +0000 UTC m=+794.232600219" observedRunningTime="2025-10-13 17:37:31.228643437 +0000 UTC m=+796.685893569" watchObservedRunningTime="2025-10-13 17:37:31.230912966 +0000 UTC m=+796.688163098" Oct 13 17:37:31 crc kubenswrapper[4720]: I1013 17:37:31.245864 4720 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-6d9967f8dd-qvv82" podStartSLOduration=4.932175145 podStartE2EDuration="19.24584717s" podCreationTimestamp="2025-10-13 17:37:12 +0000 UTC" firstStartedPulling="2025-10-13 17:37:14.463016146 +0000 UTC m=+779.920266278" lastFinishedPulling="2025-10-13 17:37:28.776688171 +0000 UTC m=+794.233938303" observedRunningTime="2025-10-13 17:37:31.243960511 +0000 UTC m=+796.701210643" watchObservedRunningTime="2025-10-13 17:37:31.24584717 +0000 UTC m=+796.703097302" Oct 13 17:37:31 crc kubenswrapper[4720]: I1013 17:37:31.263394 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-578874c84d-62vnr" podStartSLOduration=4.857873212 podStartE2EDuration="18.263377601s" podCreationTimestamp="2025-10-13 17:37:13 +0000 UTC" firstStartedPulling="2025-10-13 17:37:15.371380487 +0000 UTC m=+780.828630609" lastFinishedPulling="2025-10-13 17:37:28.776884856 +0000 UTC m=+794.234134998" observedRunningTime="2025-10-13 17:37:31.2563659 +0000 UTC m=+796.713616032" watchObservedRunningTime="2025-10-13 17:37:31.263377601 +0000 UTC m=+796.720627733" Oct 13 17:37:31 crc kubenswrapper[4720]: I1013 17:37:31.277808 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-6d7c7ddf95-4jfq5" podStartSLOduration=4.831271287 podStartE2EDuration="18.277170416s" podCreationTimestamp="2025-10-13 17:37:13 +0000 UTC" firstStartedPulling="2025-10-13 17:37:15.331046149 +0000 UTC m=+780.788296271" lastFinishedPulling="2025-10-13 17:37:28.776945258 +0000 UTC m=+794.234195400" observedRunningTime="2025-10-13 17:37:31.26958301 +0000 UTC m=+796.726833142" watchObservedRunningTime="2025-10-13 17:37:31.277170416 +0000 UTC m=+796.734420548" Oct 13 17:37:31 crc kubenswrapper[4720]: I1013 17:37:31.285973 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-687df44cdb-ldlvt" podStartSLOduration=4.960026202 podStartE2EDuration="19.285950982s" podCreationTimestamp="2025-10-13 17:37:12 +0000 UTC" firstStartedPulling="2025-10-13 17:37:14.451018078 +0000 UTC m=+779.908268200" lastFinishedPulling="2025-10-13 17:37:28.776942848 +0000 UTC m=+794.234192980" observedRunningTime="2025-10-13 17:37:31.28123111 +0000 UTC m=+796.738481252" watchObservedRunningTime="2025-10-13 17:37:31.285950982 +0000 UTC m=+796.743201114" Oct 13 17:37:31 crc kubenswrapper[4720]: I1013 17:37:31.299117 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-74cb5cbc49-bxtwp" podStartSLOduration=5.172349824 podStartE2EDuration="19.29909954s" podCreationTimestamp="2025-10-13 17:37:12 +0000 UTC" firstStartedPulling="2025-10-13 17:37:14.648140119 +0000 UTC m=+780.105390251" lastFinishedPulling="2025-10-13 17:37:28.774889835 +0000 UTC m=+794.232139967" observedRunningTime="2025-10-13 17:37:31.29522052 +0000 UTC m=+796.752470652" watchObservedRunningTime="2025-10-13 17:37:31.29909954 +0000 UTC m=+796.756349672" Oct 13 17:37:32 crc kubenswrapper[4720]: I1013 17:37:32.097946 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-585fc5b659-2f6bd" event={"ID":"45c8f080-0f28-47b5-80df-e1877c3f77bb","Type":"ContainerStarted","Data":"c8037acb7de0914deee8d95471872952b2fa6887be9905ed729c38b0bdd6a4ac"} 
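
The pod_startup_latency_tracker records above report two durations per pod: podStartE2EDuration, the wall time from podCreationTimestamp to the observed running time, and podStartSLOduration, which additionally subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling). The monotonic "m=+" offsets embedded in each record make this easy to verify. A minimal Python sketch, illustrative rather than kubelet code, using the values from the cinder-operator-controller-manager-59cdc64769-nlnsl record above:

# Minimal sketch (not kubelet code): check that podStartSLOduration equals
# podStartE2EDuration minus the image-pull window, using the monotonic
# "m=+" offsets copied from the cinder-operator record above.

first_started_pulling = 779.509422272  # firstStartedPulling, m=+ offset (s)
last_finished_pulling = 794.228851553  # lastFinishedPulling, m=+ offset (s)
e2e_duration = 19.103760744            # podStartE2EDuration (s)

pull_window = last_finished_pulling - first_started_pulling
slo_duration = e2e_duration - pull_window

print(f"pull window : {pull_window:.9f}s")   # 14.719429281s
print(f"SLO duration: {slo_duration:.9f}s")  # 4.384331463s, as logged

The same arithmetic reproduces every SLO figure in this section, e.g. mariadb-operator's 4.561162148s = 17.967348795s - (794.270588106 - 780.864401459).
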
Oct 13 17:37:32 crc kubenswrapper[4720]: I1013 17:37:32.138170 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-585fc5b659-2f6bd" podStartSLOduration=3.914710016 podStartE2EDuration="20.138149518s" podCreationTimestamp="2025-10-13 17:37:12 +0000 UTC" firstStartedPulling="2025-10-13 17:37:15.49703692 +0000 UTC m=+780.954287052" lastFinishedPulling="2025-10-13 17:37:31.720476412 +0000 UTC m=+797.177726554" observedRunningTime="2025-10-13 17:37:32.132651617 +0000 UTC m=+797.589901759" watchObservedRunningTime="2025-10-13 17:37:32.138149518 +0000 UTC m=+797.595399660" Oct 13 17:37:33 crc kubenswrapper[4720]: I1013 17:37:33.113379 4720 generic.go:334] "Generic (PLEG): container finished" podID="ddda9c4b-88a0-4d21-9edd-6ca1ac64ab1a" containerID="f0923667066ee90ae971b6eeaab94a00f6975275c446bb8bf633f3e83244ff0a" exitCode=0 Oct 13 17:37:33 crc kubenswrapper[4720]: I1013 17:37:33.113489 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kkvnm" event={"ID":"ddda9c4b-88a0-4d21-9edd-6ca1ac64ab1a","Type":"ContainerDied","Data":"f0923667066ee90ae971b6eeaab94a00f6975275c446bb8bf633f3e83244ff0a"} Oct 13 17:37:33 crc kubenswrapper[4720]: I1013 17:37:33.827555 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-585fc5b659-2f6bd" Oct 13 17:37:34 crc kubenswrapper[4720]: I1013 17:37:34.126304 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kkvnm" event={"ID":"ddda9c4b-88a0-4d21-9edd-6ca1ac64ab1a","Type":"ContainerStarted","Data":"e9302daaac73b7a79e6bc68d98203319786aaa7e42ec664c97aca2a544f651fe"} Oct 13 17:37:34 crc kubenswrapper[4720]: I1013 17:37:34.147652 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-kkvnm" podStartSLOduration=12.103137757 podStartE2EDuration="14.147630089s" podCreationTimestamp="2025-10-13 17:37:20 +0000 UTC" firstStartedPulling="2025-10-13 17:37:31.664350488 +0000 UTC m=+797.121600660" lastFinishedPulling="2025-10-13 17:37:33.70884282 +0000 UTC m=+799.166092992" observedRunningTime="2025-10-13 17:37:34.141732258 +0000 UTC m=+799.598982390" watchObservedRunningTime="2025-10-13 17:37:34.147630089 +0000 UTC m=+799.604880221" Oct 13 17:37:41 crc kubenswrapper[4720]: I1013 17:37:41.270285 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-kkvnm" Oct 13 17:37:41 crc kubenswrapper[4720]: I1013 17:37:41.270921 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-kkvnm" Oct 13 17:37:41 crc kubenswrapper[4720]: I1013 17:37:41.331475 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-kkvnm" Oct 13 17:37:42 crc kubenswrapper[4720]: I1013 17:37:42.206811 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-57bb74c7bf-dkbrd" event={"ID":"246b649b-7481-433e-aaf0-30cebf5543d8","Type":"ContainerStarted","Data":"f3120ff69ef1b9d608b1975e5a7cf473178e8b52cb0fcb56bc7e0de52db3462c"} Oct 13 17:37:42 crc kubenswrapper[4720]: I1013 17:37:42.207324 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-57bb74c7bf-dkbrd" Oct 13 17:37:42 crc kubenswrapper[4720]: 
I1013 17:37:42.210346 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-797d478b46-m44cs" event={"ID":"ccb8b109-9d10-48f5-b8ce-65ad05b5e1a4","Type":"ContainerStarted","Data":"a18b7a8e5c9f0ace9ad971b50ff07978a3fdaf739f219b7b3333d903842f960c"} Oct 13 17:37:42 crc kubenswrapper[4720]: I1013 17:37:42.212658 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-797d478b46-m44cs" Oct 13 17:37:42 crc kubenswrapper[4720]: I1013 17:37:42.219442 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-ffcdd6c94-xm8vk" event={"ID":"5c9d42bc-4b65-42f2-beda-164c7c5ba3e2","Type":"ContainerStarted","Data":"88c618d8dc80248259ed8c85e61e25c41df3e98238915ad1c66267644fc1a72f"} Oct 13 17:37:42 crc kubenswrapper[4720]: I1013 17:37:42.248341 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-57bb74c7bf-dkbrd" podStartSLOduration=8.632983581 podStartE2EDuration="29.248311231s" podCreationTimestamp="2025-10-13 17:37:13 +0000 UTC" firstStartedPulling="2025-10-13 17:37:15.410717639 +0000 UTC m=+780.867967771" lastFinishedPulling="2025-10-13 17:37:36.026045279 +0000 UTC m=+801.483295421" observedRunningTime="2025-10-13 17:37:42.238855458 +0000 UTC m=+807.696105630" watchObservedRunningTime="2025-10-13 17:37:42.248311231 +0000 UTC m=+807.705561393" Oct 13 17:37:42 crc kubenswrapper[4720]: I1013 17:37:42.276960 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-797d478b46-m44cs" podStartSLOduration=9.69552756 podStartE2EDuration="30.276931698s" podCreationTimestamp="2025-10-13 17:37:12 +0000 UTC" firstStartedPulling="2025-10-13 17:37:15.420160112 +0000 UTC m=+780.877410234" lastFinishedPulling="2025-10-13 17:37:36.00156422 +0000 UTC m=+801.458814372" observedRunningTime="2025-10-13 17:37:42.264941459 +0000 UTC m=+807.722191641" watchObservedRunningTime="2025-10-13 17:37:42.276931698 +0000 UTC m=+807.734181860" Oct 13 17:37:42 crc kubenswrapper[4720]: I1013 17:37:42.300850 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-ffcdd6c94-xm8vk" podStartSLOduration=8.772564752 podStartE2EDuration="29.300811692s" podCreationTimestamp="2025-10-13 17:37:13 +0000 UTC" firstStartedPulling="2025-10-13 17:37:15.497248345 +0000 UTC m=+780.954498477" lastFinishedPulling="2025-10-13 17:37:36.025495285 +0000 UTC m=+801.482745417" observedRunningTime="2025-10-13 17:37:42.287986842 +0000 UTC m=+807.745237004" watchObservedRunningTime="2025-10-13 17:37:42.300811692 +0000 UTC m=+807.758061864" Oct 13 17:37:42 crc kubenswrapper[4720]: I1013 17:37:42.301845 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-kkvnm" Oct 13 17:37:42 crc kubenswrapper[4720]: I1013 17:37:42.374797 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kkvnm"] Oct 13 17:37:43 crc kubenswrapper[4720]: I1013 17:37:43.117761 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-59cdc64769-nlnsl" Oct 13 17:37:43 crc kubenswrapper[4720]: I1013 17:37:43.134843 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/barbican-operator-controller-manager-64f84fcdbb-bk68v" Oct 13 17:37:43 crc kubenswrapper[4720]: I1013 17:37:43.189390 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-687df44cdb-ldlvt" Oct 13 17:37:43 crc kubenswrapper[4720]: I1013 17:37:43.189475 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-7bb46cd7d-v7vx2" Oct 13 17:37:43 crc kubenswrapper[4720]: I1013 17:37:43.199723 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-6d9967f8dd-qvv82" Oct 13 17:37:43 crc kubenswrapper[4720]: I1013 17:37:43.261484 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-6d74794d9b-hvb85" Oct 13 17:37:43 crc kubenswrapper[4720]: I1013 17:37:43.276888 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-h959b" event={"ID":"f7d32fd1-190f-46ec-a313-b3c0b2c58556","Type":"ContainerStarted","Data":"beb2185a86418e74431c0489fb25cabafce03a1410a45409c03ded2b9b8a76da"} Oct 13 17:37:43 crc kubenswrapper[4720]: I1013 17:37:43.289786 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-74cb5cbc49-bxtwp" Oct 13 17:37:43 crc kubenswrapper[4720]: I1013 17:37:43.303344 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-ddb98f99b-qqmcw" Oct 13 17:37:43 crc kubenswrapper[4720]: I1013 17:37:43.355812 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-h959b" podStartSLOduration=3.024941012 podStartE2EDuration="30.355792426s" podCreationTimestamp="2025-10-13 17:37:13 +0000 UTC" firstStartedPulling="2025-10-13 17:37:15.380932642 +0000 UTC m=+780.838182774" lastFinishedPulling="2025-10-13 17:37:42.711784016 +0000 UTC m=+808.169034188" observedRunningTime="2025-10-13 17:37:43.351758852 +0000 UTC m=+808.809008984" watchObservedRunningTime="2025-10-13 17:37:43.355792426 +0000 UTC m=+808.813042568" Oct 13 17:37:43 crc kubenswrapper[4720]: I1013 17:37:43.408726 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-59578bc799-mhlvn" Oct 13 17:37:43 crc kubenswrapper[4720]: I1013 17:37:43.525541 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-5777b4f897-mpdxw" Oct 13 17:37:43 crc kubenswrapper[4720]: I1013 17:37:43.567234 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-6d7c7ddf95-4jfq5" Oct 13 17:37:43 crc kubenswrapper[4720]: I1013 17:37:43.591737 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-869cc7797f-gzwxn" Oct 13 17:37:43 crc kubenswrapper[4720]: I1013 17:37:43.615681 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-664664cb68-xwn8g" Oct 13 17:37:43 crc kubenswrapper[4720]: I1013 17:37:43.694573 4720 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-578874c84d-62vnr" Oct 13 17:37:43 crc kubenswrapper[4720]: I1013 17:37:43.734029 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-ffcdd6c94-xm8vk" Oct 13 17:37:43 crc kubenswrapper[4720]: I1013 17:37:43.832893 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-585fc5b659-2f6bd" Oct 13 17:37:43 crc kubenswrapper[4720]: I1013 17:37:43.882666 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-646675d848-c8bnc" Oct 13 17:37:43 crc kubenswrapper[4720]: I1013 17:37:43.981057 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jmr8f"] Oct 13 17:37:43 crc kubenswrapper[4720]: I1013 17:37:43.982669 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jmr8f" Oct 13 17:37:44 crc kubenswrapper[4720]: I1013 17:37:44.004276 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jmr8f"] Oct 13 17:37:44 crc kubenswrapper[4720]: I1013 17:37:44.113091 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8869e974-e5a9-4fc3-acb7-b1df6e207103-catalog-content\") pod \"redhat-operators-jmr8f\" (UID: \"8869e974-e5a9-4fc3-acb7-b1df6e207103\") " pod="openshift-marketplace/redhat-operators-jmr8f" Oct 13 17:37:44 crc kubenswrapper[4720]: I1013 17:37:44.113181 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8869e974-e5a9-4fc3-acb7-b1df6e207103-utilities\") pod \"redhat-operators-jmr8f\" (UID: \"8869e974-e5a9-4fc3-acb7-b1df6e207103\") " pod="openshift-marketplace/redhat-operators-jmr8f" Oct 13 17:37:44 crc kubenswrapper[4720]: I1013 17:37:44.113365 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4chnt\" (UniqueName: \"kubernetes.io/projected/8869e974-e5a9-4fc3-acb7-b1df6e207103-kube-api-access-4chnt\") pod \"redhat-operators-jmr8f\" (UID: \"8869e974-e5a9-4fc3-acb7-b1df6e207103\") " pod="openshift-marketplace/redhat-operators-jmr8f" Oct 13 17:37:44 crc kubenswrapper[4720]: I1013 17:37:44.214308 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4chnt\" (UniqueName: \"kubernetes.io/projected/8869e974-e5a9-4fc3-acb7-b1df6e207103-kube-api-access-4chnt\") pod \"redhat-operators-jmr8f\" (UID: \"8869e974-e5a9-4fc3-acb7-b1df6e207103\") " pod="openshift-marketplace/redhat-operators-jmr8f" Oct 13 17:37:44 crc kubenswrapper[4720]: I1013 17:37:44.214412 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8869e974-e5a9-4fc3-acb7-b1df6e207103-catalog-content\") pod \"redhat-operators-jmr8f\" (UID: \"8869e974-e5a9-4fc3-acb7-b1df6e207103\") " pod="openshift-marketplace/redhat-operators-jmr8f" Oct 13 17:37:44 crc kubenswrapper[4720]: I1013 17:37:44.214434 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8869e974-e5a9-4fc3-acb7-b1df6e207103-utilities\") pod 
\"redhat-operators-jmr8f\" (UID: \"8869e974-e5a9-4fc3-acb7-b1df6e207103\") " pod="openshift-marketplace/redhat-operators-jmr8f" Oct 13 17:37:44 crc kubenswrapper[4720]: I1013 17:37:44.214930 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8869e974-e5a9-4fc3-acb7-b1df6e207103-utilities\") pod \"redhat-operators-jmr8f\" (UID: \"8869e974-e5a9-4fc3-acb7-b1df6e207103\") " pod="openshift-marketplace/redhat-operators-jmr8f" Oct 13 17:37:44 crc kubenswrapper[4720]: I1013 17:37:44.215504 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8869e974-e5a9-4fc3-acb7-b1df6e207103-catalog-content\") pod \"redhat-operators-jmr8f\" (UID: \"8869e974-e5a9-4fc3-acb7-b1df6e207103\") " pod="openshift-marketplace/redhat-operators-jmr8f" Oct 13 17:37:44 crc kubenswrapper[4720]: I1013 17:37:44.243605 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4chnt\" (UniqueName: \"kubernetes.io/projected/8869e974-e5a9-4fc3-acb7-b1df6e207103-kube-api-access-4chnt\") pod \"redhat-operators-jmr8f\" (UID: \"8869e974-e5a9-4fc3-acb7-b1df6e207103\") " pod="openshift-marketplace/redhat-operators-jmr8f" Oct 13 17:37:44 crc kubenswrapper[4720]: I1013 17:37:44.282251 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-kkvnm" podUID="ddda9c4b-88a0-4d21-9edd-6ca1ac64ab1a" containerName="registry-server" containerID="cri-o://e9302daaac73b7a79e6bc68d98203319786aaa7e42ec664c97aca2a544f651fe" gracePeriod=2 Oct 13 17:37:44 crc kubenswrapper[4720]: I1013 17:37:44.300312 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jmr8f" Oct 13 17:37:44 crc kubenswrapper[4720]: I1013 17:37:44.744016 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kkvnm" Oct 13 17:37:44 crc kubenswrapper[4720]: I1013 17:37:44.758687 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jmr8f"] Oct 13 17:37:44 crc kubenswrapper[4720]: W1013 17:37:44.763887 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8869e974_e5a9_4fc3_acb7_b1df6e207103.slice/crio-e7b276e1d6086d2598a6ffbb25682de5fe9f3a8ca5db04317870cfcfe37332ce WatchSource:0}: Error finding container e7b276e1d6086d2598a6ffbb25682de5fe9f3a8ca5db04317870cfcfe37332ce: Status 404 returned error can't find the container with id e7b276e1d6086d2598a6ffbb25682de5fe9f3a8ca5db04317870cfcfe37332ce Oct 13 17:37:44 crc kubenswrapper[4720]: I1013 17:37:44.822273 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77892\" (UniqueName: \"kubernetes.io/projected/ddda9c4b-88a0-4d21-9edd-6ca1ac64ab1a-kube-api-access-77892\") pod \"ddda9c4b-88a0-4d21-9edd-6ca1ac64ab1a\" (UID: \"ddda9c4b-88a0-4d21-9edd-6ca1ac64ab1a\") " Oct 13 17:37:44 crc kubenswrapper[4720]: I1013 17:37:44.822386 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddda9c4b-88a0-4d21-9edd-6ca1ac64ab1a-utilities\") pod \"ddda9c4b-88a0-4d21-9edd-6ca1ac64ab1a\" (UID: \"ddda9c4b-88a0-4d21-9edd-6ca1ac64ab1a\") " Oct 13 17:37:44 crc kubenswrapper[4720]: I1013 17:37:44.822410 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddda9c4b-88a0-4d21-9edd-6ca1ac64ab1a-catalog-content\") pod \"ddda9c4b-88a0-4d21-9edd-6ca1ac64ab1a\" (UID: \"ddda9c4b-88a0-4d21-9edd-6ca1ac64ab1a\") " Oct 13 17:37:44 crc kubenswrapper[4720]: I1013 17:37:44.823503 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ddda9c4b-88a0-4d21-9edd-6ca1ac64ab1a-utilities" (OuterVolumeSpecName: "utilities") pod "ddda9c4b-88a0-4d21-9edd-6ca1ac64ab1a" (UID: "ddda9c4b-88a0-4d21-9edd-6ca1ac64ab1a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 17:37:44 crc kubenswrapper[4720]: I1013 17:37:44.827785 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddda9c4b-88a0-4d21-9edd-6ca1ac64ab1a-kube-api-access-77892" (OuterVolumeSpecName: "kube-api-access-77892") pod "ddda9c4b-88a0-4d21-9edd-6ca1ac64ab1a" (UID: "ddda9c4b-88a0-4d21-9edd-6ca1ac64ab1a"). InnerVolumeSpecName "kube-api-access-77892". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:37:44 crc kubenswrapper[4720]: I1013 17:37:44.836793 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ddda9c4b-88a0-4d21-9edd-6ca1ac64ab1a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ddda9c4b-88a0-4d21-9edd-6ca1ac64ab1a" (UID: "ddda9c4b-88a0-4d21-9edd-6ca1ac64ab1a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 17:37:44 crc kubenswrapper[4720]: I1013 17:37:44.923896 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddda9c4b-88a0-4d21-9edd-6ca1ac64ab1a-utilities\") on node \"crc\" DevicePath \"\"" Oct 13 17:37:44 crc kubenswrapper[4720]: I1013 17:37:44.923974 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddda9c4b-88a0-4d21-9edd-6ca1ac64ab1a-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 13 17:37:44 crc kubenswrapper[4720]: I1013 17:37:44.924001 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-77892\" (UniqueName: \"kubernetes.io/projected/ddda9c4b-88a0-4d21-9edd-6ca1ac64ab1a-kube-api-access-77892\") on node \"crc\" DevicePath \"\"" Oct 13 17:37:45 crc kubenswrapper[4720]: I1013 17:37:45.290429 4720 generic.go:334] "Generic (PLEG): container finished" podID="ddda9c4b-88a0-4d21-9edd-6ca1ac64ab1a" containerID="e9302daaac73b7a79e6bc68d98203319786aaa7e42ec664c97aca2a544f651fe" exitCode=0 Oct 13 17:37:45 crc kubenswrapper[4720]: I1013 17:37:45.290522 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kkvnm" event={"ID":"ddda9c4b-88a0-4d21-9edd-6ca1ac64ab1a","Type":"ContainerDied","Data":"e9302daaac73b7a79e6bc68d98203319786aaa7e42ec664c97aca2a544f651fe"} Oct 13 17:37:45 crc kubenswrapper[4720]: I1013 17:37:45.290539 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kkvnm" Oct 13 17:37:45 crc kubenswrapper[4720]: I1013 17:37:45.290775 4720 scope.go:117] "RemoveContainer" containerID="e9302daaac73b7a79e6bc68d98203319786aaa7e42ec664c97aca2a544f651fe" Oct 13 17:37:45 crc kubenswrapper[4720]: I1013 17:37:45.290757 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kkvnm" event={"ID":"ddda9c4b-88a0-4d21-9edd-6ca1ac64ab1a","Type":"ContainerDied","Data":"48a4406537690bc54dff92da548c54b9f928f2903f73570a355b43a15c21dadd"} Oct 13 17:37:45 crc kubenswrapper[4720]: I1013 17:37:45.294971 4720 generic.go:334] "Generic (PLEG): container finished" podID="8869e974-e5a9-4fc3-acb7-b1df6e207103" containerID="cf3f0073fb7f23aad3a1373c221f84533ec4d88eef82a0379210cc558915355e" exitCode=0 Oct 13 17:37:45 crc kubenswrapper[4720]: I1013 17:37:45.295018 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jmr8f" event={"ID":"8869e974-e5a9-4fc3-acb7-b1df6e207103","Type":"ContainerDied","Data":"cf3f0073fb7f23aad3a1373c221f84533ec4d88eef82a0379210cc558915355e"} Oct 13 17:37:45 crc kubenswrapper[4720]: I1013 17:37:45.295046 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jmr8f" event={"ID":"8869e974-e5a9-4fc3-acb7-b1df6e207103","Type":"ContainerStarted","Data":"e7b276e1d6086d2598a6ffbb25682de5fe9f3a8ca5db04317870cfcfe37332ce"} Oct 13 17:37:45 crc kubenswrapper[4720]: I1013 17:37:45.307870 4720 scope.go:117] "RemoveContainer" containerID="f0923667066ee90ae971b6eeaab94a00f6975275c446bb8bf633f3e83244ff0a" Oct 13 17:37:45 crc kubenswrapper[4720]: I1013 17:37:45.325165 4720 scope.go:117] "RemoveContainer" containerID="5af2586a2f5f7d31cf2b02ff6f4be96bb97455dae4e71ff1897b2a08be29e2ac" Oct 13 17:37:45 crc kubenswrapper[4720]: I1013 17:37:45.331399 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-kkvnm"] Oct 13 17:37:45 crc kubenswrapper[4720]: I1013 17:37:45.335131 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-kkvnm"] Oct 13 17:37:45 crc kubenswrapper[4720]: I1013 17:37:45.354343 4720 scope.go:117] "RemoveContainer" containerID="e9302daaac73b7a79e6bc68d98203319786aaa7e42ec664c97aca2a544f651fe" Oct 13 17:37:45 crc kubenswrapper[4720]: E1013 17:37:45.354710 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9302daaac73b7a79e6bc68d98203319786aaa7e42ec664c97aca2a544f651fe\": container with ID starting with e9302daaac73b7a79e6bc68d98203319786aaa7e42ec664c97aca2a544f651fe not found: ID does not exist" containerID="e9302daaac73b7a79e6bc68d98203319786aaa7e42ec664c97aca2a544f651fe" Oct 13 17:37:45 crc kubenswrapper[4720]: I1013 17:37:45.354743 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9302daaac73b7a79e6bc68d98203319786aaa7e42ec664c97aca2a544f651fe"} err="failed to get container status \"e9302daaac73b7a79e6bc68d98203319786aaa7e42ec664c97aca2a544f651fe\": rpc error: code = NotFound desc = could not find container \"e9302daaac73b7a79e6bc68d98203319786aaa7e42ec664c97aca2a544f651fe\": container with ID starting with e9302daaac73b7a79e6bc68d98203319786aaa7e42ec664c97aca2a544f651fe not found: ID does not exist" Oct 13 17:37:45 crc kubenswrapper[4720]: I1013 17:37:45.354765 4720 scope.go:117] "RemoveContainer" containerID="f0923667066ee90ae971b6eeaab94a00f6975275c446bb8bf633f3e83244ff0a" Oct 13 17:37:45 crc kubenswrapper[4720]: E1013 17:37:45.354995 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0923667066ee90ae971b6eeaab94a00f6975275c446bb8bf633f3e83244ff0a\": container with ID starting with f0923667066ee90ae971b6eeaab94a00f6975275c446bb8bf633f3e83244ff0a not found: ID does not exist" containerID="f0923667066ee90ae971b6eeaab94a00f6975275c446bb8bf633f3e83244ff0a" Oct 13 17:37:45 crc kubenswrapper[4720]: I1013 17:37:45.355018 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0923667066ee90ae971b6eeaab94a00f6975275c446bb8bf633f3e83244ff0a"} err="failed to get container status \"f0923667066ee90ae971b6eeaab94a00f6975275c446bb8bf633f3e83244ff0a\": rpc error: code = NotFound desc = could not find container \"f0923667066ee90ae971b6eeaab94a00f6975275c446bb8bf633f3e83244ff0a\": container with ID starting with f0923667066ee90ae971b6eeaab94a00f6975275c446bb8bf633f3e83244ff0a not found: ID does not exist" Oct 13 17:37:45 crc kubenswrapper[4720]: I1013 17:37:45.355030 4720 scope.go:117] "RemoveContainer" containerID="5af2586a2f5f7d31cf2b02ff6f4be96bb97455dae4e71ff1897b2a08be29e2ac" Oct 13 17:37:45 crc kubenswrapper[4720]: E1013 17:37:45.355244 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5af2586a2f5f7d31cf2b02ff6f4be96bb97455dae4e71ff1897b2a08be29e2ac\": container with ID starting with 5af2586a2f5f7d31cf2b02ff6f4be96bb97455dae4e71ff1897b2a08be29e2ac not found: ID does not exist" containerID="5af2586a2f5f7d31cf2b02ff6f4be96bb97455dae4e71ff1897b2a08be29e2ac" Oct 13 17:37:45 crc kubenswrapper[4720]: I1013 17:37:45.355264 4720 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"5af2586a2f5f7d31cf2b02ff6f4be96bb97455dae4e71ff1897b2a08be29e2ac"} err="failed to get container status \"5af2586a2f5f7d31cf2b02ff6f4be96bb97455dae4e71ff1897b2a08be29e2ac\": rpc error: code = NotFound desc = could not find container \"5af2586a2f5f7d31cf2b02ff6f4be96bb97455dae4e71ff1897b2a08be29e2ac\": container with ID starting with 5af2586a2f5f7d31cf2b02ff6f4be96bb97455dae4e71ff1897b2a08be29e2ac not found: ID does not exist" Oct 13 17:37:47 crc kubenswrapper[4720]: I1013 17:37:47.202950 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ddda9c4b-88a0-4d21-9edd-6ca1ac64ab1a" path="/var/lib/kubelet/pods/ddda9c4b-88a0-4d21-9edd-6ca1ac64ab1a/volumes" Oct 13 17:37:47 crc kubenswrapper[4720]: I1013 17:37:47.327402 4720 generic.go:334] "Generic (PLEG): container finished" podID="8869e974-e5a9-4fc3-acb7-b1df6e207103" containerID="8eb1260dfac155e1cdc3eb55a4469d40f02fb80d1fb7f609b210ca348f51c7eb" exitCode=0 Oct 13 17:37:47 crc kubenswrapper[4720]: I1013 17:37:47.327468 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jmr8f" event={"ID":"8869e974-e5a9-4fc3-acb7-b1df6e207103","Type":"ContainerDied","Data":"8eb1260dfac155e1cdc3eb55a4469d40f02fb80d1fb7f609b210ca348f51c7eb"} Oct 13 17:37:47 crc kubenswrapper[4720]: I1013 17:37:47.337449 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f4d5dfdc6-g4th9" event={"ID":"935d79c8-281f-4ad8-8c6d-404c0e89653e","Type":"ContainerStarted","Data":"fad83b3394be0ef10337839aba7c76ad8647b7d18ac54d9936629565496bc5d8"} Oct 13 17:37:47 crc kubenswrapper[4720]: I1013 17:37:47.337758 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-5f4d5dfdc6-g4th9" Oct 13 17:37:47 crc kubenswrapper[4720]: I1013 17:37:47.342458 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757dd6zq7" event={"ID":"b7c96c4b-b0c5-4c82-a6ab-3878c394eab0","Type":"ContainerStarted","Data":"a0606aa324fb19d544409346f2dff3420f1d06038fa85f2d760f4a2704acfd27"} Oct 13 17:37:47 crc kubenswrapper[4720]: I1013 17:37:47.342690 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757dd6zq7" Oct 13 17:37:47 crc kubenswrapper[4720]: I1013 17:37:47.394582 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757dd6zq7" podStartSLOduration=3.544815897 podStartE2EDuration="34.394553658s" podCreationTimestamp="2025-10-13 17:37:13 +0000 UTC" firstStartedPulling="2025-10-13 17:37:15.756756612 +0000 UTC m=+781.214006744" lastFinishedPulling="2025-10-13 17:37:46.606494363 +0000 UTC m=+812.063744505" observedRunningTime="2025-10-13 17:37:47.386691996 +0000 UTC m=+812.843942128" watchObservedRunningTime="2025-10-13 17:37:47.394553658 +0000 UTC m=+812.851803830" Oct 13 17:37:47 crc kubenswrapper[4720]: I1013 17:37:47.411886 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-5f4d5dfdc6-g4th9" podStartSLOduration=3.1679605 podStartE2EDuration="34.411862253s" podCreationTimestamp="2025-10-13 17:37:13 +0000 UTC" firstStartedPulling="2025-10-13 17:37:15.496360852 +0000 UTC m=+780.953610984" lastFinishedPulling="2025-10-13 17:37:46.740262595 
+0000 UTC m=+812.197512737" observedRunningTime="2025-10-13 17:37:47.404867483 +0000 UTC m=+812.862117655" watchObservedRunningTime="2025-10-13 17:37:47.411862253 +0000 UTC m=+812.869112425" Oct 13 17:37:48 crc kubenswrapper[4720]: I1013 17:37:48.354091 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jmr8f" event={"ID":"8869e974-e5a9-4fc3-acb7-b1df6e207103","Type":"ContainerStarted","Data":"a8149019b0c282cd88c101e9149ee093ac019531bc2403b292427a7bf4dae962"} Oct 13 17:37:48 crc kubenswrapper[4720]: I1013 17:37:48.394175 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jmr8f" podStartSLOduration=2.956890679 podStartE2EDuration="5.394151987s" podCreationTimestamp="2025-10-13 17:37:43 +0000 UTC" firstStartedPulling="2025-10-13 17:37:45.296621751 +0000 UTC m=+810.753871873" lastFinishedPulling="2025-10-13 17:37:47.733883049 +0000 UTC m=+813.191133181" observedRunningTime="2025-10-13 17:37:48.38726656 +0000 UTC m=+813.844516742" watchObservedRunningTime="2025-10-13 17:37:48.394151987 +0000 UTC m=+813.851402149" Oct 13 17:37:49 crc kubenswrapper[4720]: I1013 17:37:49.588987 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2tds4"] Oct 13 17:37:49 crc kubenswrapper[4720]: E1013 17:37:49.589478 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddda9c4b-88a0-4d21-9edd-6ca1ac64ab1a" containerName="extract-utilities" Oct 13 17:37:49 crc kubenswrapper[4720]: I1013 17:37:49.589499 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddda9c4b-88a0-4d21-9edd-6ca1ac64ab1a" containerName="extract-utilities" Oct 13 17:37:49 crc kubenswrapper[4720]: E1013 17:37:49.589542 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddda9c4b-88a0-4d21-9edd-6ca1ac64ab1a" containerName="registry-server" Oct 13 17:37:49 crc kubenswrapper[4720]: I1013 17:37:49.589555 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddda9c4b-88a0-4d21-9edd-6ca1ac64ab1a" containerName="registry-server" Oct 13 17:37:49 crc kubenswrapper[4720]: E1013 17:37:49.589570 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddda9c4b-88a0-4d21-9edd-6ca1ac64ab1a" containerName="extract-content" Oct 13 17:37:49 crc kubenswrapper[4720]: I1013 17:37:49.589583 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddda9c4b-88a0-4d21-9edd-6ca1ac64ab1a" containerName="extract-content" Oct 13 17:37:49 crc kubenswrapper[4720]: I1013 17:37:49.589820 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="ddda9c4b-88a0-4d21-9edd-6ca1ac64ab1a" containerName="registry-server" Oct 13 17:37:49 crc kubenswrapper[4720]: I1013 17:37:49.591528 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2tds4" Oct 13 17:37:49 crc kubenswrapper[4720]: I1013 17:37:49.615141 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2tds4"] Oct 13 17:37:49 crc kubenswrapper[4720]: I1013 17:37:49.710722 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4a336d8-1a62-4304-8b89-b85bc4e66229-catalog-content\") pod \"community-operators-2tds4\" (UID: \"c4a336d8-1a62-4304-8b89-b85bc4e66229\") " pod="openshift-marketplace/community-operators-2tds4" Oct 13 17:37:49 crc kubenswrapper[4720]: I1013 17:37:49.710849 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4a336d8-1a62-4304-8b89-b85bc4e66229-utilities\") pod \"community-operators-2tds4\" (UID: \"c4a336d8-1a62-4304-8b89-b85bc4e66229\") " pod="openshift-marketplace/community-operators-2tds4" Oct 13 17:37:49 crc kubenswrapper[4720]: I1013 17:37:49.710918 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cr4d\" (UniqueName: \"kubernetes.io/projected/c4a336d8-1a62-4304-8b89-b85bc4e66229-kube-api-access-4cr4d\") pod \"community-operators-2tds4\" (UID: \"c4a336d8-1a62-4304-8b89-b85bc4e66229\") " pod="openshift-marketplace/community-operators-2tds4" Oct 13 17:37:49 crc kubenswrapper[4720]: I1013 17:37:49.812665 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cr4d\" (UniqueName: \"kubernetes.io/projected/c4a336d8-1a62-4304-8b89-b85bc4e66229-kube-api-access-4cr4d\") pod \"community-operators-2tds4\" (UID: \"c4a336d8-1a62-4304-8b89-b85bc4e66229\") " pod="openshift-marketplace/community-operators-2tds4" Oct 13 17:37:49 crc kubenswrapper[4720]: I1013 17:37:49.812762 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4a336d8-1a62-4304-8b89-b85bc4e66229-catalog-content\") pod \"community-operators-2tds4\" (UID: \"c4a336d8-1a62-4304-8b89-b85bc4e66229\") " pod="openshift-marketplace/community-operators-2tds4" Oct 13 17:37:49 crc kubenswrapper[4720]: I1013 17:37:49.813582 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4a336d8-1a62-4304-8b89-b85bc4e66229-catalog-content\") pod \"community-operators-2tds4\" (UID: \"c4a336d8-1a62-4304-8b89-b85bc4e66229\") " pod="openshift-marketplace/community-operators-2tds4" Oct 13 17:37:49 crc kubenswrapper[4720]: I1013 17:37:49.813880 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4a336d8-1a62-4304-8b89-b85bc4e66229-utilities\") pod \"community-operators-2tds4\" (UID: \"c4a336d8-1a62-4304-8b89-b85bc4e66229\") " pod="openshift-marketplace/community-operators-2tds4" Oct 13 17:37:49 crc kubenswrapper[4720]: I1013 17:37:49.814451 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4a336d8-1a62-4304-8b89-b85bc4e66229-utilities\") pod \"community-operators-2tds4\" (UID: \"c4a336d8-1a62-4304-8b89-b85bc4e66229\") " pod="openshift-marketplace/community-operators-2tds4" Oct 13 17:37:49 crc kubenswrapper[4720]: I1013 17:37:49.849487 4720 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-4cr4d\" (UniqueName: \"kubernetes.io/projected/c4a336d8-1a62-4304-8b89-b85bc4e66229-kube-api-access-4cr4d\") pod \"community-operators-2tds4\" (UID: \"c4a336d8-1a62-4304-8b89-b85bc4e66229\") " pod="openshift-marketplace/community-operators-2tds4" Oct 13 17:37:49 crc kubenswrapper[4720]: I1013 17:37:49.913648 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2tds4" Oct 13 17:37:50 crc kubenswrapper[4720]: I1013 17:37:50.452525 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2tds4"] Oct 13 17:37:51 crc kubenswrapper[4720]: I1013 17:37:51.380448 4720 generic.go:334] "Generic (PLEG): container finished" podID="c4a336d8-1a62-4304-8b89-b85bc4e66229" containerID="715180b08472a147ab291ad26f9acc602eca7e1f36d4cd1df638bcc57bc3581f" exitCode=0 Oct 13 17:37:51 crc kubenswrapper[4720]: I1013 17:37:51.380516 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2tds4" event={"ID":"c4a336d8-1a62-4304-8b89-b85bc4e66229","Type":"ContainerDied","Data":"715180b08472a147ab291ad26f9acc602eca7e1f36d4cd1df638bcc57bc3581f"} Oct 13 17:37:51 crc kubenswrapper[4720]: I1013 17:37:51.380986 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2tds4" event={"ID":"c4a336d8-1a62-4304-8b89-b85bc4e66229","Type":"ContainerStarted","Data":"aead2952901b17eea9a33ad50ce8cea7b9142c385e1d0a26eefea8f04e85654d"} Oct 13 17:37:52 crc kubenswrapper[4720]: I1013 17:37:52.390574 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2tds4" event={"ID":"c4a336d8-1a62-4304-8b89-b85bc4e66229","Type":"ContainerStarted","Data":"76f869c44b8621d6741d5b18ddb99008443b22446a2fca9737c1eeb9debcbb73"} Oct 13 17:37:53 crc kubenswrapper[4720]: I1013 17:37:53.403135 4720 generic.go:334] "Generic (PLEG): container finished" podID="c4a336d8-1a62-4304-8b89-b85bc4e66229" containerID="76f869c44b8621d6741d5b18ddb99008443b22446a2fca9737c1eeb9debcbb73" exitCode=0 Oct 13 17:37:53 crc kubenswrapper[4720]: I1013 17:37:53.403260 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2tds4" event={"ID":"c4a336d8-1a62-4304-8b89-b85bc4e66229","Type":"ContainerDied","Data":"76f869c44b8621d6741d5b18ddb99008443b22446a2fca9737c1eeb9debcbb73"} Oct 13 17:37:53 crc kubenswrapper[4720]: I1013 17:37:53.500542 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-797d478b46-m44cs" Oct 13 17:37:53 crc kubenswrapper[4720]: I1013 17:37:53.551447 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-57bb74c7bf-dkbrd" Oct 13 17:37:53 crc kubenswrapper[4720]: I1013 17:37:53.623684 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-5f4d5dfdc6-g4th9" Oct 13 17:37:53 crc kubenswrapper[4720]: I1013 17:37:53.736278 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-ffcdd6c94-xm8vk" Oct 13 17:37:54 crc kubenswrapper[4720]: I1013 17:37:54.301269 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jmr8f" Oct 13 17:37:54 crc 
kubenswrapper[4720]: I1013 17:37:54.302749 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jmr8f" Oct 13 17:37:54 crc kubenswrapper[4720]: I1013 17:37:54.379717 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jmr8f" Oct 13 17:37:54 crc kubenswrapper[4720]: I1013 17:37:54.415528 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2tds4" event={"ID":"c4a336d8-1a62-4304-8b89-b85bc4e66229","Type":"ContainerStarted","Data":"56dd79976c0d843f97942f415671f735c052f500bfae477f4e634ec0ecdfd824"} Oct 13 17:37:54 crc kubenswrapper[4720]: I1013 17:37:54.452868 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2tds4" podStartSLOduration=2.813738498 podStartE2EDuration="5.45282762s" podCreationTimestamp="2025-10-13 17:37:49 +0000 UTC" firstStartedPulling="2025-10-13 17:37:51.383256433 +0000 UTC m=+816.840506605" lastFinishedPulling="2025-10-13 17:37:54.022345605 +0000 UTC m=+819.479595727" observedRunningTime="2025-10-13 17:37:54.452099461 +0000 UTC m=+819.909349663" watchObservedRunningTime="2025-10-13 17:37:54.45282762 +0000 UTC m=+819.910077782" Oct 13 17:37:54 crc kubenswrapper[4720]: I1013 17:37:54.480461 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jmr8f" Oct 13 17:37:55 crc kubenswrapper[4720]: I1013 17:37:55.087623 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cc7fb757dd6zq7" Oct 13 17:37:56 crc kubenswrapper[4720]: I1013 17:37:56.776756 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jmr8f"] Oct 13 17:37:57 crc kubenswrapper[4720]: I1013 17:37:57.449772 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jmr8f" podUID="8869e974-e5a9-4fc3-acb7-b1df6e207103" containerName="registry-server" containerID="cri-o://a8149019b0c282cd88c101e9149ee093ac019531bc2403b292427a7bf4dae962" gracePeriod=2 Oct 13 17:37:57 crc kubenswrapper[4720]: I1013 17:37:57.962939 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jmr8f" Oct 13 17:37:58 crc kubenswrapper[4720]: I1013 17:37:58.045101 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4chnt\" (UniqueName: \"kubernetes.io/projected/8869e974-e5a9-4fc3-acb7-b1df6e207103-kube-api-access-4chnt\") pod \"8869e974-e5a9-4fc3-acb7-b1df6e207103\" (UID: \"8869e974-e5a9-4fc3-acb7-b1df6e207103\") " Oct 13 17:37:58 crc kubenswrapper[4720]: I1013 17:37:58.045304 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8869e974-e5a9-4fc3-acb7-b1df6e207103-utilities\") pod \"8869e974-e5a9-4fc3-acb7-b1df6e207103\" (UID: \"8869e974-e5a9-4fc3-acb7-b1df6e207103\") " Oct 13 17:37:58 crc kubenswrapper[4720]: I1013 17:37:58.045505 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8869e974-e5a9-4fc3-acb7-b1df6e207103-catalog-content\") pod \"8869e974-e5a9-4fc3-acb7-b1df6e207103\" (UID: \"8869e974-e5a9-4fc3-acb7-b1df6e207103\") " Oct 13 17:37:58 crc kubenswrapper[4720]: I1013 17:37:58.046442 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8869e974-e5a9-4fc3-acb7-b1df6e207103-utilities" (OuterVolumeSpecName: "utilities") pod "8869e974-e5a9-4fc3-acb7-b1df6e207103" (UID: "8869e974-e5a9-4fc3-acb7-b1df6e207103"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 17:37:58 crc kubenswrapper[4720]: I1013 17:37:58.052991 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8869e974-e5a9-4fc3-acb7-b1df6e207103-kube-api-access-4chnt" (OuterVolumeSpecName: "kube-api-access-4chnt") pod "8869e974-e5a9-4fc3-acb7-b1df6e207103" (UID: "8869e974-e5a9-4fc3-acb7-b1df6e207103"). InnerVolumeSpecName "kube-api-access-4chnt". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:37:58 crc kubenswrapper[4720]: I1013 17:37:58.147217 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4chnt\" (UniqueName: \"kubernetes.io/projected/8869e974-e5a9-4fc3-acb7-b1df6e207103-kube-api-access-4chnt\") on node \"crc\" DevicePath \"\"" Oct 13 17:37:58 crc kubenswrapper[4720]: I1013 17:37:58.147248 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8869e974-e5a9-4fc3-acb7-b1df6e207103-utilities\") on node \"crc\" DevicePath \"\"" Oct 13 17:37:58 crc kubenswrapper[4720]: I1013 17:37:58.463050 4720 generic.go:334] "Generic (PLEG): container finished" podID="8869e974-e5a9-4fc3-acb7-b1df6e207103" containerID="a8149019b0c282cd88c101e9149ee093ac019531bc2403b292427a7bf4dae962" exitCode=0 Oct 13 17:37:58 crc kubenswrapper[4720]: I1013 17:37:58.463118 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jmr8f" event={"ID":"8869e974-e5a9-4fc3-acb7-b1df6e207103","Type":"ContainerDied","Data":"a8149019b0c282cd88c101e9149ee093ac019531bc2403b292427a7bf4dae962"} Oct 13 17:37:58 crc kubenswrapper[4720]: I1013 17:37:58.463154 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jmr8f" Oct 13 17:37:58 crc kubenswrapper[4720]: I1013 17:37:58.463177 4720 scope.go:117] "RemoveContainer" containerID="a8149019b0c282cd88c101e9149ee093ac019531bc2403b292427a7bf4dae962" Oct 13 17:37:58 crc kubenswrapper[4720]: I1013 17:37:58.463160 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jmr8f" event={"ID":"8869e974-e5a9-4fc3-acb7-b1df6e207103","Type":"ContainerDied","Data":"e7b276e1d6086d2598a6ffbb25682de5fe9f3a8ca5db04317870cfcfe37332ce"} Oct 13 17:37:58 crc kubenswrapper[4720]: I1013 17:37:58.496758 4720 scope.go:117] "RemoveContainer" containerID="8eb1260dfac155e1cdc3eb55a4469d40f02fb80d1fb7f609b210ca348f51c7eb" Oct 13 17:37:58 crc kubenswrapper[4720]: I1013 17:37:58.526747 4720 scope.go:117] "RemoveContainer" containerID="cf3f0073fb7f23aad3a1373c221f84533ec4d88eef82a0379210cc558915355e" Oct 13 17:37:58 crc kubenswrapper[4720]: I1013 17:37:58.571209 4720 scope.go:117] "RemoveContainer" containerID="a8149019b0c282cd88c101e9149ee093ac019531bc2403b292427a7bf4dae962" Oct 13 17:37:58 crc kubenswrapper[4720]: E1013 17:37:58.571907 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8149019b0c282cd88c101e9149ee093ac019531bc2403b292427a7bf4dae962\": container with ID starting with a8149019b0c282cd88c101e9149ee093ac019531bc2403b292427a7bf4dae962 not found: ID does not exist" containerID="a8149019b0c282cd88c101e9149ee093ac019531bc2403b292427a7bf4dae962" Oct 13 17:37:58 crc kubenswrapper[4720]: I1013 17:37:58.571975 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8149019b0c282cd88c101e9149ee093ac019531bc2403b292427a7bf4dae962"} err="failed to get container status \"a8149019b0c282cd88c101e9149ee093ac019531bc2403b292427a7bf4dae962\": rpc error: code = NotFound desc = could not find container \"a8149019b0c282cd88c101e9149ee093ac019531bc2403b292427a7bf4dae962\": container with ID starting with a8149019b0c282cd88c101e9149ee093ac019531bc2403b292427a7bf4dae962 not found: ID does not exist" Oct 13 17:37:58 crc kubenswrapper[4720]: I1013 17:37:58.572018 4720 scope.go:117] "RemoveContainer" containerID="8eb1260dfac155e1cdc3eb55a4469d40f02fb80d1fb7f609b210ca348f51c7eb" Oct 13 17:37:58 crc kubenswrapper[4720]: E1013 17:37:58.572652 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8eb1260dfac155e1cdc3eb55a4469d40f02fb80d1fb7f609b210ca348f51c7eb\": container with ID starting with 8eb1260dfac155e1cdc3eb55a4469d40f02fb80d1fb7f609b210ca348f51c7eb not found: ID does not exist" containerID="8eb1260dfac155e1cdc3eb55a4469d40f02fb80d1fb7f609b210ca348f51c7eb" Oct 13 17:37:58 crc kubenswrapper[4720]: I1013 17:37:58.572688 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8eb1260dfac155e1cdc3eb55a4469d40f02fb80d1fb7f609b210ca348f51c7eb"} err="failed to get container status \"8eb1260dfac155e1cdc3eb55a4469d40f02fb80d1fb7f609b210ca348f51c7eb\": rpc error: code = NotFound desc = could not find container \"8eb1260dfac155e1cdc3eb55a4469d40f02fb80d1fb7f609b210ca348f51c7eb\": container with ID starting with 8eb1260dfac155e1cdc3eb55a4469d40f02fb80d1fb7f609b210ca348f51c7eb not found: ID does not exist" Oct 13 17:37:58 crc kubenswrapper[4720]: I1013 17:37:58.572716 4720 scope.go:117] "RemoveContainer" 
containerID="cf3f0073fb7f23aad3a1373c221f84533ec4d88eef82a0379210cc558915355e" Oct 13 17:37:58 crc kubenswrapper[4720]: E1013 17:37:58.573303 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf3f0073fb7f23aad3a1373c221f84533ec4d88eef82a0379210cc558915355e\": container with ID starting with cf3f0073fb7f23aad3a1373c221f84533ec4d88eef82a0379210cc558915355e not found: ID does not exist" containerID="cf3f0073fb7f23aad3a1373c221f84533ec4d88eef82a0379210cc558915355e" Oct 13 17:37:58 crc kubenswrapper[4720]: I1013 17:37:58.573338 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf3f0073fb7f23aad3a1373c221f84533ec4d88eef82a0379210cc558915355e"} err="failed to get container status \"cf3f0073fb7f23aad3a1373c221f84533ec4d88eef82a0379210cc558915355e\": rpc error: code = NotFound desc = could not find container \"cf3f0073fb7f23aad3a1373c221f84533ec4d88eef82a0379210cc558915355e\": container with ID starting with cf3f0073fb7f23aad3a1373c221f84533ec4d88eef82a0379210cc558915355e not found: ID does not exist" Oct 13 17:37:59 crc kubenswrapper[4720]: I1013 17:37:59.027384 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8869e974-e5a9-4fc3-acb7-b1df6e207103-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8869e974-e5a9-4fc3-acb7-b1df6e207103" (UID: "8869e974-e5a9-4fc3-acb7-b1df6e207103"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 17:37:59 crc kubenswrapper[4720]: I1013 17:37:59.061081 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8869e974-e5a9-4fc3-acb7-b1df6e207103-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 13 17:37:59 crc kubenswrapper[4720]: I1013 17:37:59.111779 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jmr8f"] Oct 13 17:37:59 crc kubenswrapper[4720]: I1013 17:37:59.120742 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jmr8f"] Oct 13 17:37:59 crc kubenswrapper[4720]: I1013 17:37:59.183497 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8869e974-e5a9-4fc3-acb7-b1df6e207103" path="/var/lib/kubelet/pods/8869e974-e5a9-4fc3-acb7-b1df6e207103/volumes" Oct 13 17:37:59 crc kubenswrapper[4720]: I1013 17:37:59.914450 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2tds4" Oct 13 17:37:59 crc kubenswrapper[4720]: I1013 17:37:59.914540 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2tds4" Oct 13 17:37:59 crc kubenswrapper[4720]: I1013 17:37:59.975904 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2tds4" Oct 13 17:38:00 crc kubenswrapper[4720]: I1013 17:38:00.558889 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2tds4" Oct 13 17:38:01 crc kubenswrapper[4720]: I1013 17:38:01.576720 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2tds4"] Oct 13 17:38:02 crc kubenswrapper[4720]: I1013 17:38:02.499842 4720 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/community-operators-2tds4" podUID="c4a336d8-1a62-4304-8b89-b85bc4e66229" containerName="registry-server" containerID="cri-o://56dd79976c0d843f97942f415671f735c052f500bfae477f4e634ec0ecdfd824" gracePeriod=2 Oct 13 17:38:03 crc kubenswrapper[4720]: I1013 17:38:03.279802 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2tds4" Oct 13 17:38:03 crc kubenswrapper[4720]: I1013 17:38:03.327025 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4a336d8-1a62-4304-8b89-b85bc4e66229-utilities\") pod \"c4a336d8-1a62-4304-8b89-b85bc4e66229\" (UID: \"c4a336d8-1a62-4304-8b89-b85bc4e66229\") " Oct 13 17:38:03 crc kubenswrapper[4720]: I1013 17:38:03.327093 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4cr4d\" (UniqueName: \"kubernetes.io/projected/c4a336d8-1a62-4304-8b89-b85bc4e66229-kube-api-access-4cr4d\") pod \"c4a336d8-1a62-4304-8b89-b85bc4e66229\" (UID: \"c4a336d8-1a62-4304-8b89-b85bc4e66229\") " Oct 13 17:38:03 crc kubenswrapper[4720]: I1013 17:38:03.327133 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4a336d8-1a62-4304-8b89-b85bc4e66229-catalog-content\") pod \"c4a336d8-1a62-4304-8b89-b85bc4e66229\" (UID: \"c4a336d8-1a62-4304-8b89-b85bc4e66229\") " Oct 13 17:38:03 crc kubenswrapper[4720]: I1013 17:38:03.329918 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4a336d8-1a62-4304-8b89-b85bc4e66229-utilities" (OuterVolumeSpecName: "utilities") pod "c4a336d8-1a62-4304-8b89-b85bc4e66229" (UID: "c4a336d8-1a62-4304-8b89-b85bc4e66229"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 17:38:03 crc kubenswrapper[4720]: I1013 17:38:03.336115 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4a336d8-1a62-4304-8b89-b85bc4e66229-kube-api-access-4cr4d" (OuterVolumeSpecName: "kube-api-access-4cr4d") pod "c4a336d8-1a62-4304-8b89-b85bc4e66229" (UID: "c4a336d8-1a62-4304-8b89-b85bc4e66229"). InnerVolumeSpecName "kube-api-access-4cr4d". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:38:03 crc kubenswrapper[4720]: I1013 17:38:03.375180 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4a336d8-1a62-4304-8b89-b85bc4e66229-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c4a336d8-1a62-4304-8b89-b85bc4e66229" (UID: "c4a336d8-1a62-4304-8b89-b85bc4e66229"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 17:38:03 crc kubenswrapper[4720]: I1013 17:38:03.429247 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4a336d8-1a62-4304-8b89-b85bc4e66229-utilities\") on node \"crc\" DevicePath \"\"" Oct 13 17:38:03 crc kubenswrapper[4720]: I1013 17:38:03.429336 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4cr4d\" (UniqueName: \"kubernetes.io/projected/c4a336d8-1a62-4304-8b89-b85bc4e66229-kube-api-access-4cr4d\") on node \"crc\" DevicePath \"\"" Oct 13 17:38:03 crc kubenswrapper[4720]: I1013 17:38:03.429359 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4a336d8-1a62-4304-8b89-b85bc4e66229-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 13 17:38:03 crc kubenswrapper[4720]: I1013 17:38:03.513102 4720 generic.go:334] "Generic (PLEG): container finished" podID="c4a336d8-1a62-4304-8b89-b85bc4e66229" containerID="56dd79976c0d843f97942f415671f735c052f500bfae477f4e634ec0ecdfd824" exitCode=0 Oct 13 17:38:03 crc kubenswrapper[4720]: I1013 17:38:03.513179 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2tds4" Oct 13 17:38:03 crc kubenswrapper[4720]: I1013 17:38:03.513186 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2tds4" event={"ID":"c4a336d8-1a62-4304-8b89-b85bc4e66229","Type":"ContainerDied","Data":"56dd79976c0d843f97942f415671f735c052f500bfae477f4e634ec0ecdfd824"} Oct 13 17:38:03 crc kubenswrapper[4720]: I1013 17:38:03.513315 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2tds4" event={"ID":"c4a336d8-1a62-4304-8b89-b85bc4e66229","Type":"ContainerDied","Data":"aead2952901b17eea9a33ad50ce8cea7b9142c385e1d0a26eefea8f04e85654d"} Oct 13 17:38:03 crc kubenswrapper[4720]: I1013 17:38:03.513369 4720 scope.go:117] "RemoveContainer" containerID="56dd79976c0d843f97942f415671f735c052f500bfae477f4e634ec0ecdfd824" Oct 13 17:38:03 crc kubenswrapper[4720]: I1013 17:38:03.542546 4720 scope.go:117] "RemoveContainer" containerID="76f869c44b8621d6741d5b18ddb99008443b22446a2fca9737c1eeb9debcbb73" Oct 13 17:38:03 crc kubenswrapper[4720]: I1013 17:38:03.559391 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2tds4"] Oct 13 17:38:03 crc kubenswrapper[4720]: I1013 17:38:03.567614 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2tds4"] Oct 13 17:38:03 crc kubenswrapper[4720]: I1013 17:38:03.587873 4720 scope.go:117] "RemoveContainer" containerID="715180b08472a147ab291ad26f9acc602eca7e1f36d4cd1df638bcc57bc3581f" Oct 13 17:38:03 crc kubenswrapper[4720]: I1013 17:38:03.613828 4720 scope.go:117] "RemoveContainer" containerID="56dd79976c0d843f97942f415671f735c052f500bfae477f4e634ec0ecdfd824" Oct 13 17:38:03 crc kubenswrapper[4720]: E1013 17:38:03.614343 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56dd79976c0d843f97942f415671f735c052f500bfae477f4e634ec0ecdfd824\": container with ID starting with 56dd79976c0d843f97942f415671f735c052f500bfae477f4e634ec0ecdfd824 not found: ID does not exist" containerID="56dd79976c0d843f97942f415671f735c052f500bfae477f4e634ec0ecdfd824" Oct 13 17:38:03 crc kubenswrapper[4720]: I1013 17:38:03.614402 
4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56dd79976c0d843f97942f415671f735c052f500bfae477f4e634ec0ecdfd824"} err="failed to get container status \"56dd79976c0d843f97942f415671f735c052f500bfae477f4e634ec0ecdfd824\": rpc error: code = NotFound desc = could not find container \"56dd79976c0d843f97942f415671f735c052f500bfae477f4e634ec0ecdfd824\": container with ID starting with 56dd79976c0d843f97942f415671f735c052f500bfae477f4e634ec0ecdfd824 not found: ID does not exist" Oct 13 17:38:03 crc kubenswrapper[4720]: I1013 17:38:03.614421 4720 scope.go:117] "RemoveContainer" containerID="76f869c44b8621d6741d5b18ddb99008443b22446a2fca9737c1eeb9debcbb73" Oct 13 17:38:03 crc kubenswrapper[4720]: E1013 17:38:03.615077 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76f869c44b8621d6741d5b18ddb99008443b22446a2fca9737c1eeb9debcbb73\": container with ID starting with 76f869c44b8621d6741d5b18ddb99008443b22446a2fca9737c1eeb9debcbb73 not found: ID does not exist" containerID="76f869c44b8621d6741d5b18ddb99008443b22446a2fca9737c1eeb9debcbb73" Oct 13 17:38:03 crc kubenswrapper[4720]: I1013 17:38:03.615105 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76f869c44b8621d6741d5b18ddb99008443b22446a2fca9737c1eeb9debcbb73"} err="failed to get container status \"76f869c44b8621d6741d5b18ddb99008443b22446a2fca9737c1eeb9debcbb73\": rpc error: code = NotFound desc = could not find container \"76f869c44b8621d6741d5b18ddb99008443b22446a2fca9737c1eeb9debcbb73\": container with ID starting with 76f869c44b8621d6741d5b18ddb99008443b22446a2fca9737c1eeb9debcbb73 not found: ID does not exist" Oct 13 17:38:03 crc kubenswrapper[4720]: I1013 17:38:03.615125 4720 scope.go:117] "RemoveContainer" containerID="715180b08472a147ab291ad26f9acc602eca7e1f36d4cd1df638bcc57bc3581f" Oct 13 17:38:03 crc kubenswrapper[4720]: E1013 17:38:03.615454 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"715180b08472a147ab291ad26f9acc602eca7e1f36d4cd1df638bcc57bc3581f\": container with ID starting with 715180b08472a147ab291ad26f9acc602eca7e1f36d4cd1df638bcc57bc3581f not found: ID does not exist" containerID="715180b08472a147ab291ad26f9acc602eca7e1f36d4cd1df638bcc57bc3581f" Oct 13 17:38:03 crc kubenswrapper[4720]: I1013 17:38:03.615478 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"715180b08472a147ab291ad26f9acc602eca7e1f36d4cd1df638bcc57bc3581f"} err="failed to get container status \"715180b08472a147ab291ad26f9acc602eca7e1f36d4cd1df638bcc57bc3581f\": rpc error: code = NotFound desc = could not find container \"715180b08472a147ab291ad26f9acc602eca7e1f36d4cd1df638bcc57bc3581f\": container with ID starting with 715180b08472a147ab291ad26f9acc602eca7e1f36d4cd1df638bcc57bc3581f not found: ID does not exist" Oct 13 17:38:05 crc kubenswrapper[4720]: I1013 17:38:05.183730 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4a336d8-1a62-4304-8b89-b85bc4e66229" path="/var/lib/kubelet/pods/c4a336d8-1a62-4304-8b89-b85bc4e66229/volumes" Oct 13 17:38:19 crc kubenswrapper[4720]: I1013 17:38:19.144879 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-wv5jv"] Oct 13 17:38:19 crc kubenswrapper[4720]: E1013 17:38:19.145782 4720 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="c4a336d8-1a62-4304-8b89-b85bc4e66229" containerName="extract-content" Oct 13 17:38:19 crc kubenswrapper[4720]: I1013 17:38:19.145800 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4a336d8-1a62-4304-8b89-b85bc4e66229" containerName="extract-content" Oct 13 17:38:19 crc kubenswrapper[4720]: E1013 17:38:19.145818 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4a336d8-1a62-4304-8b89-b85bc4e66229" containerName="extract-utilities" Oct 13 17:38:19 crc kubenswrapper[4720]: I1013 17:38:19.145826 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4a336d8-1a62-4304-8b89-b85bc4e66229" containerName="extract-utilities" Oct 13 17:38:19 crc kubenswrapper[4720]: E1013 17:38:19.145844 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8869e974-e5a9-4fc3-acb7-b1df6e207103" containerName="extract-content" Oct 13 17:38:19 crc kubenswrapper[4720]: I1013 17:38:19.145852 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="8869e974-e5a9-4fc3-acb7-b1df6e207103" containerName="extract-content" Oct 13 17:38:19 crc kubenswrapper[4720]: E1013 17:38:19.145870 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4a336d8-1a62-4304-8b89-b85bc4e66229" containerName="registry-server" Oct 13 17:38:19 crc kubenswrapper[4720]: I1013 17:38:19.145878 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4a336d8-1a62-4304-8b89-b85bc4e66229" containerName="registry-server" Oct 13 17:38:19 crc kubenswrapper[4720]: E1013 17:38:19.145893 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8869e974-e5a9-4fc3-acb7-b1df6e207103" containerName="extract-utilities" Oct 13 17:38:19 crc kubenswrapper[4720]: I1013 17:38:19.145902 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="8869e974-e5a9-4fc3-acb7-b1df6e207103" containerName="extract-utilities" Oct 13 17:38:19 crc kubenswrapper[4720]: E1013 17:38:19.145930 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8869e974-e5a9-4fc3-acb7-b1df6e207103" containerName="registry-server" Oct 13 17:38:19 crc kubenswrapper[4720]: I1013 17:38:19.145937 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="8869e974-e5a9-4fc3-acb7-b1df6e207103" containerName="registry-server" Oct 13 17:38:19 crc kubenswrapper[4720]: I1013 17:38:19.146112 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="8869e974-e5a9-4fc3-acb7-b1df6e207103" containerName="registry-server" Oct 13 17:38:19 crc kubenswrapper[4720]: I1013 17:38:19.146125 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4a336d8-1a62-4304-8b89-b85bc4e66229" containerName="registry-server" Oct 13 17:38:19 crc kubenswrapper[4720]: I1013 17:38:19.146988 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-wv5jv" Oct 13 17:38:19 crc kubenswrapper[4720]: I1013 17:38:19.155556 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-psqzc" Oct 13 17:38:19 crc kubenswrapper[4720]: I1013 17:38:19.155894 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Oct 13 17:38:19 crc kubenswrapper[4720]: I1013 17:38:19.156038 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Oct 13 17:38:19 crc kubenswrapper[4720]: I1013 17:38:19.156065 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Oct 13 17:38:19 crc kubenswrapper[4720]: I1013 17:38:19.159295 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-wv5jv"] Oct 13 17:38:19 crc kubenswrapper[4720]: I1013 17:38:19.194779 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78b891cd-26d1-4448-8675-bef9fbce4793-config\") pod \"dnsmasq-dns-675f4bcbfc-wv5jv\" (UID: \"78b891cd-26d1-4448-8675-bef9fbce4793\") " pod="openstack/dnsmasq-dns-675f4bcbfc-wv5jv" Oct 13 17:38:19 crc kubenswrapper[4720]: I1013 17:38:19.195381 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfj55\" (UniqueName: \"kubernetes.io/projected/78b891cd-26d1-4448-8675-bef9fbce4793-kube-api-access-gfj55\") pod \"dnsmasq-dns-675f4bcbfc-wv5jv\" (UID: \"78b891cd-26d1-4448-8675-bef9fbce4793\") " pod="openstack/dnsmasq-dns-675f4bcbfc-wv5jv" Oct 13 17:38:19 crc kubenswrapper[4720]: I1013 17:38:19.229028 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-nz4rl"] Oct 13 17:38:19 crc kubenswrapper[4720]: I1013 17:38:19.231058 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-nz4rl" Oct 13 17:38:19 crc kubenswrapper[4720]: I1013 17:38:19.234818 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Oct 13 17:38:19 crc kubenswrapper[4720]: I1013 17:38:19.237588 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-nz4rl"] Oct 13 17:38:19 crc kubenswrapper[4720]: I1013 17:38:19.296745 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78b891cd-26d1-4448-8675-bef9fbce4793-config\") pod \"dnsmasq-dns-675f4bcbfc-wv5jv\" (UID: \"78b891cd-26d1-4448-8675-bef9fbce4793\") " pod="openstack/dnsmasq-dns-675f4bcbfc-wv5jv" Oct 13 17:38:19 crc kubenswrapper[4720]: I1013 17:38:19.296839 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb7c286e-7815-4c65-98bd-cad6da562987-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-nz4rl\" (UID: \"cb7c286e-7815-4c65-98bd-cad6da562987\") " pod="openstack/dnsmasq-dns-78dd6ddcc-nz4rl" Oct 13 17:38:19 crc kubenswrapper[4720]: I1013 17:38:19.296959 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb7c286e-7815-4c65-98bd-cad6da562987-config\") pod \"dnsmasq-dns-78dd6ddcc-nz4rl\" (UID: \"cb7c286e-7815-4c65-98bd-cad6da562987\") " pod="openstack/dnsmasq-dns-78dd6ddcc-nz4rl" Oct 13 17:38:19 crc kubenswrapper[4720]: I1013 17:38:19.297003 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrscm\" (UniqueName: \"kubernetes.io/projected/cb7c286e-7815-4c65-98bd-cad6da562987-kube-api-access-vrscm\") pod \"dnsmasq-dns-78dd6ddcc-nz4rl\" (UID: \"cb7c286e-7815-4c65-98bd-cad6da562987\") " pod="openstack/dnsmasq-dns-78dd6ddcc-nz4rl" Oct 13 17:38:19 crc kubenswrapper[4720]: I1013 17:38:19.297048 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfj55\" (UniqueName: \"kubernetes.io/projected/78b891cd-26d1-4448-8675-bef9fbce4793-kube-api-access-gfj55\") pod \"dnsmasq-dns-675f4bcbfc-wv5jv\" (UID: \"78b891cd-26d1-4448-8675-bef9fbce4793\") " pod="openstack/dnsmasq-dns-675f4bcbfc-wv5jv" Oct 13 17:38:19 crc kubenswrapper[4720]: I1013 17:38:19.297665 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78b891cd-26d1-4448-8675-bef9fbce4793-config\") pod \"dnsmasq-dns-675f4bcbfc-wv5jv\" (UID: \"78b891cd-26d1-4448-8675-bef9fbce4793\") " pod="openstack/dnsmasq-dns-675f4bcbfc-wv5jv" Oct 13 17:38:19 crc kubenswrapper[4720]: I1013 17:38:19.314957 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfj55\" (UniqueName: \"kubernetes.io/projected/78b891cd-26d1-4448-8675-bef9fbce4793-kube-api-access-gfj55\") pod \"dnsmasq-dns-675f4bcbfc-wv5jv\" (UID: \"78b891cd-26d1-4448-8675-bef9fbce4793\") " pod="openstack/dnsmasq-dns-675f4bcbfc-wv5jv" Oct 13 17:38:19 crc kubenswrapper[4720]: I1013 17:38:19.398606 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb7c286e-7815-4c65-98bd-cad6da562987-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-nz4rl\" (UID: \"cb7c286e-7815-4c65-98bd-cad6da562987\") " pod="openstack/dnsmasq-dns-78dd6ddcc-nz4rl" Oct 13 17:38:19 crc kubenswrapper[4720]: I1013 
17:38:19.398752 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb7c286e-7815-4c65-98bd-cad6da562987-config\") pod \"dnsmasq-dns-78dd6ddcc-nz4rl\" (UID: \"cb7c286e-7815-4c65-98bd-cad6da562987\") " pod="openstack/dnsmasq-dns-78dd6ddcc-nz4rl" Oct 13 17:38:19 crc kubenswrapper[4720]: I1013 17:38:19.398824 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrscm\" (UniqueName: \"kubernetes.io/projected/cb7c286e-7815-4c65-98bd-cad6da562987-kube-api-access-vrscm\") pod \"dnsmasq-dns-78dd6ddcc-nz4rl\" (UID: \"cb7c286e-7815-4c65-98bd-cad6da562987\") " pod="openstack/dnsmasq-dns-78dd6ddcc-nz4rl" Oct 13 17:38:19 crc kubenswrapper[4720]: I1013 17:38:19.399999 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb7c286e-7815-4c65-98bd-cad6da562987-config\") pod \"dnsmasq-dns-78dd6ddcc-nz4rl\" (UID: \"cb7c286e-7815-4c65-98bd-cad6da562987\") " pod="openstack/dnsmasq-dns-78dd6ddcc-nz4rl" Oct 13 17:38:19 crc kubenswrapper[4720]: I1013 17:38:19.401083 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb7c286e-7815-4c65-98bd-cad6da562987-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-nz4rl\" (UID: \"cb7c286e-7815-4c65-98bd-cad6da562987\") " pod="openstack/dnsmasq-dns-78dd6ddcc-nz4rl" Oct 13 17:38:19 crc kubenswrapper[4720]: I1013 17:38:19.416037 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrscm\" (UniqueName: \"kubernetes.io/projected/cb7c286e-7815-4c65-98bd-cad6da562987-kube-api-access-vrscm\") pod \"dnsmasq-dns-78dd6ddcc-nz4rl\" (UID: \"cb7c286e-7815-4c65-98bd-cad6da562987\") " pod="openstack/dnsmasq-dns-78dd6ddcc-nz4rl" Oct 13 17:38:19 crc kubenswrapper[4720]: I1013 17:38:19.472795 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-wv5jv" Oct 13 17:38:19 crc kubenswrapper[4720]: I1013 17:38:19.544498 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-nz4rl" Oct 13 17:38:19 crc kubenswrapper[4720]: I1013 17:38:19.981348 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-wv5jv"] Oct 13 17:38:20 crc kubenswrapper[4720]: I1013 17:38:20.044994 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-nz4rl"] Oct 13 17:38:20 crc kubenswrapper[4720]: W1013 17:38:20.048565 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcb7c286e_7815_4c65_98bd_cad6da562987.slice/crio-b506e5c169c8b8d2e7208a7bcd164046401ce12d876b0fba9c95f29caa82b5de WatchSource:0}: Error finding container b506e5c169c8b8d2e7208a7bcd164046401ce12d876b0fba9c95f29caa82b5de: Status 404 returned error can't find the container with id b506e5c169c8b8d2e7208a7bcd164046401ce12d876b0fba9c95f29caa82b5de Oct 13 17:38:20 crc kubenswrapper[4720]: I1013 17:38:20.678906 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-wv5jv" event={"ID":"78b891cd-26d1-4448-8675-bef9fbce4793","Type":"ContainerStarted","Data":"62d48523441796e3b35df04fff56b3c259a482636a0c7ecffd0c5abea30cb703"} Oct 13 17:38:20 crc kubenswrapper[4720]: I1013 17:38:20.681140 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-nz4rl" event={"ID":"cb7c286e-7815-4c65-98bd-cad6da562987","Type":"ContainerStarted","Data":"b506e5c169c8b8d2e7208a7bcd164046401ce12d876b0fba9c95f29caa82b5de"} Oct 13 17:38:22 crc kubenswrapper[4720]: I1013 17:38:22.194434 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-wv5jv"] Oct 13 17:38:22 crc kubenswrapper[4720]: I1013 17:38:22.207742 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-4mnjx"] Oct 13 17:38:22 crc kubenswrapper[4720]: I1013 17:38:22.213623 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-4mnjx" Oct 13 17:38:22 crc kubenswrapper[4720]: I1013 17:38:22.229601 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-4mnjx"] Oct 13 17:38:22 crc kubenswrapper[4720]: I1013 17:38:22.251465 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b20e6d23-eea6-4e05-95fc-100aa77c82f7-config\") pod \"dnsmasq-dns-666b6646f7-4mnjx\" (UID: \"b20e6d23-eea6-4e05-95fc-100aa77c82f7\") " pod="openstack/dnsmasq-dns-666b6646f7-4mnjx" Oct 13 17:38:22 crc kubenswrapper[4720]: I1013 17:38:22.251530 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b20e6d23-eea6-4e05-95fc-100aa77c82f7-dns-svc\") pod \"dnsmasq-dns-666b6646f7-4mnjx\" (UID: \"b20e6d23-eea6-4e05-95fc-100aa77c82f7\") " pod="openstack/dnsmasq-dns-666b6646f7-4mnjx" Oct 13 17:38:22 crc kubenswrapper[4720]: I1013 17:38:22.251642 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r84z8\" (UniqueName: \"kubernetes.io/projected/b20e6d23-eea6-4e05-95fc-100aa77c82f7-kube-api-access-r84z8\") pod \"dnsmasq-dns-666b6646f7-4mnjx\" (UID: \"b20e6d23-eea6-4e05-95fc-100aa77c82f7\") " pod="openstack/dnsmasq-dns-666b6646f7-4mnjx" Oct 13 17:38:22 crc kubenswrapper[4720]: I1013 17:38:22.352685 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b20e6d23-eea6-4e05-95fc-100aa77c82f7-config\") pod \"dnsmasq-dns-666b6646f7-4mnjx\" (UID: \"b20e6d23-eea6-4e05-95fc-100aa77c82f7\") " pod="openstack/dnsmasq-dns-666b6646f7-4mnjx" Oct 13 17:38:22 crc kubenswrapper[4720]: I1013 17:38:22.352738 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b20e6d23-eea6-4e05-95fc-100aa77c82f7-dns-svc\") pod \"dnsmasq-dns-666b6646f7-4mnjx\" (UID: \"b20e6d23-eea6-4e05-95fc-100aa77c82f7\") " pod="openstack/dnsmasq-dns-666b6646f7-4mnjx" Oct 13 17:38:22 crc kubenswrapper[4720]: I1013 17:38:22.352790 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r84z8\" (UniqueName: \"kubernetes.io/projected/b20e6d23-eea6-4e05-95fc-100aa77c82f7-kube-api-access-r84z8\") pod \"dnsmasq-dns-666b6646f7-4mnjx\" (UID: \"b20e6d23-eea6-4e05-95fc-100aa77c82f7\") " pod="openstack/dnsmasq-dns-666b6646f7-4mnjx" Oct 13 17:38:22 crc kubenswrapper[4720]: I1013 17:38:22.354755 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b20e6d23-eea6-4e05-95fc-100aa77c82f7-config\") pod \"dnsmasq-dns-666b6646f7-4mnjx\" (UID: \"b20e6d23-eea6-4e05-95fc-100aa77c82f7\") " pod="openstack/dnsmasq-dns-666b6646f7-4mnjx" Oct 13 17:38:22 crc kubenswrapper[4720]: I1013 17:38:22.355160 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b20e6d23-eea6-4e05-95fc-100aa77c82f7-dns-svc\") pod \"dnsmasq-dns-666b6646f7-4mnjx\" (UID: \"b20e6d23-eea6-4e05-95fc-100aa77c82f7\") " pod="openstack/dnsmasq-dns-666b6646f7-4mnjx" Oct 13 17:38:22 crc kubenswrapper[4720]: I1013 17:38:22.394879 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r84z8\" (UniqueName: 
\"kubernetes.io/projected/b20e6d23-eea6-4e05-95fc-100aa77c82f7-kube-api-access-r84z8\") pod \"dnsmasq-dns-666b6646f7-4mnjx\" (UID: \"b20e6d23-eea6-4e05-95fc-100aa77c82f7\") " pod="openstack/dnsmasq-dns-666b6646f7-4mnjx" Oct 13 17:38:22 crc kubenswrapper[4720]: I1013 17:38:22.455157 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-nz4rl"] Oct 13 17:38:22 crc kubenswrapper[4720]: I1013 17:38:22.488396 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-9jdhm"] Oct 13 17:38:22 crc kubenswrapper[4720]: I1013 17:38:22.490030 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-9jdhm" Oct 13 17:38:22 crc kubenswrapper[4720]: I1013 17:38:22.517932 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-9jdhm"] Oct 13 17:38:22 crc kubenswrapper[4720]: I1013 17:38:22.549517 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-4mnjx" Oct 13 17:38:22 crc kubenswrapper[4720]: I1013 17:38:22.662977 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54s2t\" (UniqueName: \"kubernetes.io/projected/7349a4fa-fffe-44e9-aebb-ceb486b7fb45-kube-api-access-54s2t\") pod \"dnsmasq-dns-57d769cc4f-9jdhm\" (UID: \"7349a4fa-fffe-44e9-aebb-ceb486b7fb45\") " pod="openstack/dnsmasq-dns-57d769cc4f-9jdhm" Oct 13 17:38:22 crc kubenswrapper[4720]: I1013 17:38:22.663031 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7349a4fa-fffe-44e9-aebb-ceb486b7fb45-config\") pod \"dnsmasq-dns-57d769cc4f-9jdhm\" (UID: \"7349a4fa-fffe-44e9-aebb-ceb486b7fb45\") " pod="openstack/dnsmasq-dns-57d769cc4f-9jdhm" Oct 13 17:38:22 crc kubenswrapper[4720]: I1013 17:38:22.663083 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7349a4fa-fffe-44e9-aebb-ceb486b7fb45-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-9jdhm\" (UID: \"7349a4fa-fffe-44e9-aebb-ceb486b7fb45\") " pod="openstack/dnsmasq-dns-57d769cc4f-9jdhm" Oct 13 17:38:22 crc kubenswrapper[4720]: I1013 17:38:22.764058 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54s2t\" (UniqueName: \"kubernetes.io/projected/7349a4fa-fffe-44e9-aebb-ceb486b7fb45-kube-api-access-54s2t\") pod \"dnsmasq-dns-57d769cc4f-9jdhm\" (UID: \"7349a4fa-fffe-44e9-aebb-ceb486b7fb45\") " pod="openstack/dnsmasq-dns-57d769cc4f-9jdhm" Oct 13 17:38:22 crc kubenswrapper[4720]: I1013 17:38:22.764106 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7349a4fa-fffe-44e9-aebb-ceb486b7fb45-config\") pod \"dnsmasq-dns-57d769cc4f-9jdhm\" (UID: \"7349a4fa-fffe-44e9-aebb-ceb486b7fb45\") " pod="openstack/dnsmasq-dns-57d769cc4f-9jdhm" Oct 13 17:38:22 crc kubenswrapper[4720]: I1013 17:38:22.764156 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7349a4fa-fffe-44e9-aebb-ceb486b7fb45-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-9jdhm\" (UID: \"7349a4fa-fffe-44e9-aebb-ceb486b7fb45\") " pod="openstack/dnsmasq-dns-57d769cc4f-9jdhm" Oct 13 17:38:22 crc kubenswrapper[4720]: I1013 17:38:22.765072 4720 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7349a4fa-fffe-44e9-aebb-ceb486b7fb45-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-9jdhm\" (UID: \"7349a4fa-fffe-44e9-aebb-ceb486b7fb45\") " pod="openstack/dnsmasq-dns-57d769cc4f-9jdhm" Oct 13 17:38:22 crc kubenswrapper[4720]: I1013 17:38:22.765120 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7349a4fa-fffe-44e9-aebb-ceb486b7fb45-config\") pod \"dnsmasq-dns-57d769cc4f-9jdhm\" (UID: \"7349a4fa-fffe-44e9-aebb-ceb486b7fb45\") " pod="openstack/dnsmasq-dns-57d769cc4f-9jdhm" Oct 13 17:38:22 crc kubenswrapper[4720]: I1013 17:38:22.782122 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54s2t\" (UniqueName: \"kubernetes.io/projected/7349a4fa-fffe-44e9-aebb-ceb486b7fb45-kube-api-access-54s2t\") pod \"dnsmasq-dns-57d769cc4f-9jdhm\" (UID: \"7349a4fa-fffe-44e9-aebb-ceb486b7fb45\") " pod="openstack/dnsmasq-dns-57d769cc4f-9jdhm" Oct 13 17:38:22 crc kubenswrapper[4720]: I1013 17:38:22.815581 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-9jdhm" Oct 13 17:38:23 crc kubenswrapper[4720]: I1013 17:38:23.344828 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Oct 13 17:38:23 crc kubenswrapper[4720]: I1013 17:38:23.346719 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 13 17:38:23 crc kubenswrapper[4720]: I1013 17:38:23.353482 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Oct 13 17:38:23 crc kubenswrapper[4720]: I1013 17:38:23.353561 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Oct 13 17:38:23 crc kubenswrapper[4720]: I1013 17:38:23.353731 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-4rmtl" Oct 13 17:38:23 crc kubenswrapper[4720]: I1013 17:38:23.353761 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Oct 13 17:38:23 crc kubenswrapper[4720]: I1013 17:38:23.353927 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Oct 13 17:38:23 crc kubenswrapper[4720]: I1013 17:38:23.354027 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Oct 13 17:38:23 crc kubenswrapper[4720]: I1013 17:38:23.355782 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Oct 13 17:38:23 crc kubenswrapper[4720]: I1013 17:38:23.359600 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 13 17:38:23 crc kubenswrapper[4720]: I1013 17:38:23.483613 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/af59309d-fcea-47ce-85b5-0eafbf780d08-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"af59309d-fcea-47ce-85b5-0eafbf780d08\") " pod="openstack/rabbitmq-server-0" Oct 13 17:38:23 crc kubenswrapper[4720]: I1013 17:38:23.483672 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: 
\"af59309d-fcea-47ce-85b5-0eafbf780d08\") " pod="openstack/rabbitmq-server-0" Oct 13 17:38:23 crc kubenswrapper[4720]: I1013 17:38:23.483704 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwjgj\" (UniqueName: \"kubernetes.io/projected/af59309d-fcea-47ce-85b5-0eafbf780d08-kube-api-access-kwjgj\") pod \"rabbitmq-server-0\" (UID: \"af59309d-fcea-47ce-85b5-0eafbf780d08\") " pod="openstack/rabbitmq-server-0" Oct 13 17:38:23 crc kubenswrapper[4720]: I1013 17:38:23.483755 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/af59309d-fcea-47ce-85b5-0eafbf780d08-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"af59309d-fcea-47ce-85b5-0eafbf780d08\") " pod="openstack/rabbitmq-server-0" Oct 13 17:38:23 crc kubenswrapper[4720]: I1013 17:38:23.484059 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/af59309d-fcea-47ce-85b5-0eafbf780d08-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"af59309d-fcea-47ce-85b5-0eafbf780d08\") " pod="openstack/rabbitmq-server-0" Oct 13 17:38:23 crc kubenswrapper[4720]: I1013 17:38:23.484225 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/af59309d-fcea-47ce-85b5-0eafbf780d08-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"af59309d-fcea-47ce-85b5-0eafbf780d08\") " pod="openstack/rabbitmq-server-0" Oct 13 17:38:23 crc kubenswrapper[4720]: I1013 17:38:23.484357 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/af59309d-fcea-47ce-85b5-0eafbf780d08-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"af59309d-fcea-47ce-85b5-0eafbf780d08\") " pod="openstack/rabbitmq-server-0" Oct 13 17:38:23 crc kubenswrapper[4720]: I1013 17:38:23.484400 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/af59309d-fcea-47ce-85b5-0eafbf780d08-pod-info\") pod \"rabbitmq-server-0\" (UID: \"af59309d-fcea-47ce-85b5-0eafbf780d08\") " pod="openstack/rabbitmq-server-0" Oct 13 17:38:23 crc kubenswrapper[4720]: I1013 17:38:23.484641 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/af59309d-fcea-47ce-85b5-0eafbf780d08-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"af59309d-fcea-47ce-85b5-0eafbf780d08\") " pod="openstack/rabbitmq-server-0" Oct 13 17:38:23 crc kubenswrapper[4720]: I1013 17:38:23.484685 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/af59309d-fcea-47ce-85b5-0eafbf780d08-server-conf\") pod \"rabbitmq-server-0\" (UID: \"af59309d-fcea-47ce-85b5-0eafbf780d08\") " pod="openstack/rabbitmq-server-0" Oct 13 17:38:23 crc kubenswrapper[4720]: I1013 17:38:23.484776 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/af59309d-fcea-47ce-85b5-0eafbf780d08-config-data\") pod \"rabbitmq-server-0\" (UID: \"af59309d-fcea-47ce-85b5-0eafbf780d08\") " 
pod="openstack/rabbitmq-server-0" Oct 13 17:38:23 crc kubenswrapper[4720]: I1013 17:38:23.587011 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/af59309d-fcea-47ce-85b5-0eafbf780d08-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"af59309d-fcea-47ce-85b5-0eafbf780d08\") " pod="openstack/rabbitmq-server-0" Oct 13 17:38:23 crc kubenswrapper[4720]: I1013 17:38:23.587099 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/af59309d-fcea-47ce-85b5-0eafbf780d08-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"af59309d-fcea-47ce-85b5-0eafbf780d08\") " pod="openstack/rabbitmq-server-0" Oct 13 17:38:23 crc kubenswrapper[4720]: I1013 17:38:23.587122 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/af59309d-fcea-47ce-85b5-0eafbf780d08-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"af59309d-fcea-47ce-85b5-0eafbf780d08\") " pod="openstack/rabbitmq-server-0" Oct 13 17:38:23 crc kubenswrapper[4720]: I1013 17:38:23.587144 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/af59309d-fcea-47ce-85b5-0eafbf780d08-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"af59309d-fcea-47ce-85b5-0eafbf780d08\") " pod="openstack/rabbitmq-server-0" Oct 13 17:38:23 crc kubenswrapper[4720]: I1013 17:38:23.587165 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/af59309d-fcea-47ce-85b5-0eafbf780d08-pod-info\") pod \"rabbitmq-server-0\" (UID: \"af59309d-fcea-47ce-85b5-0eafbf780d08\") " pod="openstack/rabbitmq-server-0" Oct 13 17:38:23 crc kubenswrapper[4720]: I1013 17:38:23.587226 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/af59309d-fcea-47ce-85b5-0eafbf780d08-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"af59309d-fcea-47ce-85b5-0eafbf780d08\") " pod="openstack/rabbitmq-server-0" Oct 13 17:38:23 crc kubenswrapper[4720]: I1013 17:38:23.587254 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/af59309d-fcea-47ce-85b5-0eafbf780d08-server-conf\") pod \"rabbitmq-server-0\" (UID: \"af59309d-fcea-47ce-85b5-0eafbf780d08\") " pod="openstack/rabbitmq-server-0" Oct 13 17:38:23 crc kubenswrapper[4720]: I1013 17:38:23.587277 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/af59309d-fcea-47ce-85b5-0eafbf780d08-config-data\") pod \"rabbitmq-server-0\" (UID: \"af59309d-fcea-47ce-85b5-0eafbf780d08\") " pod="openstack/rabbitmq-server-0" Oct 13 17:38:23 crc kubenswrapper[4720]: I1013 17:38:23.587295 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/af59309d-fcea-47ce-85b5-0eafbf780d08-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"af59309d-fcea-47ce-85b5-0eafbf780d08\") " pod="openstack/rabbitmq-server-0" Oct 13 17:38:23 crc kubenswrapper[4720]: I1013 17:38:23.587317 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"af59309d-fcea-47ce-85b5-0eafbf780d08\") " pod="openstack/rabbitmq-server-0" Oct 13 17:38:23 crc kubenswrapper[4720]: I1013 17:38:23.587336 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwjgj\" (UniqueName: \"kubernetes.io/projected/af59309d-fcea-47ce-85b5-0eafbf780d08-kube-api-access-kwjgj\") pod \"rabbitmq-server-0\" (UID: \"af59309d-fcea-47ce-85b5-0eafbf780d08\") " pod="openstack/rabbitmq-server-0" Oct 13 17:38:23 crc kubenswrapper[4720]: I1013 17:38:23.588904 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/af59309d-fcea-47ce-85b5-0eafbf780d08-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"af59309d-fcea-47ce-85b5-0eafbf780d08\") " pod="openstack/rabbitmq-server-0" Oct 13 17:38:23 crc kubenswrapper[4720]: I1013 17:38:23.589449 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/af59309d-fcea-47ce-85b5-0eafbf780d08-config-data\") pod \"rabbitmq-server-0\" (UID: \"af59309d-fcea-47ce-85b5-0eafbf780d08\") " pod="openstack/rabbitmq-server-0" Oct 13 17:38:23 crc kubenswrapper[4720]: I1013 17:38:23.589736 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/af59309d-fcea-47ce-85b5-0eafbf780d08-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"af59309d-fcea-47ce-85b5-0eafbf780d08\") " pod="openstack/rabbitmq-server-0" Oct 13 17:38:23 crc kubenswrapper[4720]: I1013 17:38:23.590462 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/af59309d-fcea-47ce-85b5-0eafbf780d08-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"af59309d-fcea-47ce-85b5-0eafbf780d08\") " pod="openstack/rabbitmq-server-0" Oct 13 17:38:23 crc kubenswrapper[4720]: I1013 17:38:23.591357 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/af59309d-fcea-47ce-85b5-0eafbf780d08-server-conf\") pod \"rabbitmq-server-0\" (UID: \"af59309d-fcea-47ce-85b5-0eafbf780d08\") " pod="openstack/rabbitmq-server-0" Oct 13 17:38:23 crc kubenswrapper[4720]: I1013 17:38:23.591618 4720 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"af59309d-fcea-47ce-85b5-0eafbf780d08\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-server-0" Oct 13 17:38:23 crc kubenswrapper[4720]: I1013 17:38:23.592208 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/af59309d-fcea-47ce-85b5-0eafbf780d08-pod-info\") pod \"rabbitmq-server-0\" (UID: \"af59309d-fcea-47ce-85b5-0eafbf780d08\") " pod="openstack/rabbitmq-server-0" Oct 13 17:38:23 crc kubenswrapper[4720]: I1013 17:38:23.592323 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/af59309d-fcea-47ce-85b5-0eafbf780d08-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"af59309d-fcea-47ce-85b5-0eafbf780d08\") " pod="openstack/rabbitmq-server-0" Oct 13 17:38:23 crc kubenswrapper[4720]: I1013 17:38:23.594047 4720 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/af59309d-fcea-47ce-85b5-0eafbf780d08-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"af59309d-fcea-47ce-85b5-0eafbf780d08\") " pod="openstack/rabbitmq-server-0" Oct 13 17:38:23 crc kubenswrapper[4720]: I1013 17:38:23.606552 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/af59309d-fcea-47ce-85b5-0eafbf780d08-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"af59309d-fcea-47ce-85b5-0eafbf780d08\") " pod="openstack/rabbitmq-server-0" Oct 13 17:38:23 crc kubenswrapper[4720]: I1013 17:38:23.608622 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwjgj\" (UniqueName: \"kubernetes.io/projected/af59309d-fcea-47ce-85b5-0eafbf780d08-kube-api-access-kwjgj\") pod \"rabbitmq-server-0\" (UID: \"af59309d-fcea-47ce-85b5-0eafbf780d08\") " pod="openstack/rabbitmq-server-0" Oct 13 17:38:23 crc kubenswrapper[4720]: I1013 17:38:23.613255 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"af59309d-fcea-47ce-85b5-0eafbf780d08\") " pod="openstack/rabbitmq-server-0" Oct 13 17:38:23 crc kubenswrapper[4720]: I1013 17:38:23.672101 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 13 17:38:23 crc kubenswrapper[4720]: I1013 17:38:23.693625 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 13 17:38:23 crc kubenswrapper[4720]: I1013 17:38:23.695653 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 13 17:38:23 crc kubenswrapper[4720]: I1013 17:38:23.705224 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Oct 13 17:38:23 crc kubenswrapper[4720]: I1013 17:38:23.705960 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Oct 13 17:38:23 crc kubenswrapper[4720]: I1013 17:38:23.705262 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Oct 13 17:38:23 crc kubenswrapper[4720]: I1013 17:38:23.706298 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Oct 13 17:38:23 crc kubenswrapper[4720]: I1013 17:38:23.705318 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-cvg4z" Oct 13 17:38:23 crc kubenswrapper[4720]: I1013 17:38:23.705362 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Oct 13 17:38:23 crc kubenswrapper[4720]: I1013 17:38:23.705695 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Oct 13 17:38:23 crc kubenswrapper[4720]: I1013 17:38:23.726721 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 13 17:38:23 crc kubenswrapper[4720]: I1013 17:38:23.789951 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/76c17d7a-8441-4b23-839b-f95ac54a6b24-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"76c17d7a-8441-4b23-839b-f95ac54a6b24\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 17:38:23 crc kubenswrapper[4720]: I1013 17:38:23.790012 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/76c17d7a-8441-4b23-839b-f95ac54a6b24-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"76c17d7a-8441-4b23-839b-f95ac54a6b24\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 17:38:23 crc kubenswrapper[4720]: I1013 17:38:23.790046 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6dc9\" (UniqueName: \"kubernetes.io/projected/76c17d7a-8441-4b23-839b-f95ac54a6b24-kube-api-access-p6dc9\") pod \"rabbitmq-cell1-server-0\" (UID: \"76c17d7a-8441-4b23-839b-f95ac54a6b24\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 17:38:23 crc kubenswrapper[4720]: I1013 17:38:23.790081 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/76c17d7a-8441-4b23-839b-f95ac54a6b24-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"76c17d7a-8441-4b23-839b-f95ac54a6b24\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 17:38:23 crc kubenswrapper[4720]: I1013 17:38:23.790156 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/76c17d7a-8441-4b23-839b-f95ac54a6b24-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"76c17d7a-8441-4b23-839b-f95ac54a6b24\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 17:38:23 crc kubenswrapper[4720]: I1013 17:38:23.790212 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/76c17d7a-8441-4b23-839b-f95ac54a6b24-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"76c17d7a-8441-4b23-839b-f95ac54a6b24\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 17:38:23 crc kubenswrapper[4720]: I1013 17:38:23.790237 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"76c17d7a-8441-4b23-839b-f95ac54a6b24\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 17:38:23 crc kubenswrapper[4720]: I1013 17:38:23.790271 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/76c17d7a-8441-4b23-839b-f95ac54a6b24-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"76c17d7a-8441-4b23-839b-f95ac54a6b24\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 17:38:23 crc kubenswrapper[4720]: I1013 17:38:23.790292 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/76c17d7a-8441-4b23-839b-f95ac54a6b24-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"76c17d7a-8441-4b23-839b-f95ac54a6b24\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 17:38:23 crc kubenswrapper[4720]: I1013 17:38:23.790331 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/76c17d7a-8441-4b23-839b-f95ac54a6b24-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" 
(UID: \"76c17d7a-8441-4b23-839b-f95ac54a6b24\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 17:38:23 crc kubenswrapper[4720]: I1013 17:38:23.790365 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/76c17d7a-8441-4b23-839b-f95ac54a6b24-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"76c17d7a-8441-4b23-839b-f95ac54a6b24\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 17:38:23 crc kubenswrapper[4720]: I1013 17:38:23.891413 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/76c17d7a-8441-4b23-839b-f95ac54a6b24-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"76c17d7a-8441-4b23-839b-f95ac54a6b24\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 17:38:23 crc kubenswrapper[4720]: I1013 17:38:23.891482 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/76c17d7a-8441-4b23-839b-f95ac54a6b24-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"76c17d7a-8441-4b23-839b-f95ac54a6b24\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 17:38:23 crc kubenswrapper[4720]: I1013 17:38:23.891531 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/76c17d7a-8441-4b23-839b-f95ac54a6b24-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"76c17d7a-8441-4b23-839b-f95ac54a6b24\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 17:38:23 crc kubenswrapper[4720]: I1013 17:38:23.891557 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/76c17d7a-8441-4b23-839b-f95ac54a6b24-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"76c17d7a-8441-4b23-839b-f95ac54a6b24\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 17:38:23 crc kubenswrapper[4720]: I1013 17:38:23.891585 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6dc9\" (UniqueName: \"kubernetes.io/projected/76c17d7a-8441-4b23-839b-f95ac54a6b24-kube-api-access-p6dc9\") pod \"rabbitmq-cell1-server-0\" (UID: \"76c17d7a-8441-4b23-839b-f95ac54a6b24\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 17:38:23 crc kubenswrapper[4720]: I1013 17:38:23.891612 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/76c17d7a-8441-4b23-839b-f95ac54a6b24-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"76c17d7a-8441-4b23-839b-f95ac54a6b24\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 17:38:23 crc kubenswrapper[4720]: I1013 17:38:23.891665 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/76c17d7a-8441-4b23-839b-f95ac54a6b24-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"76c17d7a-8441-4b23-839b-f95ac54a6b24\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 17:38:23 crc kubenswrapper[4720]: I1013 17:38:23.891698 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/76c17d7a-8441-4b23-839b-f95ac54a6b24-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"76c17d7a-8441-4b23-839b-f95ac54a6b24\") " 
pod="openstack/rabbitmq-cell1-server-0" Oct 13 17:38:23 crc kubenswrapper[4720]: I1013 17:38:23.891720 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"76c17d7a-8441-4b23-839b-f95ac54a6b24\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 17:38:23 crc kubenswrapper[4720]: I1013 17:38:23.891753 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/76c17d7a-8441-4b23-839b-f95ac54a6b24-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"76c17d7a-8441-4b23-839b-f95ac54a6b24\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 17:38:23 crc kubenswrapper[4720]: I1013 17:38:23.891773 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/76c17d7a-8441-4b23-839b-f95ac54a6b24-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"76c17d7a-8441-4b23-839b-f95ac54a6b24\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 17:38:23 crc kubenswrapper[4720]: I1013 17:38:23.893079 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/76c17d7a-8441-4b23-839b-f95ac54a6b24-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"76c17d7a-8441-4b23-839b-f95ac54a6b24\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 17:38:23 crc kubenswrapper[4720]: I1013 17:38:23.893391 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/76c17d7a-8441-4b23-839b-f95ac54a6b24-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"76c17d7a-8441-4b23-839b-f95ac54a6b24\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 17:38:23 crc kubenswrapper[4720]: I1013 17:38:23.893890 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/76c17d7a-8441-4b23-839b-f95ac54a6b24-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"76c17d7a-8441-4b23-839b-f95ac54a6b24\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 17:38:23 crc kubenswrapper[4720]: I1013 17:38:23.897390 4720 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"76c17d7a-8441-4b23-839b-f95ac54a6b24\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/rabbitmq-cell1-server-0" Oct 13 17:38:23 crc kubenswrapper[4720]: I1013 17:38:23.899705 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/76c17d7a-8441-4b23-839b-f95ac54a6b24-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"76c17d7a-8441-4b23-839b-f95ac54a6b24\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 17:38:23 crc kubenswrapper[4720]: I1013 17:38:23.901245 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/76c17d7a-8441-4b23-839b-f95ac54a6b24-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"76c17d7a-8441-4b23-839b-f95ac54a6b24\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 17:38:23 crc kubenswrapper[4720]: I1013 17:38:23.901401 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" 
(UniqueName: \"kubernetes.io/configmap/76c17d7a-8441-4b23-839b-f95ac54a6b24-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"76c17d7a-8441-4b23-839b-f95ac54a6b24\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 17:38:23 crc kubenswrapper[4720]: I1013 17:38:23.902838 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/76c17d7a-8441-4b23-839b-f95ac54a6b24-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"76c17d7a-8441-4b23-839b-f95ac54a6b24\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 17:38:23 crc kubenswrapper[4720]: I1013 17:38:23.910715 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/76c17d7a-8441-4b23-839b-f95ac54a6b24-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"76c17d7a-8441-4b23-839b-f95ac54a6b24\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 17:38:23 crc kubenswrapper[4720]: I1013 17:38:23.911328 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/76c17d7a-8441-4b23-839b-f95ac54a6b24-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"76c17d7a-8441-4b23-839b-f95ac54a6b24\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 17:38:23 crc kubenswrapper[4720]: I1013 17:38:23.924782 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6dc9\" (UniqueName: \"kubernetes.io/projected/76c17d7a-8441-4b23-839b-f95ac54a6b24-kube-api-access-p6dc9\") pod \"rabbitmq-cell1-server-0\" (UID: \"76c17d7a-8441-4b23-839b-f95ac54a6b24\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 17:38:23 crc kubenswrapper[4720]: I1013 17:38:23.933463 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"76c17d7a-8441-4b23-839b-f95ac54a6b24\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 17:38:24 crc kubenswrapper[4720]: I1013 17:38:24.024494 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 13 17:38:25 crc kubenswrapper[4720]: I1013 17:38:25.145994 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Oct 13 17:38:25 crc kubenswrapper[4720]: I1013 17:38:25.150318 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Oct 13 17:38:25 crc kubenswrapper[4720]: I1013 17:38:25.155049 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-l6pld" Oct 13 17:38:25 crc kubenswrapper[4720]: I1013 17:38:25.155671 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Oct 13 17:38:25 crc kubenswrapper[4720]: I1013 17:38:25.156120 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Oct 13 17:38:25 crc kubenswrapper[4720]: I1013 17:38:25.156308 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Oct 13 17:38:25 crc kubenswrapper[4720]: I1013 17:38:25.156819 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Oct 13 17:38:25 crc kubenswrapper[4720]: I1013 17:38:25.174695 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Oct 13 17:38:25 crc kubenswrapper[4720]: I1013 17:38:25.200337 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Oct 13 17:38:25 crc kubenswrapper[4720]: I1013 17:38:25.216349 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-0\" (UID: \"fe36eeb1-7f7f-424c-a56c-e96cffc3046d\") " pod="openstack/openstack-galera-0" Oct 13 17:38:25 crc kubenswrapper[4720]: I1013 17:38:25.216686 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/fe36eeb1-7f7f-424c-a56c-e96cffc3046d-config-data-default\") pod \"openstack-galera-0\" (UID: \"fe36eeb1-7f7f-424c-a56c-e96cffc3046d\") " pod="openstack/openstack-galera-0" Oct 13 17:38:25 crc kubenswrapper[4720]: I1013 17:38:25.216984 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzf6z\" (UniqueName: \"kubernetes.io/projected/fe36eeb1-7f7f-424c-a56c-e96cffc3046d-kube-api-access-dzf6z\") pod \"openstack-galera-0\" (UID: \"fe36eeb1-7f7f-424c-a56c-e96cffc3046d\") " pod="openstack/openstack-galera-0" Oct 13 17:38:25 crc kubenswrapper[4720]: I1013 17:38:25.217227 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe36eeb1-7f7f-424c-a56c-e96cffc3046d-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"fe36eeb1-7f7f-424c-a56c-e96cffc3046d\") " pod="openstack/openstack-galera-0" Oct 13 17:38:25 crc kubenswrapper[4720]: I1013 17:38:25.217422 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/fe36eeb1-7f7f-424c-a56c-e96cffc3046d-secrets\") pod \"openstack-galera-0\" (UID: \"fe36eeb1-7f7f-424c-a56c-e96cffc3046d\") " pod="openstack/openstack-galera-0" Oct 13 17:38:25 crc kubenswrapper[4720]: I1013 17:38:25.217634 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/fe36eeb1-7f7f-424c-a56c-e96cffc3046d-config-data-generated\") pod \"openstack-galera-0\" (UID: \"fe36eeb1-7f7f-424c-a56c-e96cffc3046d\") " pod="openstack/openstack-galera-0" Oct 13 
17:38:25 crc kubenswrapper[4720]: I1013 17:38:25.217841 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe36eeb1-7f7f-424c-a56c-e96cffc3046d-operator-scripts\") pod \"openstack-galera-0\" (UID: \"fe36eeb1-7f7f-424c-a56c-e96cffc3046d\") " pod="openstack/openstack-galera-0" Oct 13 17:38:25 crc kubenswrapper[4720]: I1013 17:38:25.218037 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe36eeb1-7f7f-424c-a56c-e96cffc3046d-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"fe36eeb1-7f7f-424c-a56c-e96cffc3046d\") " pod="openstack/openstack-galera-0" Oct 13 17:38:25 crc kubenswrapper[4720]: I1013 17:38:25.218156 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/fe36eeb1-7f7f-424c-a56c-e96cffc3046d-kolla-config\") pod \"openstack-galera-0\" (UID: \"fe36eeb1-7f7f-424c-a56c-e96cffc3046d\") " pod="openstack/openstack-galera-0" Oct 13 17:38:25 crc kubenswrapper[4720]: I1013 17:38:25.320270 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/fe36eeb1-7f7f-424c-a56c-e96cffc3046d-config-data-generated\") pod \"openstack-galera-0\" (UID: \"fe36eeb1-7f7f-424c-a56c-e96cffc3046d\") " pod="openstack/openstack-galera-0" Oct 13 17:38:25 crc kubenswrapper[4720]: I1013 17:38:25.320330 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe36eeb1-7f7f-424c-a56c-e96cffc3046d-operator-scripts\") pod \"openstack-galera-0\" (UID: \"fe36eeb1-7f7f-424c-a56c-e96cffc3046d\") " pod="openstack/openstack-galera-0" Oct 13 17:38:25 crc kubenswrapper[4720]: I1013 17:38:25.320372 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe36eeb1-7f7f-424c-a56c-e96cffc3046d-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"fe36eeb1-7f7f-424c-a56c-e96cffc3046d\") " pod="openstack/openstack-galera-0" Oct 13 17:38:25 crc kubenswrapper[4720]: I1013 17:38:25.320397 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/fe36eeb1-7f7f-424c-a56c-e96cffc3046d-kolla-config\") pod \"openstack-galera-0\" (UID: \"fe36eeb1-7f7f-424c-a56c-e96cffc3046d\") " pod="openstack/openstack-galera-0" Oct 13 17:38:25 crc kubenswrapper[4720]: I1013 17:38:25.320432 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-0\" (UID: \"fe36eeb1-7f7f-424c-a56c-e96cffc3046d\") " pod="openstack/openstack-galera-0" Oct 13 17:38:25 crc kubenswrapper[4720]: I1013 17:38:25.320474 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/fe36eeb1-7f7f-424c-a56c-e96cffc3046d-config-data-default\") pod \"openstack-galera-0\" (UID: \"fe36eeb1-7f7f-424c-a56c-e96cffc3046d\") " pod="openstack/openstack-galera-0" Oct 13 17:38:25 crc kubenswrapper[4720]: I1013 17:38:25.320529 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzf6z\" 
(UniqueName: \"kubernetes.io/projected/fe36eeb1-7f7f-424c-a56c-e96cffc3046d-kube-api-access-dzf6z\") pod \"openstack-galera-0\" (UID: \"fe36eeb1-7f7f-424c-a56c-e96cffc3046d\") " pod="openstack/openstack-galera-0" Oct 13 17:38:25 crc kubenswrapper[4720]: I1013 17:38:25.320551 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe36eeb1-7f7f-424c-a56c-e96cffc3046d-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"fe36eeb1-7f7f-424c-a56c-e96cffc3046d\") " pod="openstack/openstack-galera-0" Oct 13 17:38:25 crc kubenswrapper[4720]: I1013 17:38:25.320578 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/fe36eeb1-7f7f-424c-a56c-e96cffc3046d-secrets\") pod \"openstack-galera-0\" (UID: \"fe36eeb1-7f7f-424c-a56c-e96cffc3046d\") " pod="openstack/openstack-galera-0" Oct 13 17:38:25 crc kubenswrapper[4720]: I1013 17:38:25.321434 4720 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-0\" (UID: \"fe36eeb1-7f7f-424c-a56c-e96cffc3046d\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/openstack-galera-0" Oct 13 17:38:25 crc kubenswrapper[4720]: I1013 17:38:25.322361 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/fe36eeb1-7f7f-424c-a56c-e96cffc3046d-kolla-config\") pod \"openstack-galera-0\" (UID: \"fe36eeb1-7f7f-424c-a56c-e96cffc3046d\") " pod="openstack/openstack-galera-0" Oct 13 17:38:25 crc kubenswrapper[4720]: I1013 17:38:25.322410 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/fe36eeb1-7f7f-424c-a56c-e96cffc3046d-config-data-generated\") pod \"openstack-galera-0\" (UID: \"fe36eeb1-7f7f-424c-a56c-e96cffc3046d\") " pod="openstack/openstack-galera-0" Oct 13 17:38:25 crc kubenswrapper[4720]: I1013 17:38:25.322552 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/fe36eeb1-7f7f-424c-a56c-e96cffc3046d-config-data-default\") pod \"openstack-galera-0\" (UID: \"fe36eeb1-7f7f-424c-a56c-e96cffc3046d\") " pod="openstack/openstack-galera-0" Oct 13 17:38:25 crc kubenswrapper[4720]: I1013 17:38:25.322869 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe36eeb1-7f7f-424c-a56c-e96cffc3046d-operator-scripts\") pod \"openstack-galera-0\" (UID: \"fe36eeb1-7f7f-424c-a56c-e96cffc3046d\") " pod="openstack/openstack-galera-0" Oct 13 17:38:25 crc kubenswrapper[4720]: I1013 17:38:25.325503 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe36eeb1-7f7f-424c-a56c-e96cffc3046d-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"fe36eeb1-7f7f-424c-a56c-e96cffc3046d\") " pod="openstack/openstack-galera-0" Oct 13 17:38:25 crc kubenswrapper[4720]: I1013 17:38:25.327358 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe36eeb1-7f7f-424c-a56c-e96cffc3046d-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"fe36eeb1-7f7f-424c-a56c-e96cffc3046d\") " pod="openstack/openstack-galera-0" Oct 13 17:38:25 crc 
kubenswrapper[4720]: I1013 17:38:25.332653 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/fe36eeb1-7f7f-424c-a56c-e96cffc3046d-secrets\") pod \"openstack-galera-0\" (UID: \"fe36eeb1-7f7f-424c-a56c-e96cffc3046d\") " pod="openstack/openstack-galera-0" Oct 13 17:38:25 crc kubenswrapper[4720]: I1013 17:38:25.345354 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzf6z\" (UniqueName: \"kubernetes.io/projected/fe36eeb1-7f7f-424c-a56c-e96cffc3046d-kube-api-access-dzf6z\") pod \"openstack-galera-0\" (UID: \"fe36eeb1-7f7f-424c-a56c-e96cffc3046d\") " pod="openstack/openstack-galera-0" Oct 13 17:38:25 crc kubenswrapper[4720]: I1013 17:38:25.353344 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-0\" (UID: \"fe36eeb1-7f7f-424c-a56c-e96cffc3046d\") " pod="openstack/openstack-galera-0" Oct 13 17:38:25 crc kubenswrapper[4720]: I1013 17:38:25.473352 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Oct 13 17:38:26 crc kubenswrapper[4720]: I1013 17:38:26.615267 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Oct 13 17:38:26 crc kubenswrapper[4720]: I1013 17:38:26.618127 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Oct 13 17:38:26 crc kubenswrapper[4720]: I1013 17:38:26.621745 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Oct 13 17:38:26 crc kubenswrapper[4720]: I1013 17:38:26.621946 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Oct 13 17:38:26 crc kubenswrapper[4720]: I1013 17:38:26.622082 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Oct 13 17:38:26 crc kubenswrapper[4720]: I1013 17:38:26.622251 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-hm2r8" Oct 13 17:38:26 crc kubenswrapper[4720]: I1013 17:38:26.635540 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Oct 13 17:38:26 crc kubenswrapper[4720]: I1013 17:38:26.647338 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8338f95-b766-4ce8-b60e-020957cdee12-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"f8338f95-b766-4ce8-b60e-020957cdee12\") " pod="openstack/openstack-cell1-galera-0" Oct 13 17:38:26 crc kubenswrapper[4720]: I1013 17:38:26.647402 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f8338f95-b766-4ce8-b60e-020957cdee12-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"f8338f95-b766-4ce8-b60e-020957cdee12\") " pod="openstack/openstack-cell1-galera-0" Oct 13 17:38:26 crc kubenswrapper[4720]: I1013 17:38:26.647468 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6w5c\" (UniqueName: \"kubernetes.io/projected/f8338f95-b766-4ce8-b60e-020957cdee12-kube-api-access-m6w5c\") pod \"openstack-cell1-galera-0\" (UID: 
\"f8338f95-b766-4ce8-b60e-020957cdee12\") " pod="openstack/openstack-cell1-galera-0" Oct 13 17:38:26 crc kubenswrapper[4720]: I1013 17:38:26.647495 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f8338f95-b766-4ce8-b60e-020957cdee12-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"f8338f95-b766-4ce8-b60e-020957cdee12\") " pod="openstack/openstack-cell1-galera-0" Oct 13 17:38:26 crc kubenswrapper[4720]: I1013 17:38:26.647514 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8338f95-b766-4ce8-b60e-020957cdee12-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"f8338f95-b766-4ce8-b60e-020957cdee12\") " pod="openstack/openstack-cell1-galera-0" Oct 13 17:38:26 crc kubenswrapper[4720]: I1013 17:38:26.647552 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-cell1-galera-0\" (UID: \"f8338f95-b766-4ce8-b60e-020957cdee12\") " pod="openstack/openstack-cell1-galera-0" Oct 13 17:38:26 crc kubenswrapper[4720]: I1013 17:38:26.647582 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f8338f95-b766-4ce8-b60e-020957cdee12-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"f8338f95-b766-4ce8-b60e-020957cdee12\") " pod="openstack/openstack-cell1-galera-0" Oct 13 17:38:26 crc kubenswrapper[4720]: I1013 17:38:26.647603 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8338f95-b766-4ce8-b60e-020957cdee12-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"f8338f95-b766-4ce8-b60e-020957cdee12\") " pod="openstack/openstack-cell1-galera-0" Oct 13 17:38:26 crc kubenswrapper[4720]: I1013 17:38:26.647624 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/f8338f95-b766-4ce8-b60e-020957cdee12-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"f8338f95-b766-4ce8-b60e-020957cdee12\") " pod="openstack/openstack-cell1-galera-0" Oct 13 17:38:26 crc kubenswrapper[4720]: I1013 17:38:26.748393 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6w5c\" (UniqueName: \"kubernetes.io/projected/f8338f95-b766-4ce8-b60e-020957cdee12-kube-api-access-m6w5c\") pod \"openstack-cell1-galera-0\" (UID: \"f8338f95-b766-4ce8-b60e-020957cdee12\") " pod="openstack/openstack-cell1-galera-0" Oct 13 17:38:26 crc kubenswrapper[4720]: I1013 17:38:26.748438 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f8338f95-b766-4ce8-b60e-020957cdee12-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"f8338f95-b766-4ce8-b60e-020957cdee12\") " pod="openstack/openstack-cell1-galera-0" Oct 13 17:38:26 crc kubenswrapper[4720]: I1013 17:38:26.748456 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8338f95-b766-4ce8-b60e-020957cdee12-operator-scripts\") pod 
\"openstack-cell1-galera-0\" (UID: \"f8338f95-b766-4ce8-b60e-020957cdee12\") " pod="openstack/openstack-cell1-galera-0" Oct 13 17:38:26 crc kubenswrapper[4720]: I1013 17:38:26.748476 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-cell1-galera-0\" (UID: \"f8338f95-b766-4ce8-b60e-020957cdee12\") " pod="openstack/openstack-cell1-galera-0" Oct 13 17:38:26 crc kubenswrapper[4720]: I1013 17:38:26.748501 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f8338f95-b766-4ce8-b60e-020957cdee12-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"f8338f95-b766-4ce8-b60e-020957cdee12\") " pod="openstack/openstack-cell1-galera-0" Oct 13 17:38:26 crc kubenswrapper[4720]: I1013 17:38:26.748521 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8338f95-b766-4ce8-b60e-020957cdee12-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"f8338f95-b766-4ce8-b60e-020957cdee12\") " pod="openstack/openstack-cell1-galera-0" Oct 13 17:38:26 crc kubenswrapper[4720]: I1013 17:38:26.748541 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/f8338f95-b766-4ce8-b60e-020957cdee12-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"f8338f95-b766-4ce8-b60e-020957cdee12\") " pod="openstack/openstack-cell1-galera-0" Oct 13 17:38:26 crc kubenswrapper[4720]: I1013 17:38:26.748980 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8338f95-b766-4ce8-b60e-020957cdee12-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"f8338f95-b766-4ce8-b60e-020957cdee12\") " pod="openstack/openstack-cell1-galera-0" Oct 13 17:38:26 crc kubenswrapper[4720]: I1013 17:38:26.749012 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f8338f95-b766-4ce8-b60e-020957cdee12-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"f8338f95-b766-4ce8-b60e-020957cdee12\") " pod="openstack/openstack-cell1-galera-0" Oct 13 17:38:26 crc kubenswrapper[4720]: I1013 17:38:26.749635 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f8338f95-b766-4ce8-b60e-020957cdee12-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"f8338f95-b766-4ce8-b60e-020957cdee12\") " pod="openstack/openstack-cell1-galera-0" Oct 13 17:38:26 crc kubenswrapper[4720]: I1013 17:38:26.750559 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f8338f95-b766-4ce8-b60e-020957cdee12-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"f8338f95-b766-4ce8-b60e-020957cdee12\") " pod="openstack/openstack-cell1-galera-0" Oct 13 17:38:26 crc kubenswrapper[4720]: I1013 17:38:26.751543 4720 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-cell1-galera-0\" (UID: \"f8338f95-b766-4ce8-b60e-020957cdee12\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/openstack-cell1-galera-0" Oct 13 17:38:26 crc 
kubenswrapper[4720]: I1013 17:38:26.752001 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f8338f95-b766-4ce8-b60e-020957cdee12-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"f8338f95-b766-4ce8-b60e-020957cdee12\") " pod="openstack/openstack-cell1-galera-0" Oct 13 17:38:26 crc kubenswrapper[4720]: I1013 17:38:26.752057 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8338f95-b766-4ce8-b60e-020957cdee12-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"f8338f95-b766-4ce8-b60e-020957cdee12\") " pod="openstack/openstack-cell1-galera-0" Oct 13 17:38:26 crc kubenswrapper[4720]: I1013 17:38:26.758514 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8338f95-b766-4ce8-b60e-020957cdee12-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"f8338f95-b766-4ce8-b60e-020957cdee12\") " pod="openstack/openstack-cell1-galera-0" Oct 13 17:38:26 crc kubenswrapper[4720]: I1013 17:38:26.759084 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/f8338f95-b766-4ce8-b60e-020957cdee12-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"f8338f95-b766-4ce8-b60e-020957cdee12\") " pod="openstack/openstack-cell1-galera-0" Oct 13 17:38:26 crc kubenswrapper[4720]: I1013 17:38:26.760068 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8338f95-b766-4ce8-b60e-020957cdee12-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"f8338f95-b766-4ce8-b60e-020957cdee12\") " pod="openstack/openstack-cell1-galera-0" Oct 13 17:38:26 crc kubenswrapper[4720]: I1013 17:38:26.775892 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6w5c\" (UniqueName: \"kubernetes.io/projected/f8338f95-b766-4ce8-b60e-020957cdee12-kube-api-access-m6w5c\") pod \"openstack-cell1-galera-0\" (UID: \"f8338f95-b766-4ce8-b60e-020957cdee12\") " pod="openstack/openstack-cell1-galera-0" Oct 13 17:38:26 crc kubenswrapper[4720]: I1013 17:38:26.785465 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-cell1-galera-0\" (UID: \"f8338f95-b766-4ce8-b60e-020957cdee12\") " pod="openstack/openstack-cell1-galera-0" Oct 13 17:38:26 crc kubenswrapper[4720]: I1013 17:38:26.962495 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Oct 13 17:38:26 crc kubenswrapper[4720]: I1013 17:38:26.971652 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Oct 13 17:38:26 crc kubenswrapper[4720]: I1013 17:38:26.972633 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Oct 13 17:38:26 crc kubenswrapper[4720]: I1013 17:38:26.978314 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Oct 13 17:38:26 crc kubenswrapper[4720]: I1013 17:38:26.981688 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Oct 13 17:38:26 crc kubenswrapper[4720]: I1013 17:38:26.981713 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Oct 13 17:38:26 crc kubenswrapper[4720]: I1013 17:38:26.982018 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-4m72s" Oct 13 17:38:27 crc kubenswrapper[4720]: I1013 17:38:27.154133 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sx9rm\" (UniqueName: \"kubernetes.io/projected/e6b1817f-f719-4727-ad61-56061b241d4b-kube-api-access-sx9rm\") pod \"memcached-0\" (UID: \"e6b1817f-f719-4727-ad61-56061b241d4b\") " pod="openstack/memcached-0" Oct 13 17:38:27 crc kubenswrapper[4720]: I1013 17:38:27.154217 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6b1817f-f719-4727-ad61-56061b241d4b-memcached-tls-certs\") pod \"memcached-0\" (UID: \"e6b1817f-f719-4727-ad61-56061b241d4b\") " pod="openstack/memcached-0" Oct 13 17:38:27 crc kubenswrapper[4720]: I1013 17:38:27.154271 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e6b1817f-f719-4727-ad61-56061b241d4b-kolla-config\") pod \"memcached-0\" (UID: \"e6b1817f-f719-4727-ad61-56061b241d4b\") " pod="openstack/memcached-0" Oct 13 17:38:27 crc kubenswrapper[4720]: I1013 17:38:27.154336 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e6b1817f-f719-4727-ad61-56061b241d4b-config-data\") pod \"memcached-0\" (UID: \"e6b1817f-f719-4727-ad61-56061b241d4b\") " pod="openstack/memcached-0" Oct 13 17:38:27 crc kubenswrapper[4720]: I1013 17:38:27.154442 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6b1817f-f719-4727-ad61-56061b241d4b-combined-ca-bundle\") pod \"memcached-0\" (UID: \"e6b1817f-f719-4727-ad61-56061b241d4b\") " pod="openstack/memcached-0" Oct 13 17:38:27 crc kubenswrapper[4720]: I1013 17:38:27.255226 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6b1817f-f719-4727-ad61-56061b241d4b-combined-ca-bundle\") pod \"memcached-0\" (UID: \"e6b1817f-f719-4727-ad61-56061b241d4b\") " pod="openstack/memcached-0" Oct 13 17:38:27 crc kubenswrapper[4720]: I1013 17:38:27.255283 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sx9rm\" (UniqueName: \"kubernetes.io/projected/e6b1817f-f719-4727-ad61-56061b241d4b-kube-api-access-sx9rm\") pod \"memcached-0\" (UID: \"e6b1817f-f719-4727-ad61-56061b241d4b\") " pod="openstack/memcached-0" Oct 13 17:38:27 crc kubenswrapper[4720]: I1013 17:38:27.255311 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/e6b1817f-f719-4727-ad61-56061b241d4b-memcached-tls-certs\") pod \"memcached-0\" (UID: \"e6b1817f-f719-4727-ad61-56061b241d4b\") " pod="openstack/memcached-0" Oct 13 17:38:27 crc kubenswrapper[4720]: I1013 17:38:27.255332 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e6b1817f-f719-4727-ad61-56061b241d4b-kolla-config\") pod \"memcached-0\" (UID: \"e6b1817f-f719-4727-ad61-56061b241d4b\") " pod="openstack/memcached-0" Oct 13 17:38:27 crc kubenswrapper[4720]: I1013 17:38:27.255373 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e6b1817f-f719-4727-ad61-56061b241d4b-config-data\") pod \"memcached-0\" (UID: \"e6b1817f-f719-4727-ad61-56061b241d4b\") " pod="openstack/memcached-0" Oct 13 17:38:27 crc kubenswrapper[4720]: I1013 17:38:27.256478 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e6b1817f-f719-4727-ad61-56061b241d4b-config-data\") pod \"memcached-0\" (UID: \"e6b1817f-f719-4727-ad61-56061b241d4b\") " pod="openstack/memcached-0" Oct 13 17:38:27 crc kubenswrapper[4720]: I1013 17:38:27.256502 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e6b1817f-f719-4727-ad61-56061b241d4b-kolla-config\") pod \"memcached-0\" (UID: \"e6b1817f-f719-4727-ad61-56061b241d4b\") " pod="openstack/memcached-0" Oct 13 17:38:27 crc kubenswrapper[4720]: I1013 17:38:27.258503 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6b1817f-f719-4727-ad61-56061b241d4b-combined-ca-bundle\") pod \"memcached-0\" (UID: \"e6b1817f-f719-4727-ad61-56061b241d4b\") " pod="openstack/memcached-0" Oct 13 17:38:27 crc kubenswrapper[4720]: I1013 17:38:27.264728 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6b1817f-f719-4727-ad61-56061b241d4b-memcached-tls-certs\") pod \"memcached-0\" (UID: \"e6b1817f-f719-4727-ad61-56061b241d4b\") " pod="openstack/memcached-0" Oct 13 17:38:27 crc kubenswrapper[4720]: I1013 17:38:27.272359 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sx9rm\" (UniqueName: \"kubernetes.io/projected/e6b1817f-f719-4727-ad61-56061b241d4b-kube-api-access-sx9rm\") pod \"memcached-0\" (UID: \"e6b1817f-f719-4727-ad61-56061b241d4b\") " pod="openstack/memcached-0" Oct 13 17:38:27 crc kubenswrapper[4720]: I1013 17:38:27.300056 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Oct 13 17:38:29 crc kubenswrapper[4720]: I1013 17:38:29.078936 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Oct 13 17:38:29 crc kubenswrapper[4720]: I1013 17:38:29.080551 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 13 17:38:29 crc kubenswrapper[4720]: I1013 17:38:29.087619 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-tdsmx" Oct 13 17:38:29 crc kubenswrapper[4720]: I1013 17:38:29.090288 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 13 17:38:29 crc kubenswrapper[4720]: I1013 17:38:29.181498 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6rg8\" (UniqueName: \"kubernetes.io/projected/0b352587-867c-4276-93b5-89c6f922a2cc-kube-api-access-c6rg8\") pod \"kube-state-metrics-0\" (UID: \"0b352587-867c-4276-93b5-89c6f922a2cc\") " pod="openstack/kube-state-metrics-0" Oct 13 17:38:29 crc kubenswrapper[4720]: I1013 17:38:29.283131 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6rg8\" (UniqueName: \"kubernetes.io/projected/0b352587-867c-4276-93b5-89c6f922a2cc-kube-api-access-c6rg8\") pod \"kube-state-metrics-0\" (UID: \"0b352587-867c-4276-93b5-89c6f922a2cc\") " pod="openstack/kube-state-metrics-0" Oct 13 17:38:29 crc kubenswrapper[4720]: I1013 17:38:29.304095 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6rg8\" (UniqueName: \"kubernetes.io/projected/0b352587-867c-4276-93b5-89c6f922a2cc-kube-api-access-c6rg8\") pod \"kube-state-metrics-0\" (UID: \"0b352587-867c-4276-93b5-89c6f922a2cc\") " pod="openstack/kube-state-metrics-0" Oct 13 17:38:29 crc kubenswrapper[4720]: I1013 17:38:29.419909 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 13 17:38:32 crc kubenswrapper[4720]: I1013 17:38:32.736134 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-vbc6h"] Oct 13 17:38:32 crc kubenswrapper[4720]: I1013 17:38:32.737625 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-vbc6h" Oct 13 17:38:32 crc kubenswrapper[4720]: I1013 17:38:32.740295 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Oct 13 17:38:32 crc kubenswrapper[4720]: I1013 17:38:32.740838 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-2ldr7" Oct 13 17:38:32 crc kubenswrapper[4720]: I1013 17:38:32.745637 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Oct 13 17:38:32 crc kubenswrapper[4720]: I1013 17:38:32.746589 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Oct 13 17:38:32 crc kubenswrapper[4720]: I1013 17:38:32.747325 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Oct 13 17:38:32 crc kubenswrapper[4720]: I1013 17:38:32.750783 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Oct 13 17:38:32 crc kubenswrapper[4720]: I1013 17:38:32.750876 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Oct 13 17:38:32 crc kubenswrapper[4720]: I1013 17:38:32.751470 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-qfmtq" Oct 13 17:38:32 crc kubenswrapper[4720]: I1013 17:38:32.753132 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Oct 13 17:38:32 crc kubenswrapper[4720]: I1013 17:38:32.753407 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Oct 13 17:38:32 crc kubenswrapper[4720]: I1013 17:38:32.755469 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-vbc6h"] Oct 13 17:38:32 crc kubenswrapper[4720]: I1013 17:38:32.779420 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Oct 13 17:38:32 crc kubenswrapper[4720]: I1013 17:38:32.818218 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-cz99q"] Oct 13 17:38:32 crc kubenswrapper[4720]: I1013 17:38:32.821763 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-cz99q" Oct 13 17:38:32 crc kubenswrapper[4720]: I1013 17:38:32.826885 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-cz99q"] Oct 13 17:38:32 crc kubenswrapper[4720]: I1013 17:38:32.848167 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a3b2dccc-71b7-4dd6-9c8d-f1c12382a832-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"a3b2dccc-71b7-4dd6-9c8d-f1c12382a832\") " pod="openstack/ovsdbserver-nb-0" Oct 13 17:38:32 crc kubenswrapper[4720]: I1013 17:38:32.848554 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3b2dccc-71b7-4dd6-9c8d-f1c12382a832-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"a3b2dccc-71b7-4dd6-9c8d-f1c12382a832\") " pod="openstack/ovsdbserver-nb-0" Oct 13 17:38:32 crc kubenswrapper[4720]: I1013 17:38:32.848604 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c46d7\" (UniqueName: \"kubernetes.io/projected/283c0b58-d0a1-4cf1-af87-3859306c4a60-kube-api-access-c46d7\") pod \"ovn-controller-vbc6h\" (UID: \"283c0b58-d0a1-4cf1-af87-3859306c4a60\") " pod="openstack/ovn-controller-vbc6h" Oct 13 17:38:32 crc kubenswrapper[4720]: I1013 17:38:32.848637 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/283c0b58-d0a1-4cf1-af87-3859306c4a60-var-run\") pod \"ovn-controller-vbc6h\" (UID: \"283c0b58-d0a1-4cf1-af87-3859306c4a60\") " pod="openstack/ovn-controller-vbc6h" Oct 13 17:38:32 crc kubenswrapper[4720]: I1013 17:38:32.848671 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/283c0b58-d0a1-4cf1-af87-3859306c4a60-scripts\") pod 
\"ovn-controller-vbc6h\" (UID: \"283c0b58-d0a1-4cf1-af87-3859306c4a60\") " pod="openstack/ovn-controller-vbc6h" Oct 13 17:38:32 crc kubenswrapper[4720]: I1013 17:38:32.848719 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/283c0b58-d0a1-4cf1-af87-3859306c4a60-ovn-controller-tls-certs\") pod \"ovn-controller-vbc6h\" (UID: \"283c0b58-d0a1-4cf1-af87-3859306c4a60\") " pod="openstack/ovn-controller-vbc6h" Oct 13 17:38:32 crc kubenswrapper[4720]: I1013 17:38:32.848785 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/283c0b58-d0a1-4cf1-af87-3859306c4a60-var-run-ovn\") pod \"ovn-controller-vbc6h\" (UID: \"283c0b58-d0a1-4cf1-af87-3859306c4a60\") " pod="openstack/ovn-controller-vbc6h" Oct 13 17:38:32 crc kubenswrapper[4720]: I1013 17:38:32.848822 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"a3b2dccc-71b7-4dd6-9c8d-f1c12382a832\") " pod="openstack/ovsdbserver-nb-0" Oct 13 17:38:32 crc kubenswrapper[4720]: I1013 17:38:32.848876 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a3b2dccc-71b7-4dd6-9c8d-f1c12382a832-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"a3b2dccc-71b7-4dd6-9c8d-f1c12382a832\") " pod="openstack/ovsdbserver-nb-0" Oct 13 17:38:32 crc kubenswrapper[4720]: I1013 17:38:32.848934 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/283c0b58-d0a1-4cf1-af87-3859306c4a60-combined-ca-bundle\") pod \"ovn-controller-vbc6h\" (UID: \"283c0b58-d0a1-4cf1-af87-3859306c4a60\") " pod="openstack/ovn-controller-vbc6h" Oct 13 17:38:32 crc kubenswrapper[4720]: I1013 17:38:32.848982 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5sh6\" (UniqueName: \"kubernetes.io/projected/a3b2dccc-71b7-4dd6-9c8d-f1c12382a832-kube-api-access-h5sh6\") pod \"ovsdbserver-nb-0\" (UID: \"a3b2dccc-71b7-4dd6-9c8d-f1c12382a832\") " pod="openstack/ovsdbserver-nb-0" Oct 13 17:38:32 crc kubenswrapper[4720]: I1013 17:38:32.849020 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3b2dccc-71b7-4dd6-9c8d-f1c12382a832-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"a3b2dccc-71b7-4dd6-9c8d-f1c12382a832\") " pod="openstack/ovsdbserver-nb-0" Oct 13 17:38:32 crc kubenswrapper[4720]: I1013 17:38:32.849083 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3b2dccc-71b7-4dd6-9c8d-f1c12382a832-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"a3b2dccc-71b7-4dd6-9c8d-f1c12382a832\") " pod="openstack/ovsdbserver-nb-0" Oct 13 17:38:32 crc kubenswrapper[4720]: I1013 17:38:32.849108 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/283c0b58-d0a1-4cf1-af87-3859306c4a60-var-log-ovn\") pod \"ovn-controller-vbc6h\" (UID: 
\"283c0b58-d0a1-4cf1-af87-3859306c4a60\") " pod="openstack/ovn-controller-vbc6h" Oct 13 17:38:32 crc kubenswrapper[4720]: I1013 17:38:32.849182 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3b2dccc-71b7-4dd6-9c8d-f1c12382a832-config\") pod \"ovsdbserver-nb-0\" (UID: \"a3b2dccc-71b7-4dd6-9c8d-f1c12382a832\") " pod="openstack/ovsdbserver-nb-0" Oct 13 17:38:32 crc kubenswrapper[4720]: I1013 17:38:32.950724 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a3b2dccc-71b7-4dd6-9c8d-f1c12382a832-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"a3b2dccc-71b7-4dd6-9c8d-f1c12382a832\") " pod="openstack/ovsdbserver-nb-0" Oct 13 17:38:32 crc kubenswrapper[4720]: I1013 17:38:32.950772 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3b2dccc-71b7-4dd6-9c8d-f1c12382a832-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"a3b2dccc-71b7-4dd6-9c8d-f1c12382a832\") " pod="openstack/ovsdbserver-nb-0" Oct 13 17:38:32 crc kubenswrapper[4720]: I1013 17:38:32.950802 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c46d7\" (UniqueName: \"kubernetes.io/projected/283c0b58-d0a1-4cf1-af87-3859306c4a60-kube-api-access-c46d7\") pod \"ovn-controller-vbc6h\" (UID: \"283c0b58-d0a1-4cf1-af87-3859306c4a60\") " pod="openstack/ovn-controller-vbc6h" Oct 13 17:38:32 crc kubenswrapper[4720]: I1013 17:38:32.950826 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/283c0b58-d0a1-4cf1-af87-3859306c4a60-var-run\") pod \"ovn-controller-vbc6h\" (UID: \"283c0b58-d0a1-4cf1-af87-3859306c4a60\") " pod="openstack/ovn-controller-vbc6h" Oct 13 17:38:32 crc kubenswrapper[4720]: I1013 17:38:32.950856 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/283c0b58-d0a1-4cf1-af87-3859306c4a60-scripts\") pod \"ovn-controller-vbc6h\" (UID: \"283c0b58-d0a1-4cf1-af87-3859306c4a60\") " pod="openstack/ovn-controller-vbc6h" Oct 13 17:38:32 crc kubenswrapper[4720]: I1013 17:38:32.950884 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/283c0b58-d0a1-4cf1-af87-3859306c4a60-ovn-controller-tls-certs\") pod \"ovn-controller-vbc6h\" (UID: \"283c0b58-d0a1-4cf1-af87-3859306c4a60\") " pod="openstack/ovn-controller-vbc6h" Oct 13 17:38:32 crc kubenswrapper[4720]: I1013 17:38:32.950924 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b92efbfa-6501-4601-9432-8c37dbe4e020-scripts\") pod \"ovn-controller-ovs-cz99q\" (UID: \"b92efbfa-6501-4601-9432-8c37dbe4e020\") " pod="openstack/ovn-controller-ovs-cz99q" Oct 13 17:38:32 crc kubenswrapper[4720]: I1013 17:38:32.950952 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/283c0b58-d0a1-4cf1-af87-3859306c4a60-var-run-ovn\") pod \"ovn-controller-vbc6h\" (UID: \"283c0b58-d0a1-4cf1-af87-3859306c4a60\") " pod="openstack/ovn-controller-vbc6h" Oct 13 17:38:32 crc kubenswrapper[4720]: I1013 17:38:32.950978 4720 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9gxn\" (UniqueName: \"kubernetes.io/projected/b92efbfa-6501-4601-9432-8c37dbe4e020-kube-api-access-f9gxn\") pod \"ovn-controller-ovs-cz99q\" (UID: \"b92efbfa-6501-4601-9432-8c37dbe4e020\") " pod="openstack/ovn-controller-ovs-cz99q" Oct 13 17:38:32 crc kubenswrapper[4720]: I1013 17:38:32.951006 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"a3b2dccc-71b7-4dd6-9c8d-f1c12382a832\") " pod="openstack/ovsdbserver-nb-0" Oct 13 17:38:32 crc kubenswrapper[4720]: I1013 17:38:32.951043 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a3b2dccc-71b7-4dd6-9c8d-f1c12382a832-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"a3b2dccc-71b7-4dd6-9c8d-f1c12382a832\") " pod="openstack/ovsdbserver-nb-0" Oct 13 17:38:32 crc kubenswrapper[4720]: I1013 17:38:32.951077 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/283c0b58-d0a1-4cf1-af87-3859306c4a60-combined-ca-bundle\") pod \"ovn-controller-vbc6h\" (UID: \"283c0b58-d0a1-4cf1-af87-3859306c4a60\") " pod="openstack/ovn-controller-vbc6h" Oct 13 17:38:32 crc kubenswrapper[4720]: I1013 17:38:32.951100 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b92efbfa-6501-4601-9432-8c37dbe4e020-var-run\") pod \"ovn-controller-ovs-cz99q\" (UID: \"b92efbfa-6501-4601-9432-8c37dbe4e020\") " pod="openstack/ovn-controller-ovs-cz99q" Oct 13 17:38:32 crc kubenswrapper[4720]: I1013 17:38:32.951151 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5sh6\" (UniqueName: \"kubernetes.io/projected/a3b2dccc-71b7-4dd6-9c8d-f1c12382a832-kube-api-access-h5sh6\") pod \"ovsdbserver-nb-0\" (UID: \"a3b2dccc-71b7-4dd6-9c8d-f1c12382a832\") " pod="openstack/ovsdbserver-nb-0" Oct 13 17:38:32 crc kubenswrapper[4720]: I1013 17:38:32.951180 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3b2dccc-71b7-4dd6-9c8d-f1c12382a832-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"a3b2dccc-71b7-4dd6-9c8d-f1c12382a832\") " pod="openstack/ovsdbserver-nb-0" Oct 13 17:38:32 crc kubenswrapper[4720]: I1013 17:38:32.951242 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/b92efbfa-6501-4601-9432-8c37dbe4e020-etc-ovs\") pod \"ovn-controller-ovs-cz99q\" (UID: \"b92efbfa-6501-4601-9432-8c37dbe4e020\") " pod="openstack/ovn-controller-ovs-cz99q" Oct 13 17:38:32 crc kubenswrapper[4720]: I1013 17:38:32.951298 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3b2dccc-71b7-4dd6-9c8d-f1c12382a832-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"a3b2dccc-71b7-4dd6-9c8d-f1c12382a832\") " pod="openstack/ovsdbserver-nb-0" Oct 13 17:38:32 crc kubenswrapper[4720]: I1013 17:38:32.951321 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/283c0b58-d0a1-4cf1-af87-3859306c4a60-var-log-ovn\") 
pod \"ovn-controller-vbc6h\" (UID: \"283c0b58-d0a1-4cf1-af87-3859306c4a60\") " pod="openstack/ovn-controller-vbc6h" Oct 13 17:38:32 crc kubenswrapper[4720]: I1013 17:38:32.951387 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/b92efbfa-6501-4601-9432-8c37dbe4e020-var-log\") pod \"ovn-controller-ovs-cz99q\" (UID: \"b92efbfa-6501-4601-9432-8c37dbe4e020\") " pod="openstack/ovn-controller-ovs-cz99q" Oct 13 17:38:32 crc kubenswrapper[4720]: I1013 17:38:32.951416 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3b2dccc-71b7-4dd6-9c8d-f1c12382a832-config\") pod \"ovsdbserver-nb-0\" (UID: \"a3b2dccc-71b7-4dd6-9c8d-f1c12382a832\") " pod="openstack/ovsdbserver-nb-0" Oct 13 17:38:32 crc kubenswrapper[4720]: I1013 17:38:32.951461 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/b92efbfa-6501-4601-9432-8c37dbe4e020-var-lib\") pod \"ovn-controller-ovs-cz99q\" (UID: \"b92efbfa-6501-4601-9432-8c37dbe4e020\") " pod="openstack/ovn-controller-ovs-cz99q" Oct 13 17:38:32 crc kubenswrapper[4720]: I1013 17:38:32.952414 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a3b2dccc-71b7-4dd6-9c8d-f1c12382a832-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"a3b2dccc-71b7-4dd6-9c8d-f1c12382a832\") " pod="openstack/ovsdbserver-nb-0" Oct 13 17:38:32 crc kubenswrapper[4720]: I1013 17:38:32.953316 4720 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"a3b2dccc-71b7-4dd6-9c8d-f1c12382a832\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/ovsdbserver-nb-0" Oct 13 17:38:32 crc kubenswrapper[4720]: I1013 17:38:32.953677 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/283c0b58-d0a1-4cf1-af87-3859306c4a60-var-run-ovn\") pod \"ovn-controller-vbc6h\" (UID: \"283c0b58-d0a1-4cf1-af87-3859306c4a60\") " pod="openstack/ovn-controller-vbc6h" Oct 13 17:38:32 crc kubenswrapper[4720]: I1013 17:38:32.953805 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/283c0b58-d0a1-4cf1-af87-3859306c4a60-var-run\") pod \"ovn-controller-vbc6h\" (UID: \"283c0b58-d0a1-4cf1-af87-3859306c4a60\") " pod="openstack/ovn-controller-vbc6h" Oct 13 17:38:32 crc kubenswrapper[4720]: I1013 17:38:32.954012 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/283c0b58-d0a1-4cf1-af87-3859306c4a60-var-log-ovn\") pod \"ovn-controller-vbc6h\" (UID: \"283c0b58-d0a1-4cf1-af87-3859306c4a60\") " pod="openstack/ovn-controller-vbc6h" Oct 13 17:38:32 crc kubenswrapper[4720]: I1013 17:38:32.954578 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a3b2dccc-71b7-4dd6-9c8d-f1c12382a832-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"a3b2dccc-71b7-4dd6-9c8d-f1c12382a832\") " pod="openstack/ovsdbserver-nb-0" Oct 13 17:38:32 crc kubenswrapper[4720]: I1013 17:38:32.954763 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/a3b2dccc-71b7-4dd6-9c8d-f1c12382a832-config\") pod \"ovsdbserver-nb-0\" (UID: \"a3b2dccc-71b7-4dd6-9c8d-f1c12382a832\") " pod="openstack/ovsdbserver-nb-0" Oct 13 17:38:32 crc kubenswrapper[4720]: I1013 17:38:32.958579 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/283c0b58-d0a1-4cf1-af87-3859306c4a60-scripts\") pod \"ovn-controller-vbc6h\" (UID: \"283c0b58-d0a1-4cf1-af87-3859306c4a60\") " pod="openstack/ovn-controller-vbc6h" Oct 13 17:38:32 crc kubenswrapper[4720]: I1013 17:38:32.961124 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3b2dccc-71b7-4dd6-9c8d-f1c12382a832-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"a3b2dccc-71b7-4dd6-9c8d-f1c12382a832\") " pod="openstack/ovsdbserver-nb-0" Oct 13 17:38:32 crc kubenswrapper[4720]: I1013 17:38:32.961362 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3b2dccc-71b7-4dd6-9c8d-f1c12382a832-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"a3b2dccc-71b7-4dd6-9c8d-f1c12382a832\") " pod="openstack/ovsdbserver-nb-0" Oct 13 17:38:32 crc kubenswrapper[4720]: I1013 17:38:32.962567 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/283c0b58-d0a1-4cf1-af87-3859306c4a60-combined-ca-bundle\") pod \"ovn-controller-vbc6h\" (UID: \"283c0b58-d0a1-4cf1-af87-3859306c4a60\") " pod="openstack/ovn-controller-vbc6h" Oct 13 17:38:32 crc kubenswrapper[4720]: I1013 17:38:32.963843 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/283c0b58-d0a1-4cf1-af87-3859306c4a60-ovn-controller-tls-certs\") pod \"ovn-controller-vbc6h\" (UID: \"283c0b58-d0a1-4cf1-af87-3859306c4a60\") " pod="openstack/ovn-controller-vbc6h" Oct 13 17:38:32 crc kubenswrapper[4720]: I1013 17:38:32.964545 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3b2dccc-71b7-4dd6-9c8d-f1c12382a832-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"a3b2dccc-71b7-4dd6-9c8d-f1c12382a832\") " pod="openstack/ovsdbserver-nb-0" Oct 13 17:38:32 crc kubenswrapper[4720]: I1013 17:38:32.971849 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c46d7\" (UniqueName: \"kubernetes.io/projected/283c0b58-d0a1-4cf1-af87-3859306c4a60-kube-api-access-c46d7\") pod \"ovn-controller-vbc6h\" (UID: \"283c0b58-d0a1-4cf1-af87-3859306c4a60\") " pod="openstack/ovn-controller-vbc6h" Oct 13 17:38:32 crc kubenswrapper[4720]: I1013 17:38:32.981211 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5sh6\" (UniqueName: \"kubernetes.io/projected/a3b2dccc-71b7-4dd6-9c8d-f1c12382a832-kube-api-access-h5sh6\") pod \"ovsdbserver-nb-0\" (UID: \"a3b2dccc-71b7-4dd6-9c8d-f1c12382a832\") " pod="openstack/ovsdbserver-nb-0" Oct 13 17:38:32 crc kubenswrapper[4720]: I1013 17:38:32.985668 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"a3b2dccc-71b7-4dd6-9c8d-f1c12382a832\") " pod="openstack/ovsdbserver-nb-0" Oct 13 17:38:33 crc kubenswrapper[4720]: I1013 17:38:33.053432 4720 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b92efbfa-6501-4601-9432-8c37dbe4e020-var-run\") pod \"ovn-controller-ovs-cz99q\" (UID: \"b92efbfa-6501-4601-9432-8c37dbe4e020\") " pod="openstack/ovn-controller-ovs-cz99q" Oct 13 17:38:33 crc kubenswrapper[4720]: I1013 17:38:33.053547 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/b92efbfa-6501-4601-9432-8c37dbe4e020-etc-ovs\") pod \"ovn-controller-ovs-cz99q\" (UID: \"b92efbfa-6501-4601-9432-8c37dbe4e020\") " pod="openstack/ovn-controller-ovs-cz99q" Oct 13 17:38:33 crc kubenswrapper[4720]: I1013 17:38:33.053617 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/b92efbfa-6501-4601-9432-8c37dbe4e020-var-log\") pod \"ovn-controller-ovs-cz99q\" (UID: \"b92efbfa-6501-4601-9432-8c37dbe4e020\") " pod="openstack/ovn-controller-ovs-cz99q" Oct 13 17:38:33 crc kubenswrapper[4720]: I1013 17:38:33.053660 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/b92efbfa-6501-4601-9432-8c37dbe4e020-var-lib\") pod \"ovn-controller-ovs-cz99q\" (UID: \"b92efbfa-6501-4601-9432-8c37dbe4e020\") " pod="openstack/ovn-controller-ovs-cz99q" Oct 13 17:38:33 crc kubenswrapper[4720]: I1013 17:38:33.053590 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b92efbfa-6501-4601-9432-8c37dbe4e020-var-run\") pod \"ovn-controller-ovs-cz99q\" (UID: \"b92efbfa-6501-4601-9432-8c37dbe4e020\") " pod="openstack/ovn-controller-ovs-cz99q" Oct 13 17:38:33 crc kubenswrapper[4720]: I1013 17:38:33.053783 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b92efbfa-6501-4601-9432-8c37dbe4e020-scripts\") pod \"ovn-controller-ovs-cz99q\" (UID: \"b92efbfa-6501-4601-9432-8c37dbe4e020\") " pod="openstack/ovn-controller-ovs-cz99q" Oct 13 17:38:33 crc kubenswrapper[4720]: I1013 17:38:33.053830 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9gxn\" (UniqueName: \"kubernetes.io/projected/b92efbfa-6501-4601-9432-8c37dbe4e020-kube-api-access-f9gxn\") pod \"ovn-controller-ovs-cz99q\" (UID: \"b92efbfa-6501-4601-9432-8c37dbe4e020\") " pod="openstack/ovn-controller-ovs-cz99q" Oct 13 17:38:33 crc kubenswrapper[4720]: I1013 17:38:33.053882 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/b92efbfa-6501-4601-9432-8c37dbe4e020-var-log\") pod \"ovn-controller-ovs-cz99q\" (UID: \"b92efbfa-6501-4601-9432-8c37dbe4e020\") " pod="openstack/ovn-controller-ovs-cz99q" Oct 13 17:38:33 crc kubenswrapper[4720]: I1013 17:38:33.053897 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/b92efbfa-6501-4601-9432-8c37dbe4e020-etc-ovs\") pod \"ovn-controller-ovs-cz99q\" (UID: \"b92efbfa-6501-4601-9432-8c37dbe4e020\") " pod="openstack/ovn-controller-ovs-cz99q" Oct 13 17:38:33 crc kubenswrapper[4720]: I1013 17:38:33.053990 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/b92efbfa-6501-4601-9432-8c37dbe4e020-var-lib\") pod \"ovn-controller-ovs-cz99q\" (UID: \"b92efbfa-6501-4601-9432-8c37dbe4e020\") 
" pod="openstack/ovn-controller-ovs-cz99q" Oct 13 17:38:33 crc kubenswrapper[4720]: I1013 17:38:33.057336 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b92efbfa-6501-4601-9432-8c37dbe4e020-scripts\") pod \"ovn-controller-ovs-cz99q\" (UID: \"b92efbfa-6501-4601-9432-8c37dbe4e020\") " pod="openstack/ovn-controller-ovs-cz99q" Oct 13 17:38:33 crc kubenswrapper[4720]: I1013 17:38:33.068783 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-vbc6h" Oct 13 17:38:33 crc kubenswrapper[4720]: I1013 17:38:33.068995 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9gxn\" (UniqueName: \"kubernetes.io/projected/b92efbfa-6501-4601-9432-8c37dbe4e020-kube-api-access-f9gxn\") pod \"ovn-controller-ovs-cz99q\" (UID: \"b92efbfa-6501-4601-9432-8c37dbe4e020\") " pod="openstack/ovn-controller-ovs-cz99q" Oct 13 17:38:33 crc kubenswrapper[4720]: I1013 17:38:33.093910 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Oct 13 17:38:33 crc kubenswrapper[4720]: I1013 17:38:33.186806 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-cz99q" Oct 13 17:38:35 crc kubenswrapper[4720]: E1013 17:38:35.264288 4720 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Oct 13 17:38:35 crc kubenswrapper[4720]: E1013 17:38:35.264835 4720 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vrscm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-nz4rl_openstack(cb7c286e-7815-4c65-98bd-cad6da562987): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 13 17:38:35 crc kubenswrapper[4720]: E1013 17:38:35.266177 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-nz4rl" podUID="cb7c286e-7815-4c65-98bd-cad6da562987" Oct 13 17:38:35 crc kubenswrapper[4720]: E1013 17:38:35.299346 4720 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Oct 13 17:38:35 crc kubenswrapper[4720]: E1013 17:38:35.299544 4720 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gfj55,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-wv5jv_openstack(78b891cd-26d1-4448-8675-bef9fbce4793): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 13 17:38:35 crc kubenswrapper[4720]: E1013 17:38:35.300738 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-wv5jv" podUID="78b891cd-26d1-4448-8675-bef9fbce4793" Oct 13 17:38:35 crc kubenswrapper[4720]: I1013 17:38:35.763047 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Oct 13 17:38:35 crc kubenswrapper[4720]: I1013 17:38:35.772523 4720 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 13 17:38:35 crc kubenswrapper[4720]: I1013 17:38:35.858533 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"fe36eeb1-7f7f-424c-a56c-e96cffc3046d","Type":"ContainerStarted","Data":"eb5ed35a570eb9fbf2b437d977061d4be9beebefd130292e3981042b1b462824"} Oct 13 17:38:35 crc kubenswrapper[4720]: I1013 17:38:35.950838 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-4mnjx"] Oct 13 17:38:35 crc kubenswrapper[4720]: I1013 17:38:35.955698 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 13 17:38:35 crc kubenswrapper[4720]: W1013 17:38:35.959744 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaf59309d_fcea_47ce_85b5_0eafbf780d08.slice/crio-856c28a7e34caccb6b2aa765bd1447c8f68ef73ec992d8f61b93699d97375d8e WatchSource:0}: Error finding container 856c28a7e34caccb6b2aa765bd1447c8f68ef73ec992d8f61b93699d97375d8e: Status 404 returned error can't find the container with id 
856c28a7e34caccb6b2aa765bd1447c8f68ef73ec992d8f61b93699d97375d8e Oct 13 17:38:36 crc kubenswrapper[4720]: I1013 17:38:36.058227 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Oct 13 17:38:36 crc kubenswrapper[4720]: W1013 17:38:36.068077 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda3b2dccc_71b7_4dd6_9c8d_f1c12382a832.slice/crio-cb9e159cd6cd2555d84e38966d504fc431eea6c217c12e852a301f39d7d5b550 WatchSource:0}: Error finding container cb9e159cd6cd2555d84e38966d504fc431eea6c217c12e852a301f39d7d5b550: Status 404 returned error can't find the container with id cb9e159cd6cd2555d84e38966d504fc431eea6c217c12e852a301f39d7d5b550 Oct 13 17:38:36 crc kubenswrapper[4720]: I1013 17:38:36.403287 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 13 17:38:36 crc kubenswrapper[4720]: W1013 17:38:36.406287 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b352587_867c_4276_93b5_89c6f922a2cc.slice/crio-a0ec76a6b4375e14c758f893bba2c358bd5dad3c5f4766ea38354e96252993a9 WatchSource:0}: Error finding container a0ec76a6b4375e14c758f893bba2c358bd5dad3c5f4766ea38354e96252993a9: Status 404 returned error can't find the container with id a0ec76a6b4375e14c758f893bba2c358bd5dad3c5f4766ea38354e96252993a9 Oct 13 17:38:36 crc kubenswrapper[4720]: I1013 17:38:36.423870 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Oct 13 17:38:36 crc kubenswrapper[4720]: I1013 17:38:36.441149 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Oct 13 17:38:36 crc kubenswrapper[4720]: I1013 17:38:36.456080 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-br2lw" Oct 13 17:38:36 crc kubenswrapper[4720]: I1013 17:38:36.456928 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Oct 13 17:38:36 crc kubenswrapper[4720]: I1013 17:38:36.457138 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Oct 13 17:38:36 crc kubenswrapper[4720]: I1013 17:38:36.461806 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Oct 13 17:38:36 crc kubenswrapper[4720]: I1013 17:38:36.472852 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Oct 13 17:38:36 crc kubenswrapper[4720]: I1013 17:38:36.480062 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-wv5jv" Oct 13 17:38:36 crc kubenswrapper[4720]: I1013 17:38:36.486604 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-nz4rl" Oct 13 17:38:36 crc kubenswrapper[4720]: I1013 17:38:36.511896 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dx97s\" (UniqueName: \"kubernetes.io/projected/01489c11-3710-4d60-a702-71fda5b496ea-kube-api-access-dx97s\") pod \"ovsdbserver-sb-0\" (UID: \"01489c11-3710-4d60-a702-71fda5b496ea\") " pod="openstack/ovsdbserver-sb-0" Oct 13 17:38:36 crc kubenswrapper[4720]: I1013 17:38:36.511976 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01489c11-3710-4d60-a702-71fda5b496ea-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"01489c11-3710-4d60-a702-71fda5b496ea\") " pod="openstack/ovsdbserver-sb-0" Oct 13 17:38:36 crc kubenswrapper[4720]: I1013 17:38:36.511995 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/01489c11-3710-4d60-a702-71fda5b496ea-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"01489c11-3710-4d60-a702-71fda5b496ea\") " pod="openstack/ovsdbserver-sb-0" Oct 13 17:38:36 crc kubenswrapper[4720]: I1013 17:38:36.512075 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"01489c11-3710-4d60-a702-71fda5b496ea\") " pod="openstack/ovsdbserver-sb-0" Oct 13 17:38:36 crc kubenswrapper[4720]: I1013 17:38:36.512103 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/01489c11-3710-4d60-a702-71fda5b496ea-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"01489c11-3710-4d60-a702-71fda5b496ea\") " pod="openstack/ovsdbserver-sb-0" Oct 13 17:38:36 crc kubenswrapper[4720]: I1013 17:38:36.512119 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/01489c11-3710-4d60-a702-71fda5b496ea-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"01489c11-3710-4d60-a702-71fda5b496ea\") " pod="openstack/ovsdbserver-sb-0" Oct 13 17:38:36 crc kubenswrapper[4720]: I1013 17:38:36.512145 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01489c11-3710-4d60-a702-71fda5b496ea-config\") pod \"ovsdbserver-sb-0\" (UID: \"01489c11-3710-4d60-a702-71fda5b496ea\") " pod="openstack/ovsdbserver-sb-0" Oct 13 17:38:36 crc kubenswrapper[4720]: I1013 17:38:36.512168 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/01489c11-3710-4d60-a702-71fda5b496ea-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"01489c11-3710-4d60-a702-71fda5b496ea\") " pod="openstack/ovsdbserver-sb-0" Oct 13 17:38:36 crc kubenswrapper[4720]: I1013 17:38:36.573678 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-vbc6h"] Oct 13 17:38:36 crc kubenswrapper[4720]: I1013 17:38:36.602585 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-9jdhm"] Oct 13 17:38:36 crc kubenswrapper[4720]: I1013 17:38:36.610224 4720 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 13 17:38:36 crc kubenswrapper[4720]: I1013 17:38:36.620600 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrscm\" (UniqueName: \"kubernetes.io/projected/cb7c286e-7815-4c65-98bd-cad6da562987-kube-api-access-vrscm\") pod \"cb7c286e-7815-4c65-98bd-cad6da562987\" (UID: \"cb7c286e-7815-4c65-98bd-cad6da562987\") " Oct 13 17:38:36 crc kubenswrapper[4720]: I1013 17:38:36.620665 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78b891cd-26d1-4448-8675-bef9fbce4793-config\") pod \"78b891cd-26d1-4448-8675-bef9fbce4793\" (UID: \"78b891cd-26d1-4448-8675-bef9fbce4793\") " Oct 13 17:38:36 crc kubenswrapper[4720]: I1013 17:38:36.620695 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb7c286e-7815-4c65-98bd-cad6da562987-dns-svc\") pod \"cb7c286e-7815-4c65-98bd-cad6da562987\" (UID: \"cb7c286e-7815-4c65-98bd-cad6da562987\") " Oct 13 17:38:36 crc kubenswrapper[4720]: I1013 17:38:36.620785 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb7c286e-7815-4c65-98bd-cad6da562987-config\") pod \"cb7c286e-7815-4c65-98bd-cad6da562987\" (UID: \"cb7c286e-7815-4c65-98bd-cad6da562987\") " Oct 13 17:38:36 crc kubenswrapper[4720]: I1013 17:38:36.620821 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gfj55\" (UniqueName: \"kubernetes.io/projected/78b891cd-26d1-4448-8675-bef9fbce4793-kube-api-access-gfj55\") pod \"78b891cd-26d1-4448-8675-bef9fbce4793\" (UID: \"78b891cd-26d1-4448-8675-bef9fbce4793\") " Oct 13 17:38:36 crc kubenswrapper[4720]: I1013 17:38:36.621013 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01489c11-3710-4d60-a702-71fda5b496ea-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"01489c11-3710-4d60-a702-71fda5b496ea\") " pod="openstack/ovsdbserver-sb-0" Oct 13 17:38:36 crc kubenswrapper[4720]: I1013 17:38:36.621032 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/01489c11-3710-4d60-a702-71fda5b496ea-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"01489c11-3710-4d60-a702-71fda5b496ea\") " pod="openstack/ovsdbserver-sb-0" Oct 13 17:38:36 crc kubenswrapper[4720]: I1013 17:38:36.621091 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"01489c11-3710-4d60-a702-71fda5b496ea\") " pod="openstack/ovsdbserver-sb-0" Oct 13 17:38:36 crc kubenswrapper[4720]: I1013 17:38:36.621108 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/01489c11-3710-4d60-a702-71fda5b496ea-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"01489c11-3710-4d60-a702-71fda5b496ea\") " pod="openstack/ovsdbserver-sb-0" Oct 13 17:38:36 crc kubenswrapper[4720]: I1013 17:38:36.621121 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/01489c11-3710-4d60-a702-71fda5b496ea-ovsdbserver-sb-tls-certs\") pod 
\"ovsdbserver-sb-0\" (UID: \"01489c11-3710-4d60-a702-71fda5b496ea\") " pod="openstack/ovsdbserver-sb-0" Oct 13 17:38:36 crc kubenswrapper[4720]: I1013 17:38:36.621142 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01489c11-3710-4d60-a702-71fda5b496ea-config\") pod \"ovsdbserver-sb-0\" (UID: \"01489c11-3710-4d60-a702-71fda5b496ea\") " pod="openstack/ovsdbserver-sb-0" Oct 13 17:38:36 crc kubenswrapper[4720]: I1013 17:38:36.621157 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/01489c11-3710-4d60-a702-71fda5b496ea-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"01489c11-3710-4d60-a702-71fda5b496ea\") " pod="openstack/ovsdbserver-sb-0" Oct 13 17:38:36 crc kubenswrapper[4720]: I1013 17:38:36.621211 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dx97s\" (UniqueName: \"kubernetes.io/projected/01489c11-3710-4d60-a702-71fda5b496ea-kube-api-access-dx97s\") pod \"ovsdbserver-sb-0\" (UID: \"01489c11-3710-4d60-a702-71fda5b496ea\") " pod="openstack/ovsdbserver-sb-0" Oct 13 17:38:36 crc kubenswrapper[4720]: I1013 17:38:36.621322 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78b891cd-26d1-4448-8675-bef9fbce4793-config" (OuterVolumeSpecName: "config") pod "78b891cd-26d1-4448-8675-bef9fbce4793" (UID: "78b891cd-26d1-4448-8675-bef9fbce4793"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:38:36 crc kubenswrapper[4720]: I1013 17:38:36.621766 4720 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"01489c11-3710-4d60-a702-71fda5b496ea\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/ovsdbserver-sb-0" Oct 13 17:38:36 crc kubenswrapper[4720]: I1013 17:38:36.625614 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Oct 13 17:38:36 crc kubenswrapper[4720]: I1013 17:38:36.630565 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/01489c11-3710-4d60-a702-71fda5b496ea-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"01489c11-3710-4d60-a702-71fda5b496ea\") " pod="openstack/ovsdbserver-sb-0" Oct 13 17:38:36 crc kubenswrapper[4720]: I1013 17:38:36.632397 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb7c286e-7815-4c65-98bd-cad6da562987-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cb7c286e-7815-4c65-98bd-cad6da562987" (UID: "cb7c286e-7815-4c65-98bd-cad6da562987"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:38:36 crc kubenswrapper[4720]: I1013 17:38:36.633014 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb7c286e-7815-4c65-98bd-cad6da562987-config" (OuterVolumeSpecName: "config") pod "cb7c286e-7815-4c65-98bd-cad6da562987" (UID: "cb7c286e-7815-4c65-98bd-cad6da562987"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:38:36 crc kubenswrapper[4720]: I1013 17:38:36.635964 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/01489c11-3710-4d60-a702-71fda5b496ea-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"01489c11-3710-4d60-a702-71fda5b496ea\") " pod="openstack/ovsdbserver-sb-0" Oct 13 17:38:36 crc kubenswrapper[4720]: I1013 17:38:36.636491 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/01489c11-3710-4d60-a702-71fda5b496ea-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"01489c11-3710-4d60-a702-71fda5b496ea\") " pod="openstack/ovsdbserver-sb-0" Oct 13 17:38:36 crc kubenswrapper[4720]: I1013 17:38:36.637396 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb7c286e-7815-4c65-98bd-cad6da562987-kube-api-access-vrscm" (OuterVolumeSpecName: "kube-api-access-vrscm") pod "cb7c286e-7815-4c65-98bd-cad6da562987" (UID: "cb7c286e-7815-4c65-98bd-cad6da562987"). InnerVolumeSpecName "kube-api-access-vrscm". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:38:36 crc kubenswrapper[4720]: I1013 17:38:36.637898 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01489c11-3710-4d60-a702-71fda5b496ea-config\") pod \"ovsdbserver-sb-0\" (UID: \"01489c11-3710-4d60-a702-71fda5b496ea\") " pod="openstack/ovsdbserver-sb-0" Oct 13 17:38:36 crc kubenswrapper[4720]: I1013 17:38:36.638112 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78b891cd-26d1-4448-8675-bef9fbce4793-kube-api-access-gfj55" (OuterVolumeSpecName: "kube-api-access-gfj55") pod "78b891cd-26d1-4448-8675-bef9fbce4793" (UID: "78b891cd-26d1-4448-8675-bef9fbce4793"). InnerVolumeSpecName "kube-api-access-gfj55". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:38:36 crc kubenswrapper[4720]: I1013 17:38:36.640377 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dx97s\" (UniqueName: \"kubernetes.io/projected/01489c11-3710-4d60-a702-71fda5b496ea-kube-api-access-dx97s\") pod \"ovsdbserver-sb-0\" (UID: \"01489c11-3710-4d60-a702-71fda5b496ea\") " pod="openstack/ovsdbserver-sb-0" Oct 13 17:38:36 crc kubenswrapper[4720]: I1013 17:38:36.641142 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01489c11-3710-4d60-a702-71fda5b496ea-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"01489c11-3710-4d60-a702-71fda5b496ea\") " pod="openstack/ovsdbserver-sb-0" Oct 13 17:38:36 crc kubenswrapper[4720]: I1013 17:38:36.650317 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/01489c11-3710-4d60-a702-71fda5b496ea-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"01489c11-3710-4d60-a702-71fda5b496ea\") " pod="openstack/ovsdbserver-sb-0" Oct 13 17:38:36 crc kubenswrapper[4720]: I1013 17:38:36.659709 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Oct 13 17:38:36 crc kubenswrapper[4720]: I1013 17:38:36.662404 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"01489c11-3710-4d60-a702-71fda5b496ea\") " pod="openstack/ovsdbserver-sb-0" Oct 13 17:38:36 crc kubenswrapper[4720]: I1013 17:38:36.705748 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-cz99q"] Oct 13 17:38:36 crc kubenswrapper[4720]: W1013 17:38:36.715876 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb92efbfa_6501_4601_9432_8c37dbe4e020.slice/crio-14fc8268f5c113b8d40ff37841d584ebb8dc7849a140a293e7c77bfd8748b0d8 WatchSource:0}: Error finding container 14fc8268f5c113b8d40ff37841d584ebb8dc7849a140a293e7c77bfd8748b0d8: Status 404 returned error can't find the container with id 14fc8268f5c113b8d40ff37841d584ebb8dc7849a140a293e7c77bfd8748b0d8 Oct 13 17:38:36 crc kubenswrapper[4720]: I1013 17:38:36.722854 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gfj55\" (UniqueName: \"kubernetes.io/projected/78b891cd-26d1-4448-8675-bef9fbce4793-kube-api-access-gfj55\") on node \"crc\" DevicePath \"\"" Oct 13 17:38:36 crc kubenswrapper[4720]: I1013 17:38:36.722994 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vrscm\" (UniqueName: \"kubernetes.io/projected/cb7c286e-7815-4c65-98bd-cad6da562987-kube-api-access-vrscm\") on node \"crc\" DevicePath \"\"" Oct 13 17:38:36 crc kubenswrapper[4720]: I1013 17:38:36.723011 4720 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78b891cd-26d1-4448-8675-bef9fbce4793-config\") on node \"crc\" DevicePath \"\"" Oct 13 17:38:36 crc kubenswrapper[4720]: I1013 17:38:36.723020 4720 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb7c286e-7815-4c65-98bd-cad6da562987-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 13 17:38:36 crc kubenswrapper[4720]: I1013 17:38:36.723029 4720 reconciler_common.go:293] "Volume detached for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/cb7c286e-7815-4c65-98bd-cad6da562987-config\") on node \"crc\" DevicePath \"\"" Oct 13 17:38:36 crc kubenswrapper[4720]: I1013 17:38:36.818961 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Oct 13 17:38:36 crc kubenswrapper[4720]: I1013 17:38:36.871250 4720 generic.go:334] "Generic (PLEG): container finished" podID="b20e6d23-eea6-4e05-95fc-100aa77c82f7" containerID="3fd97b7c555c62c1f26df4ad05505b60ba103c48eb809b01fc50ddb9769818fa" exitCode=0 Oct 13 17:38:36 crc kubenswrapper[4720]: I1013 17:38:36.871308 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-4mnjx" event={"ID":"b20e6d23-eea6-4e05-95fc-100aa77c82f7","Type":"ContainerDied","Data":"3fd97b7c555c62c1f26df4ad05505b60ba103c48eb809b01fc50ddb9769818fa"} Oct 13 17:38:36 crc kubenswrapper[4720]: I1013 17:38:36.871333 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-4mnjx" event={"ID":"b20e6d23-eea6-4e05-95fc-100aa77c82f7","Type":"ContainerStarted","Data":"77ae69655fbb9038e9738ddf98830bb028bb7d65ade7dc4d893c097218f174fa"} Oct 13 17:38:36 crc kubenswrapper[4720]: I1013 17:38:36.873449 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-wv5jv" event={"ID":"78b891cd-26d1-4448-8675-bef9fbce4793","Type":"ContainerDied","Data":"62d48523441796e3b35df04fff56b3c259a482636a0c7ecffd0c5abea30cb703"} Oct 13 17:38:36 crc kubenswrapper[4720]: I1013 17:38:36.873498 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-wv5jv" Oct 13 17:38:36 crc kubenswrapper[4720]: I1013 17:38:36.875683 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-nz4rl" event={"ID":"cb7c286e-7815-4c65-98bd-cad6da562987","Type":"ContainerDied","Data":"b506e5c169c8b8d2e7208a7bcd164046401ce12d876b0fba9c95f29caa82b5de"} Oct 13 17:38:36 crc kubenswrapper[4720]: I1013 17:38:36.875767 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-nz4rl" Oct 13 17:38:36 crc kubenswrapper[4720]: I1013 17:38:36.878519 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0b352587-867c-4276-93b5-89c6f922a2cc","Type":"ContainerStarted","Data":"a0ec76a6b4375e14c758f893bba2c358bd5dad3c5f4766ea38354e96252993a9"} Oct 13 17:38:36 crc kubenswrapper[4720]: I1013 17:38:36.880295 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-cz99q" event={"ID":"b92efbfa-6501-4601-9432-8c37dbe4e020","Type":"ContainerStarted","Data":"14fc8268f5c113b8d40ff37841d584ebb8dc7849a140a293e7c77bfd8748b0d8"} Oct 13 17:38:36 crc kubenswrapper[4720]: I1013 17:38:36.881579 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-vbc6h" event={"ID":"283c0b58-d0a1-4cf1-af87-3859306c4a60","Type":"ContainerStarted","Data":"d3b6080334f270108efe2826ccf94e93ed38341a5c75cd012c31d8aa2c6c1ba0"} Oct 13 17:38:36 crc kubenswrapper[4720]: I1013 17:38:36.882606 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"af59309d-fcea-47ce-85b5-0eafbf780d08","Type":"ContainerStarted","Data":"856c28a7e34caccb6b2aa765bd1447c8f68ef73ec992d8f61b93699d97375d8e"} Oct 13 17:38:36 crc kubenswrapper[4720]: I1013 17:38:36.883569 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"a3b2dccc-71b7-4dd6-9c8d-f1c12382a832","Type":"ContainerStarted","Data":"cb9e159cd6cd2555d84e38966d504fc431eea6c217c12e852a301f39d7d5b550"} Oct 13 17:38:36 crc kubenswrapper[4720]: I1013 17:38:36.884599 4720 generic.go:334] "Generic (PLEG): container finished" podID="7349a4fa-fffe-44e9-aebb-ceb486b7fb45" containerID="ccc38b4ea21424ac83eec59d0f5011b23087266abeba694e3a687280a24caf1f" exitCode=0 Oct 13 17:38:36 crc kubenswrapper[4720]: I1013 17:38:36.884643 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-9jdhm" event={"ID":"7349a4fa-fffe-44e9-aebb-ceb486b7fb45","Type":"ContainerDied","Data":"ccc38b4ea21424ac83eec59d0f5011b23087266abeba694e3a687280a24caf1f"} Oct 13 17:38:36 crc kubenswrapper[4720]: I1013 17:38:36.884658 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-9jdhm" event={"ID":"7349a4fa-fffe-44e9-aebb-ceb486b7fb45","Type":"ContainerStarted","Data":"721a5597baefb6d39cc3f7c24a3c6286dfcc2105dec9e3498d14e7118d0d582c"} Oct 13 17:38:36 crc kubenswrapper[4720]: I1013 17:38:36.895577 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"e6b1817f-f719-4727-ad61-56061b241d4b","Type":"ContainerStarted","Data":"2cbc0afe89dd57b1fe3428aa49c16ab9cda3f3ae1905df68f72f5403de40f8bc"} Oct 13 17:38:36 crc kubenswrapper[4720]: I1013 17:38:36.897944 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"76c17d7a-8441-4b23-839b-f95ac54a6b24","Type":"ContainerStarted","Data":"0ede3eaf7711748fcd53cb9d82922e79e784bf3efbfaaf169015e022023bc446"} Oct 13 17:38:36 crc kubenswrapper[4720]: I1013 17:38:36.910450 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"f8338f95-b766-4ce8-b60e-020957cdee12","Type":"ContainerStarted","Data":"315e7a2a31a9ca73ba46349245cc78036b56d43f699474593d471dd17f0136e2"} Oct 13 17:38:37 crc kubenswrapper[4720]: I1013 17:38:37.007107 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-78dd6ddcc-nz4rl"] Oct 13 17:38:37 crc kubenswrapper[4720]: I1013 17:38:37.023377 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-nz4rl"] Oct 13 17:38:37 crc kubenswrapper[4720]: I1013 17:38:37.042840 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-wv5jv"] Oct 13 17:38:37 crc kubenswrapper[4720]: I1013 17:38:37.050586 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-wv5jv"] Oct 13 17:38:37 crc kubenswrapper[4720]: E1013 17:38:37.083502 4720 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Oct 13 17:38:37 crc kubenswrapper[4720]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/b20e6d23-eea6-4e05-95fc-100aa77c82f7/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Oct 13 17:38:37 crc kubenswrapper[4720]: > podSandboxID="77ae69655fbb9038e9738ddf98830bb028bb7d65ade7dc4d893c097218f174fa" Oct 13 17:38:37 crc kubenswrapper[4720]: E1013 17:38:37.083883 4720 kuberuntime_manager.go:1274] "Unhandled Error" err=< Oct 13 17:38:37 crc kubenswrapper[4720]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r84z8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 
},Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-4mnjx_openstack(b20e6d23-eea6-4e05-95fc-100aa77c82f7): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/b20e6d23-eea6-4e05-95fc-100aa77c82f7/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory
Oct 13 17:38:37 crc kubenswrapper[4720]: > logger="UnhandledError"
Oct 13 17:38:37 crc kubenswrapper[4720]: E1013 17:38:37.085034 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/b20e6d23-eea6-4e05-95fc-100aa77c82f7/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-666b6646f7-4mnjx" podUID="b20e6d23-eea6-4e05-95fc-100aa77c82f7"
Oct 13 17:38:37 crc kubenswrapper[4720]: I1013 17:38:37.179706 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78b891cd-26d1-4448-8675-bef9fbce4793" path="/var/lib/kubelet/pods/78b891cd-26d1-4448-8675-bef9fbce4793/volumes"
Oct 13 17:38:37 crc kubenswrapper[4720]: I1013 17:38:37.180052 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb7c286e-7815-4c65-98bd-cad6da562987" path="/var/lib/kubelet/pods/cb7c286e-7815-4c65-98bd-cad6da562987/volumes"
Oct 13 17:38:37 crc kubenswrapper[4720]: I1013 17:38:37.402765 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Oct 13 17:38:37 crc kubenswrapper[4720]: W1013 17:38:37.688214 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01489c11_3710_4d60_a702_71fda5b496ea.slice/crio-d444e2a170d0df9e335086e4dec36272da83bce6be8942131754541b4839de20 WatchSource:0}: Error finding container d444e2a170d0df9e335086e4dec36272da83bce6be8942131754541b4839de20: Status 404 returned error can't find the container with id d444e2a170d0df9e335086e4dec36272da83bce6be8942131754541b4839de20
Oct 13 17:38:37 crc kubenswrapper[4720]: I1013 17:38:37.920650 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"01489c11-3710-4d60-a702-71fda5b496ea","Type":"ContainerStarted","Data":"d444e2a170d0df9e335086e4dec36272da83bce6be8942131754541b4839de20"}
Oct 13 17:38:37 crc kubenswrapper[4720]: I1013 17:38:37.923419 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-9jdhm" event={"ID":"7349a4fa-fffe-44e9-aebb-ceb486b7fb45","Type":"ContainerStarted","Data":"be6edba14d70cbda2c79783058bd5c0f065dd429dce247c9be79501b1c5805a4"}
Oct 13 17:38:37 crc kubenswrapper[4720]: I1013 17:38:37.943556 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-9jdhm" podStartSLOduration=15.943538955 podStartE2EDuration="15.943538955s" podCreationTimestamp="2025-10-13 17:38:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:38:37.941556834 +0000 UTC m=+863.398806966" watchObservedRunningTime="2025-10-13 17:38:37.943538955 +0000 UTC m=+863.400789087"
Oct 13 17:38:38 crc kubenswrapper[4720]: I1013 17:38:38.933129 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-9jdhm"
Oct 13 17:38:42 crc kubenswrapper[4720]: I1013 17:38:42.817359 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57d769cc4f-9jdhm"
Oct 13 17:38:42 crc kubenswrapper[4720]: I1013 17:38:42.883312 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-4mnjx"]
Oct 13 17:38:45 crc kubenswrapper[4720]: I1013 17:38:45.212702 4720 patch_prober.go:28] interesting pod/machine-config-daemon-htwnl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 13 17:38:45 crc kubenswrapper[4720]: I1013 17:38:45.213066 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 13 17:38:49 crc kubenswrapper[4720]: I1013 17:38:49.010928 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-4mnjx" event={"ID":"b20e6d23-eea6-4e05-95fc-100aa77c82f7","Type":"ContainerStarted","Data":"6a7b2884b76881bd0ee57b10ca6d87925c37d26d0018c3b9155abb24bad1e714"}
Oct 13 17:38:49 crc kubenswrapper[4720]: I1013 17:38:49.011493 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-4mnjx"
Oct 13 17:38:49 crc kubenswrapper[4720]: I1013 17:38:49.011106 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-4mnjx" podUID="b20e6d23-eea6-4e05-95fc-100aa77c82f7" containerName="dnsmasq-dns" containerID="cri-o://6a7b2884b76881bd0ee57b10ca6d87925c37d26d0018c3b9155abb24bad1e714" gracePeriod=10
Oct 13 17:38:49 crc kubenswrapper[4720]: I1013 17:38:49.014260 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"e6b1817f-f719-4727-ad61-56061b241d4b","Type":"ContainerStarted","Data":"1cf23a6a50a618fc1417a583060b939a91f9eb5b8e68d73e942aa5a7734e357a"}
Oct 13 17:38:49 crc kubenswrapper[4720]: I1013 17:38:49.014417 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0"
Oct 13 17:38:49 crc kubenswrapper[4720]: I1013 17:38:49.017098 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"f8338f95-b766-4ce8-b60e-020957cdee12","Type":"ContainerStarted","Data":"5ab7aaf4e098f9dfe721c054fda6deadec46425f81beb37dc4927cf6c99e8a15"}
Oct 13 17:38:49 crc kubenswrapper[4720]: I1013 17:38:49.038926 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-4mnjx" podStartSLOduration=26.59009203 podStartE2EDuration="27.038897117s" podCreationTimestamp="2025-10-13 17:38:22 +0000 UTC" firstStartedPulling="2025-10-13 17:38:35.956126902 +0000 UTC m=+861.413377034" lastFinishedPulling="2025-10-13 17:38:36.404931979 +0000 UTC m=+861.862182121" observedRunningTime="2025-10-13 17:38:49.03240935 +0000 UTC m=+874.489659492" watchObservedRunningTime="2025-10-13 17:38:49.038897117 +0000 UTC m=+874.496147289"
Oct 13 17:38:49 crc kubenswrapper[4720]: I1013 17:38:49.062101 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=12.813417816 podStartE2EDuration="23.062082143s" podCreationTimestamp="2025-10-13 17:38:26 +0000 UTC" firstStartedPulling="2025-10-13 17:38:36.645290693 +0000 UTC m=+862.102540825" lastFinishedPulling="2025-10-13 17:38:46.89395498 +0000 UTC m=+872.351205152" observedRunningTime="2025-10-13 17:38:49.060956524 +0000 UTC m=+874.518206666" watchObservedRunningTime="2025-10-13 17:38:49.062082143 +0000 UTC m=+874.519332285"
Oct 13 17:38:49 crc kubenswrapper[4720]: I1013 17:38:49.746121 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-4mnjx"
Oct 13 17:38:49 crc kubenswrapper[4720]: I1013 17:38:49.845976 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b20e6d23-eea6-4e05-95fc-100aa77c82f7-dns-svc\") pod \"b20e6d23-eea6-4e05-95fc-100aa77c82f7\" (UID: \"b20e6d23-eea6-4e05-95fc-100aa77c82f7\") "
Oct 13 17:38:49 crc kubenswrapper[4720]: I1013 17:38:49.846036 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r84z8\" (UniqueName: \"kubernetes.io/projected/b20e6d23-eea6-4e05-95fc-100aa77c82f7-kube-api-access-r84z8\") pod \"b20e6d23-eea6-4e05-95fc-100aa77c82f7\" (UID: \"b20e6d23-eea6-4e05-95fc-100aa77c82f7\") "
Oct 13 17:38:49 crc kubenswrapper[4720]: I1013 17:38:49.846063 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b20e6d23-eea6-4e05-95fc-100aa77c82f7-config\") pod \"b20e6d23-eea6-4e05-95fc-100aa77c82f7\" (UID: \"b20e6d23-eea6-4e05-95fc-100aa77c82f7\") "
Oct 13 17:38:49 crc kubenswrapper[4720]: I1013 17:38:49.853629 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b20e6d23-eea6-4e05-95fc-100aa77c82f7-kube-api-access-r84z8" (OuterVolumeSpecName: "kube-api-access-r84z8") pod "b20e6d23-eea6-4e05-95fc-100aa77c82f7" (UID: "b20e6d23-eea6-4e05-95fc-100aa77c82f7"). InnerVolumeSpecName "kube-api-access-r84z8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 13 17:38:49 crc kubenswrapper[4720]: I1013 17:38:49.892938 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b20e6d23-eea6-4e05-95fc-100aa77c82f7-config" (OuterVolumeSpecName: "config") pod "b20e6d23-eea6-4e05-95fc-100aa77c82f7" (UID: "b20e6d23-eea6-4e05-95fc-100aa77c82f7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 13 17:38:49 crc kubenswrapper[4720]: I1013 17:38:49.900252 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b20e6d23-eea6-4e05-95fc-100aa77c82f7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b20e6d23-eea6-4e05-95fc-100aa77c82f7" (UID: "b20e6d23-eea6-4e05-95fc-100aa77c82f7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 13 17:38:49 crc kubenswrapper[4720]: I1013 17:38:49.947388 4720 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b20e6d23-eea6-4e05-95fc-100aa77c82f7-dns-svc\") on node \"crc\" DevicePath \"\""
Oct 13 17:38:49 crc kubenswrapper[4720]: I1013 17:38:49.947421 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r84z8\" (UniqueName: \"kubernetes.io/projected/b20e6d23-eea6-4e05-95fc-100aa77c82f7-kube-api-access-r84z8\") on node \"crc\" DevicePath \"\""
Oct 13 17:38:49 crc kubenswrapper[4720]: I1013 17:38:49.947436 4720 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b20e6d23-eea6-4e05-95fc-100aa77c82f7-config\") on node \"crc\" DevicePath \"\""
Oct 13 17:38:50 crc kubenswrapper[4720]: I1013 17:38:50.025405 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0b352587-867c-4276-93b5-89c6f922a2cc","Type":"ContainerStarted","Data":"efa665659a38bb5c901ba2e3ceb63eebfea09bf62d86dbbfe61157bad0e61c90"}
Oct 13 17:38:50 crc kubenswrapper[4720]: I1013 17:38:50.025859 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Oct 13 17:38:50 crc kubenswrapper[4720]: I1013 17:38:50.027939 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-vbc6h" event={"ID":"283c0b58-d0a1-4cf1-af87-3859306c4a60","Type":"ContainerStarted","Data":"0db4459814d70138498cba8d823665e323d507b2309f54a5fc0e36af13593aea"}
Oct 13 17:38:50 crc kubenswrapper[4720]: I1013 17:38:50.028376 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-vbc6h"
Oct 13 17:38:50 crc kubenswrapper[4720]: I1013 17:38:50.030313 4720 generic.go:334] "Generic (PLEG): container finished" podID="b20e6d23-eea6-4e05-95fc-100aa77c82f7" containerID="6a7b2884b76881bd0ee57b10ca6d87925c37d26d0018c3b9155abb24bad1e714" exitCode=0
Oct 13 17:38:50 crc kubenswrapper[4720]: I1013 17:38:50.030364 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-4mnjx" event={"ID":"b20e6d23-eea6-4e05-95fc-100aa77c82f7","Type":"ContainerDied","Data":"6a7b2884b76881bd0ee57b10ca6d87925c37d26d0018c3b9155abb24bad1e714"}
Oct 13 17:38:50 crc kubenswrapper[4720]: I1013 17:38:50.030385 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-4mnjx" event={"ID":"b20e6d23-eea6-4e05-95fc-100aa77c82f7","Type":"ContainerDied","Data":"77ae69655fbb9038e9738ddf98830bb028bb7d65ade7dc4d893c097218f174fa"}
Oct 13 17:38:50 crc kubenswrapper[4720]: I1013 17:38:50.030404 4720 scope.go:117] "RemoveContainer" containerID="6a7b2884b76881bd0ee57b10ca6d87925c37d26d0018c3b9155abb24bad1e714"
Oct 13 17:38:50 crc kubenswrapper[4720]: I1013 17:38:50.030492 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-4mnjx"
Oct 13 17:38:50 crc kubenswrapper[4720]: I1013 17:38:50.034303 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"76c17d7a-8441-4b23-839b-f95ac54a6b24","Type":"ContainerStarted","Data":"b82320f15b735d12b0964ce3b17f86b2a98cb4eabc1eed44f6074ca7c0dcf4e4"}
Oct 13 17:38:50 crc kubenswrapper[4720]: I1013 17:38:50.036699 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"01489c11-3710-4d60-a702-71fda5b496ea","Type":"ContainerStarted","Data":"156774624aa1c4879446adacd37b8e7488bc0a32bdd37302720851cf0287d139"}
Oct 13 17:38:50 crc kubenswrapper[4720]: I1013 17:38:50.038615 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"a3b2dccc-71b7-4dd6-9c8d-f1c12382a832","Type":"ContainerStarted","Data":"233936c6f6daa05eeeea3a579d336baaacc9ae1297f34fe55579ceee30a39fc0"}
Oct 13 17:38:50 crc kubenswrapper[4720]: I1013 17:38:50.041491 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-cz99q" event={"ID":"b92efbfa-6501-4601-9432-8c37dbe4e020","Type":"ContainerStarted","Data":"dc54e9d387b5655c5bdcd129e701bd7f08f275fd8b4a35b2fc15446bba1777f7"}
Oct 13 17:38:50 crc kubenswrapper[4720]: I1013 17:38:50.043759 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"fe36eeb1-7f7f-424c-a56c-e96cffc3046d","Type":"ContainerStarted","Data":"6aabf2f90ff05664878d195a40eceff8b7fbf015de1d995986e83296f24f6422"}
Oct 13 17:38:50 crc kubenswrapper[4720]: I1013 17:38:50.046314 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"af59309d-fcea-47ce-85b5-0eafbf780d08","Type":"ContainerStarted","Data":"cbc606f7c5761bcc6663e0a2f8f1346ba2ab48d02354a5f22b7a9a7e5ee7cad0"}
Oct 13 17:38:50 crc kubenswrapper[4720]: I1013 17:38:50.048265 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=9.279566772 podStartE2EDuration="21.048244896s" podCreationTimestamp="2025-10-13 17:38:29 +0000 UTC" firstStartedPulling="2025-10-13 17:38:36.413546081 +0000 UTC m=+861.870796233" lastFinishedPulling="2025-10-13 17:38:48.182224175 +0000 UTC m=+873.639474357" observedRunningTime="2025-10-13 17:38:50.041047011 +0000 UTC m=+875.498297143" watchObservedRunningTime="2025-10-13 17:38:50.048244896 +0000 UTC m=+875.505495028"
Oct 13 17:38:50 crc kubenswrapper[4720]: I1013 17:38:50.051386 4720 scope.go:117] "RemoveContainer" containerID="3fd97b7c555c62c1f26df4ad05505b60ba103c48eb809b01fc50ddb9769818fa"
Oct 13 17:38:50 crc kubenswrapper[4720]: I1013 17:38:50.066324 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-vbc6h" podStartSLOduration=7.225087339 podStartE2EDuration="18.066310941s" podCreationTimestamp="2025-10-13 17:38:32 +0000 UTC" firstStartedPulling="2025-10-13 17:38:36.609697408 +0000 UTC m=+862.066947540" lastFinishedPulling="2025-10-13 17:38:47.450921 +0000 UTC m=+872.908171142" observedRunningTime="2025-10-13 17:38:50.066080725 +0000 UTC m=+875.523330857" watchObservedRunningTime="2025-10-13 17:38:50.066310941 +0000 UTC m=+875.523561063"
Oct 13 17:38:50 crc kubenswrapper[4720]: I1013 17:38:50.097635 4720 scope.go:117] "RemoveContainer" containerID="6a7b2884b76881bd0ee57b10ca6d87925c37d26d0018c3b9155abb24bad1e714"
Oct 13 17:38:50 crc kubenswrapper[4720]: E1013 17:38:50.098395 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a7b2884b76881bd0ee57b10ca6d87925c37d26d0018c3b9155abb24bad1e714\": container with ID starting with 6a7b2884b76881bd0ee57b10ca6d87925c37d26d0018c3b9155abb24bad1e714 not found: ID does not exist" containerID="6a7b2884b76881bd0ee57b10ca6d87925c37d26d0018c3b9155abb24bad1e714"
Oct 13 17:38:50 crc kubenswrapper[4720]: I1013 17:38:50.098474 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a7b2884b76881bd0ee57b10ca6d87925c37d26d0018c3b9155abb24bad1e714"} err="failed to get container status \"6a7b2884b76881bd0ee57b10ca6d87925c37d26d0018c3b9155abb24bad1e714\": rpc error: code = NotFound desc = could not find container \"6a7b2884b76881bd0ee57b10ca6d87925c37d26d0018c3b9155abb24bad1e714\": container with ID starting with 6a7b2884b76881bd0ee57b10ca6d87925c37d26d0018c3b9155abb24bad1e714 not found: ID does not exist"
Oct 13 17:38:50 crc kubenswrapper[4720]: I1013 17:38:50.098494 4720 scope.go:117] "RemoveContainer" containerID="3fd97b7c555c62c1f26df4ad05505b60ba103c48eb809b01fc50ddb9769818fa"
Oct 13 17:38:50 crc kubenswrapper[4720]: E1013 17:38:50.098774 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fd97b7c555c62c1f26df4ad05505b60ba103c48eb809b01fc50ddb9769818fa\": container with ID starting with 3fd97b7c555c62c1f26df4ad05505b60ba103c48eb809b01fc50ddb9769818fa not found: ID does not exist" containerID="3fd97b7c555c62c1f26df4ad05505b60ba103c48eb809b01fc50ddb9769818fa"
Oct 13 17:38:50 crc kubenswrapper[4720]: I1013 17:38:50.098796 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fd97b7c555c62c1f26df4ad05505b60ba103c48eb809b01fc50ddb9769818fa"} err="failed to get container status \"3fd97b7c555c62c1f26df4ad05505b60ba103c48eb809b01fc50ddb9769818fa\": rpc error: code = NotFound desc = could not find container \"3fd97b7c555c62c1f26df4ad05505b60ba103c48eb809b01fc50ddb9769818fa\": container with ID starting with 3fd97b7c555c62c1f26df4ad05505b60ba103c48eb809b01fc50ddb9769818fa not found: ID does not exist"
Oct 13 17:38:50 crc kubenswrapper[4720]: I1013 17:38:50.171678 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-4mnjx"]
Oct 13 17:38:50 crc kubenswrapper[4720]: I1013 17:38:50.177947 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-4mnjx"]
Oct 13 17:38:51 crc kubenswrapper[4720]: I1013 17:38:51.055528 4720 generic.go:334] "Generic (PLEG): container finished" podID="b92efbfa-6501-4601-9432-8c37dbe4e020" containerID="dc54e9d387b5655c5bdcd129e701bd7f08f275fd8b4a35b2fc15446bba1777f7" exitCode=0
Oct 13 17:38:51 crc kubenswrapper[4720]: I1013 17:38:51.055616 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-cz99q" event={"ID":"b92efbfa-6501-4601-9432-8c37dbe4e020","Type":"ContainerDied","Data":"dc54e9d387b5655c5bdcd129e701bd7f08f275fd8b4a35b2fc15446bba1777f7"}
Oct 13 17:38:51 crc kubenswrapper[4720]: I1013 17:38:51.177542 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b20e6d23-eea6-4e05-95fc-100aa77c82f7" path="/var/lib/kubelet/pods/b20e6d23-eea6-4e05-95fc-100aa77c82f7/volumes"
Oct 13 17:38:53 crc kubenswrapper[4720]: I1013 17:38:53.092725 4720 generic.go:334] "Generic (PLEG): container finished" podID="fe36eeb1-7f7f-424c-a56c-e96cffc3046d" containerID="6aabf2f90ff05664878d195a40eceff8b7fbf015de1d995986e83296f24f6422" exitCode=0
Oct 13 17:38:53 crc kubenswrapper[4720]: I1013 17:38:53.092830 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"fe36eeb1-7f7f-424c-a56c-e96cffc3046d","Type":"ContainerDied","Data":"6aabf2f90ff05664878d195a40eceff8b7fbf015de1d995986e83296f24f6422"}
Oct 13 17:38:53 crc kubenswrapper[4720]: I1013 17:38:53.098624 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"01489c11-3710-4d60-a702-71fda5b496ea","Type":"ContainerStarted","Data":"d10252571b55d09eec681816ee3ad6e80de22a6fe8affd3e581202911bb6e347"}
Oct 13 17:38:53 crc kubenswrapper[4720]: I1013 17:38:53.101237 4720 generic.go:334] "Generic (PLEG): container finished" podID="f8338f95-b766-4ce8-b60e-020957cdee12" containerID="5ab7aaf4e098f9dfe721c054fda6deadec46425f81beb37dc4927cf6c99e8a15" exitCode=0
Oct 13 17:38:53 crc kubenswrapper[4720]: I1013 17:38:53.101332 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"f8338f95-b766-4ce8-b60e-020957cdee12","Type":"ContainerDied","Data":"5ab7aaf4e098f9dfe721c054fda6deadec46425f81beb37dc4927cf6c99e8a15"}
Oct 13 17:38:53 crc kubenswrapper[4720]: I1013 17:38:53.103622 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"a3b2dccc-71b7-4dd6-9c8d-f1c12382a832","Type":"ContainerStarted","Data":"d685b59b239fb5a94c3e30d69f7b4fe704a1bdae44788e188642da2d7d90a9d8"}
Oct 13 17:38:53 crc kubenswrapper[4720]: I1013 17:38:53.112100 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-cz99q" event={"ID":"b92efbfa-6501-4601-9432-8c37dbe4e020","Type":"ContainerStarted","Data":"c6c459bc9b2bf788dcc1f3726d6dd8c02c21f8b7a6059072a3a2503db5c44164"}
Oct 13 17:38:53 crc kubenswrapper[4720]: I1013 17:38:53.171228 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=3.460710842 podStartE2EDuration="18.171211467s" podCreationTimestamp="2025-10-13 17:38:35 +0000 UTC" firstStartedPulling="2025-10-13 17:38:37.699743563 +0000 UTC m=+863.156993695" lastFinishedPulling="2025-10-13 17:38:52.410244178 +0000 UTC m=+877.867494320" observedRunningTime="2025-10-13 17:38:53.166005203 +0000 UTC m=+878.623255335" watchObservedRunningTime="2025-10-13 17:38:53.171211467 +0000 UTC m=+878.628461599"
Oct 13 17:38:54 crc kubenswrapper[4720]: I1013 17:38:54.094594 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0"
Oct 13 17:38:54 crc kubenswrapper[4720]: I1013 17:38:54.129373 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"fe36eeb1-7f7f-424c-a56c-e96cffc3046d","Type":"ContainerStarted","Data":"6489173d9312dd7bcb763db899e0ae4289ffaa1f4d1e42908424d691181a517d"}
Oct 13 17:38:54 crc kubenswrapper[4720]: I1013 17:38:54.146639 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"f8338f95-b766-4ce8-b60e-020957cdee12","Type":"ContainerStarted","Data":"f577a39343296195587682fc7ee5cab872707a2c3e3ae3e473042c733da5bfd2"}
Oct 13 17:38:54 crc kubenswrapper[4720]: I1013 17:38:54.159574 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-cz99q" event={"ID":"b92efbfa-6501-4601-9432-8c37dbe4e020","Type":"ContainerStarted","Data":"48c67834dfa9cd124478a94eefeac1cab1322338d8b65e6d6ed028fdd8b22e67"}
Oct 13 17:38:54 crc kubenswrapper[4720]: I1013 17:38:54.159795 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-cz99q"
Oct 13 17:38:54 crc kubenswrapper[4720]: I1013 17:38:54.160362 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=6.795684833 podStartE2EDuration="23.160340046s" podCreationTimestamp="2025-10-13 17:38:31 +0000 UTC" firstStartedPulling="2025-10-13 17:38:36.069369316 +0000 UTC m=+861.526619448" lastFinishedPulling="2025-10-13 17:38:52.434024529 +0000 UTC m=+877.891274661" observedRunningTime="2025-10-13 17:38:53.196285422 +0000 UTC m=+878.653535564" watchObservedRunningTime="2025-10-13 17:38:54.160340046 +0000 UTC m=+879.617590168"
Oct 13 17:38:54 crc kubenswrapper[4720]: I1013 17:38:54.165330 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=18.558499713 podStartE2EDuration="30.165323374s" podCreationTimestamp="2025-10-13 17:38:24 +0000 UTC" firstStartedPulling="2025-10-13 17:38:35.772271581 +0000 UTC m=+861.229521713" lastFinishedPulling="2025-10-13 17:38:47.379095222 +0000 UTC m=+872.836345374" observedRunningTime="2025-10-13 17:38:54.158393016 +0000 UTC m=+879.615643148" watchObservedRunningTime="2025-10-13 17:38:54.165323374 +0000 UTC m=+879.622573506"
Oct 13 17:38:54 crc kubenswrapper[4720]: I1013 17:38:54.178203 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0"
Oct 13 17:38:54 crc kubenswrapper[4720]: I1013 17:38:54.181537 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=18.538961349 podStartE2EDuration="29.181526601s" podCreationTimestamp="2025-10-13 17:38:25 +0000 UTC" firstStartedPulling="2025-10-13 17:38:36.643645771 +0000 UTC m=+862.100895903" lastFinishedPulling="2025-10-13 17:38:47.286184142 +0000 UTC m=+872.743461155" observedRunningTime="2025-10-13 17:38:54.180364491 +0000 UTC m=+879.637614623" watchObservedRunningTime="2025-10-13 17:38:54.181526601 +0000 UTC m=+879.638776733"
Oct 13 17:38:54 crc kubenswrapper[4720]: I1013 17:38:54.222400 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-cz99q" podStartSLOduration=11.491143779 podStartE2EDuration="22.222384202s" podCreationTimestamp="2025-10-13 17:38:32 +0000 UTC" firstStartedPulling="2025-10-13 17:38:36.718083186 +0000 UTC m=+862.175333318" lastFinishedPulling="2025-10-13 17:38:47.449323599 +0000 UTC m=+872.906573741" observedRunningTime="2025-10-13 17:38:54.218328978 +0000 UTC m=+879.675579110" watchObservedRunningTime="2025-10-13 17:38:54.222384202 +0000 UTC m=+879.679634324"
Oct 13 17:38:54 crc kubenswrapper[4720]: I1013 17:38:54.819588 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0"
Oct 13 17:38:54 crc kubenswrapper[4720]: I1013 17:38:54.902897 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0"
Oct 13 17:38:55 crc kubenswrapper[4720]: I1013 17:38:55.189041 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0"
Oct 13 17:38:55 crc kubenswrapper[4720]: I1013 17:38:55.190283 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0"
Oct 13 17:38:55 crc kubenswrapper[4720]: I1013 17:38:55.190417 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-cz99q"
Oct 13 17:38:55 crc kubenswrapper[4720]: I1013 17:38:55.228503 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0"
Oct 13 17:38:55 crc kubenswrapper[4720]: I1013 17:38:55.237759 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0"
Oct 13 17:38:55 crc kubenswrapper[4720]: I1013 17:38:55.476361 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Oct 13 17:38:55 crc kubenswrapper[4720]: I1013 17:38:55.476408 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Oct 13 17:38:55 crc kubenswrapper[4720]: I1013 17:38:55.551704 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-vpfh4"]
Oct 13 17:38:55 crc kubenswrapper[4720]: E1013 17:38:55.552007 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b20e6d23-eea6-4e05-95fc-100aa77c82f7" containerName="dnsmasq-dns"
Oct 13 17:38:55 crc kubenswrapper[4720]: I1013 17:38:55.552017 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="b20e6d23-eea6-4e05-95fc-100aa77c82f7" containerName="dnsmasq-dns"
Oct 13 17:38:55 crc kubenswrapper[4720]: E1013 17:38:55.552040 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b20e6d23-eea6-4e05-95fc-100aa77c82f7" containerName="init"
Oct 13 17:38:55 crc kubenswrapper[4720]: I1013 17:38:55.552046 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="b20e6d23-eea6-4e05-95fc-100aa77c82f7" containerName="init"
Oct 13 17:38:55 crc kubenswrapper[4720]: I1013 17:38:55.552203 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="b20e6d23-eea6-4e05-95fc-100aa77c82f7" containerName="dnsmasq-dns"
Oct 13 17:38:55 crc kubenswrapper[4720]: I1013 17:38:55.552981 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-vpfh4"
Oct 13 17:38:55 crc kubenswrapper[4720]: I1013 17:38:55.557598 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb"
Oct 13 17:38:55 crc kubenswrapper[4720]: I1013 17:38:55.560464 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-vpfh4"]
Oct 13 17:38:55 crc kubenswrapper[4720]: I1013 17:38:55.593441 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-fkb8p"]
Oct 13 17:38:55 crc kubenswrapper[4720]: I1013 17:38:55.594527 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-fkb8p"
Oct 13 17:38:55 crc kubenswrapper[4720]: I1013 17:38:55.599002 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config"
Oct 13 17:38:55 crc kubenswrapper[4720]: I1013 17:38:55.632683 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-fkb8p"]
Oct 13 17:38:55 crc kubenswrapper[4720]: I1013 17:38:55.644278 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ae55068c-9459-4aa6-b591-9fdb67f6c7ff-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-vpfh4\" (UID: \"ae55068c-9459-4aa6-b591-9fdb67f6c7ff\") " pod="openstack/dnsmasq-dns-7fd796d7df-vpfh4"
Oct 13 17:38:55 crc kubenswrapper[4720]: I1013 17:38:55.644547 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtnvx\" (UniqueName: \"kubernetes.io/projected/ae55068c-9459-4aa6-b591-9fdb67f6c7ff-kube-api-access-jtnvx\") pod \"dnsmasq-dns-7fd796d7df-vpfh4\" (UID: \"ae55068c-9459-4aa6-b591-9fdb67f6c7ff\") " pod="openstack/dnsmasq-dns-7fd796d7df-vpfh4"
Oct 13 17:38:55 crc kubenswrapper[4720]: I1013 17:38:55.644686 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae55068c-9459-4aa6-b591-9fdb67f6c7ff-config\") pod \"dnsmasq-dns-7fd796d7df-vpfh4\" (UID: \"ae55068c-9459-4aa6-b591-9fdb67f6c7ff\") " pod="openstack/dnsmasq-dns-7fd796d7df-vpfh4"
Oct 13 17:38:55 crc kubenswrapper[4720]: I1013 17:38:55.644747 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae55068c-9459-4aa6-b591-9fdb67f6c7ff-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-vpfh4\" (UID: \"ae55068c-9459-4aa6-b591-9fdb67f6c7ff\") " pod="openstack/dnsmasq-dns-7fd796d7df-vpfh4"
Oct 13 17:38:55 crc kubenswrapper[4720]: I1013 17:38:55.684258 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"]
Oct 13 17:38:55 crc kubenswrapper[4720]: I1013 17:38:55.685483 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Oct 13 17:38:55 crc kubenswrapper[4720]: I1013 17:38:55.687271 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs"
Oct 13 17:38:55 crc kubenswrapper[4720]: I1013 17:38:55.687506 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts"
Oct 13 17:38:55 crc kubenswrapper[4720]: I1013 17:38:55.688050 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config"
Oct 13 17:38:55 crc kubenswrapper[4720]: I1013 17:38:55.688398 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-8xzfj"
Oct 13 17:38:55 crc kubenswrapper[4720]: I1013 17:38:55.705101 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Oct 13 17:38:55 crc kubenswrapper[4720]: I1013 17:38:55.711429 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-vpfh4"]
Oct 13 17:38:55 crc kubenswrapper[4720]: E1013 17:38:55.715641 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config dns-svc kube-api-access-jtnvx ovsdbserver-nb], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-7fd796d7df-vpfh4" podUID="ae55068c-9459-4aa6-b591-9fdb67f6c7ff"
Oct 13 17:38:55 crc kubenswrapper[4720]: I1013 17:38:55.738523 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-rrkt8"]
Oct 13 17:38:55 crc kubenswrapper[4720]: I1013 17:38:55.739733 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-rrkt8"
Oct 13 17:38:55 crc kubenswrapper[4720]: I1013 17:38:55.742547 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb"
Oct 13 17:38:55 crc kubenswrapper[4720]: I1013 17:38:55.745966 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6433630-935f-4a61-acab-4ceb6de36866-combined-ca-bundle\") pod \"ovn-controller-metrics-fkb8p\" (UID: \"d6433630-935f-4a61-acab-4ceb6de36866\") " pod="openstack/ovn-controller-metrics-fkb8p"
Oct 13 17:38:55 crc kubenswrapper[4720]: I1013 17:38:55.746012 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/d6433630-935f-4a61-acab-4ceb6de36866-ovn-rundir\") pod \"ovn-controller-metrics-fkb8p\" (UID: \"d6433630-935f-4a61-acab-4ceb6de36866\") " pod="openstack/ovn-controller-metrics-fkb8p"
Oct 13 17:38:55 crc kubenswrapper[4720]: I1013 17:38:55.746044 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wq6rz\" (UniqueName: \"kubernetes.io/projected/d6433630-935f-4a61-acab-4ceb6de36866-kube-api-access-wq6rz\") pod \"ovn-controller-metrics-fkb8p\" (UID: \"d6433630-935f-4a61-acab-4ceb6de36866\") " pod="openstack/ovn-controller-metrics-fkb8p"
Oct 13 17:38:55 crc kubenswrapper[4720]: I1013 17:38:55.746073 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/d6433630-935f-4a61-acab-4ceb6de36866-ovs-rundir\") pod \"ovn-controller-metrics-fkb8p\" (UID: \"d6433630-935f-4a61-acab-4ceb6de36866\") " pod="openstack/ovn-controller-metrics-fkb8p"
Oct 13 17:38:55 crc kubenswrapper[4720]: I1013 17:38:55.746108 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtnvx\" (UniqueName: \"kubernetes.io/projected/ae55068c-9459-4aa6-b591-9fdb67f6c7ff-kube-api-access-jtnvx\") pod \"dnsmasq-dns-7fd796d7df-vpfh4\" (UID: \"ae55068c-9459-4aa6-b591-9fdb67f6c7ff\") " pod="openstack/dnsmasq-dns-7fd796d7df-vpfh4"
Oct 13 17:38:55 crc kubenswrapper[4720]: I1013 17:38:55.746130 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6433630-935f-4a61-acab-4ceb6de36866-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-fkb8p\" (UID: \"d6433630-935f-4a61-acab-4ceb6de36866\") " pod="openstack/ovn-controller-metrics-fkb8p"
Oct 13 17:38:55 crc kubenswrapper[4720]: I1013 17:38:55.746162 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6433630-935f-4a61-acab-4ceb6de36866-config\") pod \"ovn-controller-metrics-fkb8p\" (UID: \"d6433630-935f-4a61-acab-4ceb6de36866\") " pod="openstack/ovn-controller-metrics-fkb8p"
Oct 13 17:38:55 crc kubenswrapper[4720]: I1013 17:38:55.746200 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae55068c-9459-4aa6-b591-9fdb67f6c7ff-config\") pod \"dnsmasq-dns-7fd796d7df-vpfh4\" (UID: \"ae55068c-9459-4aa6-b591-9fdb67f6c7ff\") " pod="openstack/dnsmasq-dns-7fd796d7df-vpfh4"
Oct 13 17:38:55 crc kubenswrapper[4720]: I1013 17:38:55.746231 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae55068c-9459-4aa6-b591-9fdb67f6c7ff-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-vpfh4\" (UID: \"ae55068c-9459-4aa6-b591-9fdb67f6c7ff\") " pod="openstack/dnsmasq-dns-7fd796d7df-vpfh4"
Oct 13 17:38:55 crc kubenswrapper[4720]: I1013 17:38:55.746249 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ae55068c-9459-4aa6-b591-9fdb67f6c7ff-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-vpfh4\" (UID: \"ae55068c-9459-4aa6-b591-9fdb67f6c7ff\") " pod="openstack/dnsmasq-dns-7fd796d7df-vpfh4"
Oct 13 17:38:55 crc kubenswrapper[4720]: I1013 17:38:55.749339 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ae55068c-9459-4aa6-b591-9fdb67f6c7ff-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-vpfh4\" (UID: \"ae55068c-9459-4aa6-b591-9fdb67f6c7ff\") " pod="openstack/dnsmasq-dns-7fd796d7df-vpfh4"
Oct 13 17:38:55 crc kubenswrapper[4720]: I1013 17:38:55.750150 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae55068c-9459-4aa6-b591-9fdb67f6c7ff-config\") pod \"dnsmasq-dns-7fd796d7df-vpfh4\" (UID: \"ae55068c-9459-4aa6-b591-9fdb67f6c7ff\") " pod="openstack/dnsmasq-dns-7fd796d7df-vpfh4"
Oct 13 17:38:55 crc kubenswrapper[4720]: I1013 17:38:55.750655 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae55068c-9459-4aa6-b591-9fdb67f6c7ff-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-vpfh4\" (UID: \"ae55068c-9459-4aa6-b591-9fdb67f6c7ff\") " pod="openstack/dnsmasq-dns-7fd796d7df-vpfh4"
Oct 13 17:38:55 crc kubenswrapper[4720]: I1013 17:38:55.755262 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-rrkt8"]
Oct 13 17:38:55 crc kubenswrapper[4720]: I1013 17:38:55.775914 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtnvx\" (UniqueName: \"kubernetes.io/projected/ae55068c-9459-4aa6-b591-9fdb67f6c7ff-kube-api-access-jtnvx\") pod \"dnsmasq-dns-7fd796d7df-vpfh4\" (UID: \"ae55068c-9459-4aa6-b591-9fdb67f6c7ff\") " pod="openstack/dnsmasq-dns-7fd796d7df-vpfh4"
Oct 13 17:38:55 crc kubenswrapper[4720]: I1013 17:38:55.847917 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6433630-935f-4a61-acab-4ceb6de36866-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-fkb8p\" (UID: \"d6433630-935f-4a61-acab-4ceb6de36866\") " pod="openstack/ovn-controller-metrics-fkb8p"
Oct 13 17:38:55 crc kubenswrapper[4720]: I1013 17:38:55.847967 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rbv8\" (UniqueName: \"kubernetes.io/projected/e9fb0805-93ca-4cd8-9ab3-108b2a4e94e9-kube-api-access-6rbv8\") pod \"dnsmasq-dns-86db49b7ff-rrkt8\" (UID: \"e9fb0805-93ca-4cd8-9ab3-108b2a4e94e9\") " pod="openstack/dnsmasq-dns-86db49b7ff-rrkt8"
Oct 13 17:38:55 crc kubenswrapper[4720]: I1013 17:38:55.847997 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6433630-935f-4a61-acab-4ceb6de36866-config\") pod \"ovn-controller-metrics-fkb8p\" (UID: \"d6433630-935f-4a61-acab-4ceb6de36866\") " pod="openstack/ovn-controller-metrics-fkb8p"
Oct 13 17:38:55 crc kubenswrapper[4720]: I1013 17:38:55.848021 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de1b534b-dfe4-42f3-ac5f-4aace4f956b6-config\") pod \"ovn-northd-0\" (UID: \"de1b534b-dfe4-42f3-ac5f-4aace4f956b6\") " pod="openstack/ovn-northd-0"
Oct 13 17:38:55 crc kubenswrapper[4720]: I1013 17:38:55.848038 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/de1b534b-dfe4-42f3-ac5f-4aace4f956b6-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"de1b534b-dfe4-42f3-ac5f-4aace4f956b6\") " pod="openstack/ovn-northd-0"
Oct 13 17:38:55 crc kubenswrapper[4720]: I1013 17:38:55.848056 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/de1b534b-dfe4-42f3-ac5f-4aace4f956b6-scripts\") pod \"ovn-northd-0\" (UID: \"de1b534b-dfe4-42f3-ac5f-4aace4f956b6\") " pod="openstack/ovn-northd-0"
Oct 13 17:38:55 crc kubenswrapper[4720]: I1013 17:38:55.848074 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e9fb0805-93ca-4cd8-9ab3-108b2a4e94e9-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-rrkt8\" (UID: \"e9fb0805-93ca-4cd8-9ab3-108b2a4e94e9\") " pod="openstack/dnsmasq-dns-86db49b7ff-rrkt8"
Oct 13 17:38:55 crc kubenswrapper[4720]: I1013 17:38:55.848101 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/de1b534b-dfe4-42f3-ac5f-4aace4f956b6-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"de1b534b-dfe4-42f3-ac5f-4aace4f956b6\") " pod="openstack/ovn-northd-0"
Oct 13 17:38:55 crc kubenswrapper[4720]: I1013 17:38:55.848120 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de1b534b-dfe4-42f3-ac5f-4aace4f956b6-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"de1b534b-dfe4-42f3-ac5f-4aace4f956b6\") " pod="openstack/ovn-northd-0"
Oct 13 17:38:55 crc kubenswrapper[4720]: I1013 17:38:55.848136 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e9fb0805-93ca-4cd8-9ab3-108b2a4e94e9-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-rrkt8\" (UID: \"e9fb0805-93ca-4cd8-9ab3-108b2a4e94e9\") " pod="openstack/dnsmasq-dns-86db49b7ff-rrkt8"
Oct 13 17:38:55 crc kubenswrapper[4720]: I1013 17:38:55.848326 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9fb0805-93ca-4cd8-9ab3-108b2a4e94e9-config\") pod \"dnsmasq-dns-86db49b7ff-rrkt8\" (UID: \"e9fb0805-93ca-4cd8-9ab3-108b2a4e94e9\") " pod="openstack/dnsmasq-dns-86db49b7ff-rrkt8"
Oct 13 17:38:55 crc kubenswrapper[4720]: I1013 17:38:55.848348 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6433630-935f-4a61-acab-4ceb6de36866-combined-ca-bundle\") pod \"ovn-controller-metrics-fkb8p\" (UID: \"d6433630-935f-4a61-acab-4ceb6de36866\") " pod="openstack/ovn-controller-metrics-fkb8p"
Oct 13 17:38:55 crc kubenswrapper[4720]: I1013 17:38:55.848370 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/d6433630-935f-4a61-acab-4ceb6de36866-ovn-rundir\") pod \"ovn-controller-metrics-fkb8p\" (UID: \"d6433630-935f-4a61-acab-4ceb6de36866\") " pod="openstack/ovn-controller-metrics-fkb8p"
Oct 13 17:38:55 crc kubenswrapper[4720]: I1013 17:38:55.848398 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e9fb0805-93ca-4cd8-9ab3-108b2a4e94e9-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-rrkt8\" (UID: \"e9fb0805-93ca-4cd8-9ab3-108b2a4e94e9\") " pod="openstack/dnsmasq-dns-86db49b7ff-rrkt8"
Oct 13 17:38:55 crc kubenswrapper[4720]: I1013 17:38:55.848420 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wq6rz\" (UniqueName: \"kubernetes.io/projected/d6433630-935f-4a61-acab-4ceb6de36866-kube-api-access-wq6rz\") pod \"ovn-controller-metrics-fkb8p\" (UID: \"d6433630-935f-4a61-acab-4ceb6de36866\") " pod="openstack/ovn-controller-metrics-fkb8p"
Oct 13 17:38:55 crc kubenswrapper[4720]: I1013 17:38:55.848438 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7j9fp\" (UniqueName: \"kubernetes.io/projected/de1b534b-dfe4-42f3-ac5f-4aace4f956b6-kube-api-access-7j9fp\") pod \"ovn-northd-0\" (UID: \"de1b534b-dfe4-42f3-ac5f-4aace4f956b6\") " pod="openstack/ovn-northd-0"
Oct 13 17:38:55 crc kubenswrapper[4720]: I1013 17:38:55.848543 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/d6433630-935f-4a61-acab-4ceb6de36866-ovs-rundir\") pod \"ovn-controller-metrics-fkb8p\" (UID: \"d6433630-935f-4a61-acab-4ceb6de36866\") " pod="openstack/ovn-controller-metrics-fkb8p"
Oct 13 17:38:55 crc kubenswrapper[4720]: I1013 17:38:55.848590 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/de1b534b-dfe4-42f3-ac5f-4aace4f956b6-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"de1b534b-dfe4-42f3-ac5f-4aace4f956b6\") " pod="openstack/ovn-northd-0"
Oct 13 17:38:55 crc kubenswrapper[4720]: I1013 17:38:55.848692 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/d6433630-935f-4a61-acab-4ceb6de36866-ovn-rundir\") pod \"ovn-controller-metrics-fkb8p\" (UID: \"d6433630-935f-4a61-acab-4ceb6de36866\") " pod="openstack/ovn-controller-metrics-fkb8p"
Oct 13 17:38:55 crc kubenswrapper[4720]: I1013 17:38:55.848726 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/d6433630-935f-4a61-acab-4ceb6de36866-ovs-rundir\") pod \"ovn-controller-metrics-fkb8p\" (UID: \"d6433630-935f-4a61-acab-4ceb6de36866\") " pod="openstack/ovn-controller-metrics-fkb8p"
Oct 13 17:38:55 crc kubenswrapper[4720]: I1013 17:38:55.849602 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6433630-935f-4a61-acab-4ceb6de36866-config\") pod \"ovn-controller-metrics-fkb8p\" (UID: \"d6433630-935f-4a61-acab-4ceb6de36866\") " pod="openstack/ovn-controller-metrics-fkb8p"
Oct 13 17:38:55 crc kubenswrapper[4720]: I1013 17:38:55.851244 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6433630-935f-4a61-acab-4ceb6de36866-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-fkb8p\" (UID: \"d6433630-935f-4a61-acab-4ceb6de36866\") " pod="openstack/ovn-controller-metrics-fkb8p"
Oct 13 17:38:55 crc kubenswrapper[4720]: I1013 17:38:55.853395 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6433630-935f-4a61-acab-4ceb6de36866-combined-ca-bundle\") pod \"ovn-controller-metrics-fkb8p\" (UID: \"d6433630-935f-4a61-acab-4ceb6de36866\") " pod="openstack/ovn-controller-metrics-fkb8p"
Oct 13 17:38:55 crc kubenswrapper[4720]: I1013 17:38:55.864596 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wq6rz\" (UniqueName: \"kubernetes.io/projected/d6433630-935f-4a61-acab-4ceb6de36866-kube-api-access-wq6rz\") pod \"ovn-controller-metrics-fkb8p\" (UID: \"d6433630-935f-4a61-acab-4ceb6de36866\") " pod="openstack/ovn-controller-metrics-fkb8p"
Oct 13 17:38:55 crc kubenswrapper[4720]: I1013 17:38:55.927442 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-fkb8p"
Oct 13 17:38:55 crc kubenswrapper[4720]: I1013 17:38:55.950464 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/de1b534b-dfe4-42f3-ac5f-4aace4f956b6-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"de1b534b-dfe4-42f3-ac5f-4aace4f956b6\") " pod="openstack/ovn-northd-0"
Oct 13 17:38:55 crc kubenswrapper[4720]: I1013 17:38:55.950528 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rbv8\" (UniqueName: \"kubernetes.io/projected/e9fb0805-93ca-4cd8-9ab3-108b2a4e94e9-kube-api-access-6rbv8\") pod \"dnsmasq-dns-86db49b7ff-rrkt8\" (UID: \"e9fb0805-93ca-4cd8-9ab3-108b2a4e94e9\") " pod="openstack/dnsmasq-dns-86db49b7ff-rrkt8"
Oct 13 17:38:55 crc kubenswrapper[4720]: I1013 17:38:55.950563 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de1b534b-dfe4-42f3-ac5f-4aace4f956b6-config\") pod \"ovn-northd-0\" (UID: \"de1b534b-dfe4-42f3-ac5f-4aace4f956b6\") " pod="openstack/ovn-northd-0"
Oct 13 17:38:55 crc kubenswrapper[4720]: I1013 17:38:55.950581 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/de1b534b-dfe4-42f3-ac5f-4aace4f956b6-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"de1b534b-dfe4-42f3-ac5f-4aace4f956b6\") " pod="openstack/ovn-northd-0"
Oct 13 17:38:55 crc kubenswrapper[4720]: I1013 17:38:55.950604 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/de1b534b-dfe4-42f3-ac5f-4aace4f956b6-scripts\") pod \"ovn-northd-0\" (UID: \"de1b534b-dfe4-42f3-ac5f-4aace4f956b6\") " pod="openstack/ovn-northd-0"
Oct 13 17:38:55 crc kubenswrapper[4720]: I1013 17:38:55.950622 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e9fb0805-93ca-4cd8-9ab3-108b2a4e94e9-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-rrkt8\" (UID: \"e9fb0805-93ca-4cd8-9ab3-108b2a4e94e9\") " pod="openstack/dnsmasq-dns-86db49b7ff-rrkt8"
Oct 13 17:38:55 crc kubenswrapper[4720]: I1013 17:38:55.950651 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/de1b534b-dfe4-42f3-ac5f-4aace4f956b6-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"de1b534b-dfe4-42f3-ac5f-4aace4f956b6\") " pod="openstack/ovn-northd-0"
Oct 13 17:38:55 crc kubenswrapper[4720]: I1013 17:38:55.950670 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de1b534b-dfe4-42f3-ac5f-4aace4f956b6-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"de1b534b-dfe4-42f3-ac5f-4aace4f956b6\") " pod="openstack/ovn-northd-0"
Oct 13 17:38:55 crc kubenswrapper[4720]: I1013 17:38:55.950686 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e9fb0805-93ca-4cd8-9ab3-108b2a4e94e9-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-rrkt8\" (UID: \"e9fb0805-93ca-4cd8-9ab3-108b2a4e94e9\") " pod="openstack/dnsmasq-dns-86db49b7ff-rrkt8"
Oct 13 17:38:55 crc kubenswrapper[4720]: I1013 17:38:55.950711 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9fb0805-93ca-4cd8-9ab3-108b2a4e94e9-config\") pod \"dnsmasq-dns-86db49b7ff-rrkt8\" (UID: \"e9fb0805-93ca-4cd8-9ab3-108b2a4e94e9\") " pod="openstack/dnsmasq-dns-86db49b7ff-rrkt8"
Oct 13 17:38:55 crc kubenswrapper[4720]: I1013 17:38:55.950743 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e9fb0805-93ca-4cd8-9ab3-108b2a4e94e9-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-rrkt8\" (UID: \"e9fb0805-93ca-4cd8-9ab3-108b2a4e94e9\") " pod="openstack/dnsmasq-dns-86db49b7ff-rrkt8"
Oct 13 17:38:55 crc kubenswrapper[4720]: I1013 17:38:55.950763 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7j9fp\" (UniqueName: \"kubernetes.io/projected/de1b534b-dfe4-42f3-ac5f-4aace4f956b6-kube-api-access-7j9fp\") pod \"ovn-northd-0\" (UID: \"de1b534b-dfe4-42f3-ac5f-4aace4f956b6\") " pod="openstack/ovn-northd-0"
Oct 13 17:38:55 crc kubenswrapper[4720]: I1013 17:38:55.952422 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e9fb0805-93ca-4cd8-9ab3-108b2a4e94e9-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-rrkt8\" (UID: \"e9fb0805-93ca-4cd8-9ab3-108b2a4e94e9\") " pod="openstack/dnsmasq-dns-86db49b7ff-rrkt8"
Oct 13 17:38:55 crc kubenswrapper[4720]: I1013 17:38:55.953418 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e9fb0805-93ca-4cd8-9ab3-108b2a4e94e9-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-rrkt8\" (UID: \"e9fb0805-93ca-4cd8-9ab3-108b2a4e94e9\") " pod="openstack/dnsmasq-dns-86db49b7ff-rrkt8"
Oct 13 17:38:55 crc kubenswrapper[4720]: I1013 17:38:55.953802 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de1b534b-dfe4-42f3-ac5f-4aace4f956b6-config\") pod \"ovn-northd-0\" (UID: \"de1b534b-dfe4-42f3-ac5f-4aace4f956b6\") " pod="openstack/ovn-northd-0"
Oct 13 17:38:55 crc kubenswrapper[4720]: I1013 17:38:55.953826 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/de1b534b-dfe4-42f3-ac5f-4aace4f956b6-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"de1b534b-dfe4-42f3-ac5f-4aace4f956b6\") " pod="openstack/ovn-northd-0"
Oct 13 17:38:55 crc kubenswrapper[4720]: I1013 17:38:55.954014 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e9fb0805-93ca-4cd8-9ab3-108b2a4e94e9-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-rrkt8\" (UID: \"e9fb0805-93ca-4cd8-9ab3-108b2a4e94e9\") " pod="openstack/dnsmasq-dns-86db49b7ff-rrkt8"
Oct 13 17:38:55 crc kubenswrapper[4720]: I1013 17:38:55.954544 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9fb0805-93ca-4cd8-9ab3-108b2a4e94e9-config\") pod \"dnsmasq-dns-86db49b7ff-rrkt8\" (UID: \"e9fb0805-93ca-4cd8-9ab3-108b2a4e94e9\") " pod="openstack/dnsmasq-dns-86db49b7ff-rrkt8"
Oct 13 17:38:55 crc kubenswrapper[4720]: I1013 17:38:55.954643 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/de1b534b-dfe4-42f3-ac5f-4aace4f956b6-scripts\") pod \"ovn-northd-0\" (UID: \"de1b534b-dfe4-42f3-ac5f-4aace4f956b6\") " pod="openstack/ovn-northd-0"
Oct 13 17:38:55 crc kubenswrapper[4720]: I1013 17:38:55.957437 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/de1b534b-dfe4-42f3-ac5f-4aace4f956b6-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"de1b534b-dfe4-42f3-ac5f-4aace4f956b6\") " pod="openstack/ovn-northd-0"
Oct 13 17:38:55 crc kubenswrapper[4720]: I1013 17:38:55.957725 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/de1b534b-dfe4-42f3-ac5f-4aace4f956b6-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"de1b534b-dfe4-42f3-ac5f-4aace4f956b6\") " pod="openstack/ovn-northd-0"
Oct 13 17:38:55 crc kubenswrapper[4720]: I1013 17:38:55.958811 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de1b534b-dfe4-42f3-ac5f-4aace4f956b6-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"de1b534b-dfe4-42f3-ac5f-4aace4f956b6\") " pod="openstack/ovn-northd-0"
Oct 13 17:38:55 crc kubenswrapper[4720]: I1013 17:38:55.975828 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rbv8\" (UniqueName: \"kubernetes.io/projected/e9fb0805-93ca-4cd8-9ab3-108b2a4e94e9-kube-api-access-6rbv8\") pod \"dnsmasq-dns-86db49b7ff-rrkt8\" (UID: \"e9fb0805-93ca-4cd8-9ab3-108b2a4e94e9\") " pod="openstack/dnsmasq-dns-86db49b7ff-rrkt8"
Oct 13 17:38:55 crc kubenswrapper[4720]: I1013 17:38:55.981182 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7j9fp\" (UniqueName: \"kubernetes.io/projected/de1b534b-dfe4-42f3-ac5f-4aace4f956b6-kube-api-access-7j9fp\") pod \"ovn-northd-0\" (UID: \"de1b534b-dfe4-42f3-ac5f-4aace4f956b6\") " pod="openstack/ovn-northd-0"
Oct 13 17:38:56 crc kubenswrapper[4720]: I1013 17:38:56.004707 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Oct 13 17:38:56 crc kubenswrapper[4720]: I1013 17:38:56.065217 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-rrkt8"
Oct 13 17:38:56 crc kubenswrapper[4720]: I1013 17:38:56.179822 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-vpfh4"
Oct 13 17:38:56 crc kubenswrapper[4720]: I1013 17:38:56.196760 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-vpfh4"
Oct 13 17:38:56 crc kubenswrapper[4720]: I1013 17:38:56.253720 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jtnvx\" (UniqueName: \"kubernetes.io/projected/ae55068c-9459-4aa6-b591-9fdb67f6c7ff-kube-api-access-jtnvx\") pod \"ae55068c-9459-4aa6-b591-9fdb67f6c7ff\" (UID: \"ae55068c-9459-4aa6-b591-9fdb67f6c7ff\") "
Oct 13 17:38:56 crc kubenswrapper[4720]: I1013 17:38:56.254131 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae55068c-9459-4aa6-b591-9fdb67f6c7ff-config\") pod \"ae55068c-9459-4aa6-b591-9fdb67f6c7ff\" (UID: \"ae55068c-9459-4aa6-b591-9fdb67f6c7ff\") "
Oct 13 17:38:56 crc kubenswrapper[4720]: I1013 17:38:56.254207 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae55068c-9459-4aa6-b591-9fdb67f6c7ff-dns-svc\") pod \"ae55068c-9459-4aa6-b591-9fdb67f6c7ff\" (UID: \"ae55068c-9459-4aa6-b591-9fdb67f6c7ff\") "
Oct 13 17:38:56 crc kubenswrapper[4720]: I1013 17:38:56.254228 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ae55068c-9459-4aa6-b591-9fdb67f6c7ff-ovsdbserver-nb\") pod \"ae55068c-9459-4aa6-b591-9fdb67f6c7ff\" (UID: \"ae55068c-9459-4aa6-b591-9fdb67f6c7ff\") "
Oct 13 17:38:56 crc kubenswrapper[4720]: I1013 17:38:56.254609 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae55068c-9459-4aa6-b591-9fdb67f6c7ff-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ae55068c-9459-4aa6-b591-9fdb67f6c7ff" (UID: "ae55068c-9459-4aa6-b591-9fdb67f6c7ff"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 13 17:38:56 crc kubenswrapper[4720]: I1013 17:38:56.254736 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae55068c-9459-4aa6-b591-9fdb67f6c7ff-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ae55068c-9459-4aa6-b591-9fdb67f6c7ff" (UID: "ae55068c-9459-4aa6-b591-9fdb67f6c7ff"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 13 17:38:56 crc kubenswrapper[4720]: I1013 17:38:56.255986 4720 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae55068c-9459-4aa6-b591-9fdb67f6c7ff-dns-svc\") on node \"crc\" DevicePath \"\""
Oct 13 17:38:56 crc kubenswrapper[4720]: I1013 17:38:56.256013 4720 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ae55068c-9459-4aa6-b591-9fdb67f6c7ff-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Oct 13 17:38:56 crc kubenswrapper[4720]: I1013 17:38:56.257131 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae55068c-9459-4aa6-b591-9fdb67f6c7ff-config" (OuterVolumeSpecName: "config") pod "ae55068c-9459-4aa6-b591-9fdb67f6c7ff" (UID: "ae55068c-9459-4aa6-b591-9fdb67f6c7ff"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 13 17:38:56 crc kubenswrapper[4720]: I1013 17:38:56.259300 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae55068c-9459-4aa6-b591-9fdb67f6c7ff-kube-api-access-jtnvx" (OuterVolumeSpecName: "kube-api-access-jtnvx") pod "ae55068c-9459-4aa6-b591-9fdb67f6c7ff" (UID: "ae55068c-9459-4aa6-b591-9fdb67f6c7ff"). InnerVolumeSpecName "kube-api-access-jtnvx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 13 17:38:56 crc kubenswrapper[4720]: I1013 17:38:56.359637 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jtnvx\" (UniqueName: \"kubernetes.io/projected/ae55068c-9459-4aa6-b591-9fdb67f6c7ff-kube-api-access-jtnvx\") on node \"crc\" DevicePath \"\""
Oct 13 17:38:56 crc kubenswrapper[4720]: I1013 17:38:56.359693 4720 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae55068c-9459-4aa6-b591-9fdb67f6c7ff-config\") on node \"crc\" DevicePath \"\""
Oct 13 17:38:56 crc kubenswrapper[4720]: I1013 17:38:56.425043 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-fkb8p"]
Oct 13 17:38:56 crc kubenswrapper[4720]: I1013 17:38:56.530471 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Oct 13 17:38:56 crc kubenswrapper[4720]: I1013 17:38:56.564472 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-rrkt8"]
Oct 13 17:38:56 crc kubenswrapper[4720]: I1013 17:38:56.962989 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Oct 13 17:38:56 crc kubenswrapper[4720]: I1013 17:38:56.963232 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Oct 13 17:38:57 crc kubenswrapper[4720]: I1013 17:38:57.190722 4720 generic.go:334] "Generic (PLEG): container finished" podID="e9fb0805-93ca-4cd8-9ab3-108b2a4e94e9" containerID="4d4c6a239fbfd52b3ab03278acba83cffb387c957cf97df5e5861934491613b3" exitCode=0
Oct 13 17:38:57 crc kubenswrapper[4720]: I1013 17:38:57.190782 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-rrkt8" event={"ID":"e9fb0805-93ca-4cd8-9ab3-108b2a4e94e9","Type":"ContainerDied","Data":"4d4c6a239fbfd52b3ab03278acba83cffb387c957cf97df5e5861934491613b3"}
Oct 13 17:38:57 crc kubenswrapper[4720]: I1013 17:38:57.191059 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-rrkt8" event={"ID":"e9fb0805-93ca-4cd8-9ab3-108b2a4e94e9","Type":"ContainerStarted","Data":"bbb7763f1f044bc16127c91e93bf47795e6088fa578359e2e221dced43654207"}
Oct 13 17:38:57 crc kubenswrapper[4720]: I1013 17:38:57.193041 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-fkb8p" event={"ID":"d6433630-935f-4a61-acab-4ceb6de36866","Type":"ContainerStarted","Data":"86088e7c46a9f29eec6ef2db4b9fd8a57c31be99a9ebfa1556a8803d9d4d55d1"}
Oct 13 17:38:57 crc kubenswrapper[4720]: I1013 17:38:57.193070 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-fkb8p" event={"ID":"d6433630-935f-4a61-acab-4ceb6de36866","Type":"ContainerStarted","Data":"1c50de9f5a83c92ff783693f7d2e47f2b3ef1f31c66823d1ef2cdb8041a86524"}
Oct 13 17:38:57 crc kubenswrapper[4720]: I1013 17:38:57.197573 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-vpfh4"
Oct 13 17:38:57 crc kubenswrapper[4720]: I1013 17:38:57.197647 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"de1b534b-dfe4-42f3-ac5f-4aace4f956b6","Type":"ContainerStarted","Data":"3406321f7de19087c005558522c6538f9958af7f950ecc497a522349a217d2ce"}
Oct 13 17:38:57 crc kubenswrapper[4720]: I1013 17:38:57.252571 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-fkb8p" podStartSLOduration=2.252551855 podStartE2EDuration="2.252551855s" podCreationTimestamp="2025-10-13 17:38:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:38:57.252490424 +0000 UTC m=+882.709740576" watchObservedRunningTime="2025-10-13 17:38:57.252551855 +0000 UTC m=+882.709801997"
Oct 13 17:38:57 crc kubenswrapper[4720]: I1013 17:38:57.318858 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0"
Oct 13 17:38:57 crc kubenswrapper[4720]: I1013 17:38:57.399622 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-vpfh4"]
Oct 13 17:38:57 crc kubenswrapper[4720]: I1013 17:38:57.428740 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-vpfh4"]
Oct 13 17:38:58 crc kubenswrapper[4720]: I1013 17:38:58.182569 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0"
Oct 13 17:38:58 crc kubenswrapper[4720]: I1013 17:38:58.222253 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"de1b534b-dfe4-42f3-ac5f-4aace4f956b6","Type":"ContainerStarted","Data":"96c1aa423f4d469ffefeb0e0169ca33708d43f53e5fea6ee0040c9c4a3208344"}
Oct 13 17:38:58 crc kubenswrapper[4720]: I1013 17:38:58.225407 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-rrkt8" event={"ID":"e9fb0805-93ca-4cd8-9ab3-108b2a4e94e9","Type":"ContainerStarted","Data":"85bd13e9da984d92d9214f3e3727410d31776fe8ad949845e111b07840a1c422"}
Oct 13 17:38:58 crc kubenswrapper[4720]: I1013 17:38:58.225632 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-rrkt8"
Oct 13 17:38:58 crc kubenswrapper[4720]: I1013 17:38:58.251494 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-rrkt8" podStartSLOduration=3.251475257 podStartE2EDuration="3.251475257s" podCreationTimestamp="2025-10-13 17:38:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:38:58.245598946 +0000 UTC m=+883.702849088" watchObservedRunningTime="2025-10-13 17:38:58.251475257 +0000 UTC m=+883.708725389"
Oct 13 17:38:58 crc kubenswrapper[4720]: I1013 17:38:58.270837 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Oct 13 17:38:59 crc kubenswrapper[4720]: I1013 17:38:59.187336 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae55068c-9459-4aa6-b591-9fdb67f6c7ff" path="/var/lib/kubelet/pods/ae55068c-9459-4aa6-b591-9fdb67f6c7ff/volumes"
Oct 13 17:38:59 crc kubenswrapper[4720]: I1013 17:38:59.241484 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"de1b534b-dfe4-42f3-ac5f-4aace4f956b6","Type":"ContainerStarted","Data":"40c76e03bd9f884b9b1b18eacfea2efd57affb29673e67077ff3602126c735aa"}
Oct 13 17:38:59 crc kubenswrapper[4720]: I1013 17:38:59.241602 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0"
Oct 13 17:38:59 crc kubenswrapper[4720]: I1013 17:38:59.278591 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.910548114 podStartE2EDuration="4.278566172s" podCreationTimestamp="2025-10-13 17:38:55 +0000 UTC" firstStartedPulling="2025-10-13 17:38:56.520345766 +0000 UTC m=+881.977595918" lastFinishedPulling="2025-10-13 17:38:57.888363834 +0000 UTC m=+883.345613976" observedRunningTime="2025-10-13 17:38:59.26952927 +0000 UTC m=+884.726779472" watchObservedRunningTime="2025-10-13 17:38:59.278566172 +0000 UTC m=+884.735816344"
Oct 13 17:38:59 crc kubenswrapper[4720]: I1013 17:38:59.432570 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Oct 13 17:38:59 crc kubenswrapper[4720]: I1013 17:38:59.440840 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-rrkt8"]
Oct 13 17:38:59 crc kubenswrapper[4720]: I1013 17:38:59.470604 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-mxpxk"]
Oct 13 17:38:59 crc kubenswrapper[4720]: I1013 17:38:59.471862 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-mxpxk"
Oct 13 17:38:59 crc kubenswrapper[4720]: I1013 17:38:59.487250 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-mxpxk"]
Oct 13 17:38:59 crc kubenswrapper[4720]: I1013 17:38:59.619680 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2442614f-edaa-4e64-9ed1-fc0520a37cfd-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-mxpxk\" (UID: \"2442614f-edaa-4e64-9ed1-fc0520a37cfd\") " pod="openstack/dnsmasq-dns-698758b865-mxpxk"
Oct 13 17:38:59 crc kubenswrapper[4720]: I1013 17:38:59.619726 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2442614f-edaa-4e64-9ed1-fc0520a37cfd-dns-svc\") pod \"dnsmasq-dns-698758b865-mxpxk\" (UID: \"2442614f-edaa-4e64-9ed1-fc0520a37cfd\") " pod="openstack/dnsmasq-dns-698758b865-mxpxk"
Oct 13 17:38:59 crc kubenswrapper[4720]: I1013 17:38:59.619797 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpqdg\" (UniqueName: \"kubernetes.io/projected/2442614f-edaa-4e64-9ed1-fc0520a37cfd-kube-api-access-vpqdg\") pod \"dnsmasq-dns-698758b865-mxpxk\" (UID: \"2442614f-edaa-4e64-9ed1-fc0520a37cfd\") " pod="openstack/dnsmasq-dns-698758b865-mxpxk"
Oct 13 17:38:59 crc kubenswrapper[4720]: I1013 17:38:59.619816 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2442614f-edaa-4e64-9ed1-fc0520a37cfd-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-mxpxk\" (UID: \"2442614f-edaa-4e64-9ed1-fc0520a37cfd\") " pod="openstack/dnsmasq-dns-698758b865-mxpxk"
Oct 13 17:38:59 crc kubenswrapper[4720]: I1013 17:38:59.619866 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for
volume \"config\" (UniqueName: \"kubernetes.io/configmap/2442614f-edaa-4e64-9ed1-fc0520a37cfd-config\") pod \"dnsmasq-dns-698758b865-mxpxk\" (UID: \"2442614f-edaa-4e64-9ed1-fc0520a37cfd\") " pod="openstack/dnsmasq-dns-698758b865-mxpxk" Oct 13 17:38:59 crc kubenswrapper[4720]: I1013 17:38:59.721646 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2442614f-edaa-4e64-9ed1-fc0520a37cfd-config\") pod \"dnsmasq-dns-698758b865-mxpxk\" (UID: \"2442614f-edaa-4e64-9ed1-fc0520a37cfd\") " pod="openstack/dnsmasq-dns-698758b865-mxpxk" Oct 13 17:38:59 crc kubenswrapper[4720]: I1013 17:38:59.721733 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2442614f-edaa-4e64-9ed1-fc0520a37cfd-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-mxpxk\" (UID: \"2442614f-edaa-4e64-9ed1-fc0520a37cfd\") " pod="openstack/dnsmasq-dns-698758b865-mxpxk" Oct 13 17:38:59 crc kubenswrapper[4720]: I1013 17:38:59.721861 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2442614f-edaa-4e64-9ed1-fc0520a37cfd-dns-svc\") pod \"dnsmasq-dns-698758b865-mxpxk\" (UID: \"2442614f-edaa-4e64-9ed1-fc0520a37cfd\") " pod="openstack/dnsmasq-dns-698758b865-mxpxk" Oct 13 17:38:59 crc kubenswrapper[4720]: I1013 17:38:59.722065 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpqdg\" (UniqueName: \"kubernetes.io/projected/2442614f-edaa-4e64-9ed1-fc0520a37cfd-kube-api-access-vpqdg\") pod \"dnsmasq-dns-698758b865-mxpxk\" (UID: \"2442614f-edaa-4e64-9ed1-fc0520a37cfd\") " pod="openstack/dnsmasq-dns-698758b865-mxpxk" Oct 13 17:38:59 crc kubenswrapper[4720]: I1013 17:38:59.722093 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2442614f-edaa-4e64-9ed1-fc0520a37cfd-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-mxpxk\" (UID: \"2442614f-edaa-4e64-9ed1-fc0520a37cfd\") " pod="openstack/dnsmasq-dns-698758b865-mxpxk" Oct 13 17:38:59 crc kubenswrapper[4720]: I1013 17:38:59.722641 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2442614f-edaa-4e64-9ed1-fc0520a37cfd-config\") pod \"dnsmasq-dns-698758b865-mxpxk\" (UID: \"2442614f-edaa-4e64-9ed1-fc0520a37cfd\") " pod="openstack/dnsmasq-dns-698758b865-mxpxk" Oct 13 17:38:59 crc kubenswrapper[4720]: I1013 17:38:59.722766 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2442614f-edaa-4e64-9ed1-fc0520a37cfd-dns-svc\") pod \"dnsmasq-dns-698758b865-mxpxk\" (UID: \"2442614f-edaa-4e64-9ed1-fc0520a37cfd\") " pod="openstack/dnsmasq-dns-698758b865-mxpxk" Oct 13 17:38:59 crc kubenswrapper[4720]: I1013 17:38:59.722907 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2442614f-edaa-4e64-9ed1-fc0520a37cfd-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-mxpxk\" (UID: \"2442614f-edaa-4e64-9ed1-fc0520a37cfd\") " pod="openstack/dnsmasq-dns-698758b865-mxpxk" Oct 13 17:38:59 crc kubenswrapper[4720]: I1013 17:38:59.723463 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2442614f-edaa-4e64-9ed1-fc0520a37cfd-ovsdbserver-sb\") pod 
\"dnsmasq-dns-698758b865-mxpxk\" (UID: \"2442614f-edaa-4e64-9ed1-fc0520a37cfd\") " pod="openstack/dnsmasq-dns-698758b865-mxpxk" Oct 13 17:38:59 crc kubenswrapper[4720]: I1013 17:38:59.741465 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpqdg\" (UniqueName: \"kubernetes.io/projected/2442614f-edaa-4e64-9ed1-fc0520a37cfd-kube-api-access-vpqdg\") pod \"dnsmasq-dns-698758b865-mxpxk\" (UID: \"2442614f-edaa-4e64-9ed1-fc0520a37cfd\") " pod="openstack/dnsmasq-dns-698758b865-mxpxk" Oct 13 17:38:59 crc kubenswrapper[4720]: I1013 17:38:59.789196 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-mxpxk" Oct 13 17:39:00 crc kubenswrapper[4720]: I1013 17:39:00.053689 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-mxpxk"] Oct 13 17:39:00 crc kubenswrapper[4720]: I1013 17:39:00.248407 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-mxpxk" event={"ID":"2442614f-edaa-4e64-9ed1-fc0520a37cfd","Type":"ContainerStarted","Data":"47207972261f580f6bca845fb53af4cb5119191cde8cf7eeb75de240ab4aff29"} Oct 13 17:39:00 crc kubenswrapper[4720]: I1013 17:39:00.248897 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-rrkt8" podUID="e9fb0805-93ca-4cd8-9ab3-108b2a4e94e9" containerName="dnsmasq-dns" containerID="cri-o://85bd13e9da984d92d9214f3e3727410d31776fe8ad949845e111b07840a1c422" gracePeriod=10 Oct 13 17:39:00 crc kubenswrapper[4720]: I1013 17:39:00.559509 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Oct 13 17:39:00 crc kubenswrapper[4720]: I1013 17:39:00.567808 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Oct 13 17:39:00 crc kubenswrapper[4720]: I1013 17:39:00.596938 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Oct 13 17:39:00 crc kubenswrapper[4720]: I1013 17:39:00.597141 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Oct 13 17:39:00 crc kubenswrapper[4720]: I1013 17:39:00.597332 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-chlt8" Oct 13 17:39:00 crc kubenswrapper[4720]: I1013 17:39:00.597495 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Oct 13 17:39:00 crc kubenswrapper[4720]: I1013 17:39:00.602709 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Oct 13 17:39:00 crc kubenswrapper[4720]: I1013 17:39:00.686509 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-rrkt8" Oct 13 17:39:00 crc kubenswrapper[4720]: I1013 17:39:00.740201 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/2ffe28a4-ed4a-44c6-b982-501575dd907d-cache\") pod \"swift-storage-0\" (UID: \"2ffe28a4-ed4a-44c6-b982-501575dd907d\") " pod="openstack/swift-storage-0" Oct 13 17:39:00 crc kubenswrapper[4720]: I1013 17:39:00.740333 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/2ffe28a4-ed4a-44c6-b982-501575dd907d-lock\") pod \"swift-storage-0\" (UID: \"2ffe28a4-ed4a-44c6-b982-501575dd907d\") " pod="openstack/swift-storage-0" Oct 13 17:39:00 crc kubenswrapper[4720]: I1013 17:39:00.740388 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2ffe28a4-ed4a-44c6-b982-501575dd907d-etc-swift\") pod \"swift-storage-0\" (UID: \"2ffe28a4-ed4a-44c6-b982-501575dd907d\") " pod="openstack/swift-storage-0" Oct 13 17:39:00 crc kubenswrapper[4720]: I1013 17:39:00.740423 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"2ffe28a4-ed4a-44c6-b982-501575dd907d\") " pod="openstack/swift-storage-0" Oct 13 17:39:00 crc kubenswrapper[4720]: I1013 17:39:00.740459 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctlkd\" (UniqueName: \"kubernetes.io/projected/2ffe28a4-ed4a-44c6-b982-501575dd907d-kube-api-access-ctlkd\") pod \"swift-storage-0\" (UID: \"2ffe28a4-ed4a-44c6-b982-501575dd907d\") " pod="openstack/swift-storage-0" Oct 13 17:39:00 crc kubenswrapper[4720]: I1013 17:39:00.842108 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rbv8\" (UniqueName: \"kubernetes.io/projected/e9fb0805-93ca-4cd8-9ab3-108b2a4e94e9-kube-api-access-6rbv8\") pod \"e9fb0805-93ca-4cd8-9ab3-108b2a4e94e9\" (UID: \"e9fb0805-93ca-4cd8-9ab3-108b2a4e94e9\") " Oct 13 17:39:00 crc kubenswrapper[4720]: I1013 17:39:00.842151 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9fb0805-93ca-4cd8-9ab3-108b2a4e94e9-config\") pod \"e9fb0805-93ca-4cd8-9ab3-108b2a4e94e9\" (UID: \"e9fb0805-93ca-4cd8-9ab3-108b2a4e94e9\") " Oct 13 17:39:00 crc kubenswrapper[4720]: I1013 17:39:00.842314 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e9fb0805-93ca-4cd8-9ab3-108b2a4e94e9-ovsdbserver-nb\") pod \"e9fb0805-93ca-4cd8-9ab3-108b2a4e94e9\" (UID: \"e9fb0805-93ca-4cd8-9ab3-108b2a4e94e9\") " Oct 13 17:39:00 crc kubenswrapper[4720]: I1013 17:39:00.842404 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e9fb0805-93ca-4cd8-9ab3-108b2a4e94e9-ovsdbserver-sb\") pod \"e9fb0805-93ca-4cd8-9ab3-108b2a4e94e9\" (UID: \"e9fb0805-93ca-4cd8-9ab3-108b2a4e94e9\") " Oct 13 17:39:00 crc kubenswrapper[4720]: I1013 17:39:00.842431 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/e9fb0805-93ca-4cd8-9ab3-108b2a4e94e9-dns-svc\") pod \"e9fb0805-93ca-4cd8-9ab3-108b2a4e94e9\" (UID: \"e9fb0805-93ca-4cd8-9ab3-108b2a4e94e9\") " Oct 13 17:39:00 crc kubenswrapper[4720]: I1013 17:39:00.842606 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"2ffe28a4-ed4a-44c6-b982-501575dd907d\") " pod="openstack/swift-storage-0" Oct 13 17:39:00 crc kubenswrapper[4720]: I1013 17:39:00.842639 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctlkd\" (UniqueName: \"kubernetes.io/projected/2ffe28a4-ed4a-44c6-b982-501575dd907d-kube-api-access-ctlkd\") pod \"swift-storage-0\" (UID: \"2ffe28a4-ed4a-44c6-b982-501575dd907d\") " pod="openstack/swift-storage-0" Oct 13 17:39:00 crc kubenswrapper[4720]: I1013 17:39:00.842669 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/2ffe28a4-ed4a-44c6-b982-501575dd907d-cache\") pod \"swift-storage-0\" (UID: \"2ffe28a4-ed4a-44c6-b982-501575dd907d\") " pod="openstack/swift-storage-0" Oct 13 17:39:00 crc kubenswrapper[4720]: I1013 17:39:00.842754 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/2ffe28a4-ed4a-44c6-b982-501575dd907d-lock\") pod \"swift-storage-0\" (UID: \"2ffe28a4-ed4a-44c6-b982-501575dd907d\") " pod="openstack/swift-storage-0" Oct 13 17:39:00 crc kubenswrapper[4720]: I1013 17:39:00.842792 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2ffe28a4-ed4a-44c6-b982-501575dd907d-etc-swift\") pod \"swift-storage-0\" (UID: \"2ffe28a4-ed4a-44c6-b982-501575dd907d\") " pod="openstack/swift-storage-0" Oct 13 17:39:00 crc kubenswrapper[4720]: E1013 17:39:00.842917 4720 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Oct 13 17:39:00 crc kubenswrapper[4720]: E1013 17:39:00.842935 4720 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Oct 13 17:39:00 crc kubenswrapper[4720]: E1013 17:39:00.842976 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2ffe28a4-ed4a-44c6-b982-501575dd907d-etc-swift podName:2ffe28a4-ed4a-44c6-b982-501575dd907d nodeName:}" failed. No retries permitted until 2025-10-13 17:39:01.342960843 +0000 UTC m=+886.800210975 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/2ffe28a4-ed4a-44c6-b982-501575dd907d-etc-swift") pod "swift-storage-0" (UID: "2ffe28a4-ed4a-44c6-b982-501575dd907d") : configmap "swift-ring-files" not found Oct 13 17:39:00 crc kubenswrapper[4720]: I1013 17:39:00.843123 4720 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"2ffe28a4-ed4a-44c6-b982-501575dd907d\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/swift-storage-0" Oct 13 17:39:00 crc kubenswrapper[4720]: I1013 17:39:00.847246 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9fb0805-93ca-4cd8-9ab3-108b2a4e94e9-kube-api-access-6rbv8" (OuterVolumeSpecName: "kube-api-access-6rbv8") pod "e9fb0805-93ca-4cd8-9ab3-108b2a4e94e9" (UID: "e9fb0805-93ca-4cd8-9ab3-108b2a4e94e9"). InnerVolumeSpecName "kube-api-access-6rbv8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:39:00 crc kubenswrapper[4720]: I1013 17:39:00.848454 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/2ffe28a4-ed4a-44c6-b982-501575dd907d-cache\") pod \"swift-storage-0\" (UID: \"2ffe28a4-ed4a-44c6-b982-501575dd907d\") " pod="openstack/swift-storage-0" Oct 13 17:39:00 crc kubenswrapper[4720]: I1013 17:39:00.848775 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/2ffe28a4-ed4a-44c6-b982-501575dd907d-lock\") pod \"swift-storage-0\" (UID: \"2ffe28a4-ed4a-44c6-b982-501575dd907d\") " pod="openstack/swift-storage-0" Oct 13 17:39:00 crc kubenswrapper[4720]: I1013 17:39:00.869841 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctlkd\" (UniqueName: \"kubernetes.io/projected/2ffe28a4-ed4a-44c6-b982-501575dd907d-kube-api-access-ctlkd\") pod \"swift-storage-0\" (UID: \"2ffe28a4-ed4a-44c6-b982-501575dd907d\") " pod="openstack/swift-storage-0" Oct 13 17:39:00 crc kubenswrapper[4720]: I1013 17:39:00.871103 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"2ffe28a4-ed4a-44c6-b982-501575dd907d\") " pod="openstack/swift-storage-0" Oct 13 17:39:00 crc kubenswrapper[4720]: I1013 17:39:00.886714 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9fb0805-93ca-4cd8-9ab3-108b2a4e94e9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e9fb0805-93ca-4cd8-9ab3-108b2a4e94e9" (UID: "e9fb0805-93ca-4cd8-9ab3-108b2a4e94e9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:39:00 crc kubenswrapper[4720]: I1013 17:39:00.887823 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9fb0805-93ca-4cd8-9ab3-108b2a4e94e9-config" (OuterVolumeSpecName: "config") pod "e9fb0805-93ca-4cd8-9ab3-108b2a4e94e9" (UID: "e9fb0805-93ca-4cd8-9ab3-108b2a4e94e9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:39:00 crc kubenswrapper[4720]: I1013 17:39:00.894165 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9fb0805-93ca-4cd8-9ab3-108b2a4e94e9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e9fb0805-93ca-4cd8-9ab3-108b2a4e94e9" (UID: "e9fb0805-93ca-4cd8-9ab3-108b2a4e94e9"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:39:00 crc kubenswrapper[4720]: I1013 17:39:00.895270 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9fb0805-93ca-4cd8-9ab3-108b2a4e94e9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e9fb0805-93ca-4cd8-9ab3-108b2a4e94e9" (UID: "e9fb0805-93ca-4cd8-9ab3-108b2a4e94e9"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:39:00 crc kubenswrapper[4720]: I1013 17:39:00.944331 4720 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9fb0805-93ca-4cd8-9ab3-108b2a4e94e9-config\") on node \"crc\" DevicePath \"\"" Oct 13 17:39:00 crc kubenswrapper[4720]: I1013 17:39:00.944366 4720 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e9fb0805-93ca-4cd8-9ab3-108b2a4e94e9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 13 17:39:00 crc kubenswrapper[4720]: I1013 17:39:00.944377 4720 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e9fb0805-93ca-4cd8-9ab3-108b2a4e94e9-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 13 17:39:00 crc kubenswrapper[4720]: I1013 17:39:00.944389 4720 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e9fb0805-93ca-4cd8-9ab3-108b2a4e94e9-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 13 17:39:00 crc kubenswrapper[4720]: I1013 17:39:00.944400 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6rbv8\" (UniqueName: \"kubernetes.io/projected/e9fb0805-93ca-4cd8-9ab3-108b2a4e94e9-kube-api-access-6rbv8\") on node \"crc\" DevicePath \"\"" Oct 13 17:39:01 crc kubenswrapper[4720]: I1013 17:39:01.045442 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Oct 13 17:39:01 crc kubenswrapper[4720]: I1013 17:39:01.059363 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-phh7s"] Oct 13 17:39:01 crc kubenswrapper[4720]: E1013 17:39:01.059654 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9fb0805-93ca-4cd8-9ab3-108b2a4e94e9" containerName="dnsmasq-dns" Oct 13 17:39:01 crc kubenswrapper[4720]: I1013 17:39:01.059666 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9fb0805-93ca-4cd8-9ab3-108b2a4e94e9" containerName="dnsmasq-dns" Oct 13 17:39:01 crc kubenswrapper[4720]: E1013 17:39:01.059696 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9fb0805-93ca-4cd8-9ab3-108b2a4e94e9" containerName="init" Oct 13 17:39:01 crc kubenswrapper[4720]: I1013 17:39:01.059702 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9fb0805-93ca-4cd8-9ab3-108b2a4e94e9" containerName="init" Oct 13 17:39:01 crc kubenswrapper[4720]: I1013 17:39:01.059841 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9fb0805-93ca-4cd8-9ab3-108b2a4e94e9" 
containerName="dnsmasq-dns" Oct 13 17:39:01 crc kubenswrapper[4720]: I1013 17:39:01.060340 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-phh7s" Oct 13 17:39:01 crc kubenswrapper[4720]: I1013 17:39:01.063603 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Oct 13 17:39:01 crc kubenswrapper[4720]: I1013 17:39:01.063658 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Oct 13 17:39:01 crc kubenswrapper[4720]: I1013 17:39:01.063866 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Oct 13 17:39:01 crc kubenswrapper[4720]: I1013 17:39:01.142729 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Oct 13 17:39:01 crc kubenswrapper[4720]: I1013 17:39:01.148109 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxxwh\" (UniqueName: \"kubernetes.io/projected/b61235f4-5b35-4bb9-9787-e7fc0075d0c9-kube-api-access-fxxwh\") pod \"swift-ring-rebalance-phh7s\" (UID: \"b61235f4-5b35-4bb9-9787-e7fc0075d0c9\") " pod="openstack/swift-ring-rebalance-phh7s" Oct 13 17:39:01 crc kubenswrapper[4720]: I1013 17:39:01.148146 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b61235f4-5b35-4bb9-9787-e7fc0075d0c9-dispersionconf\") pod \"swift-ring-rebalance-phh7s\" (UID: \"b61235f4-5b35-4bb9-9787-e7fc0075d0c9\") " pod="openstack/swift-ring-rebalance-phh7s" Oct 13 17:39:01 crc kubenswrapper[4720]: I1013 17:39:01.148167 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b61235f4-5b35-4bb9-9787-e7fc0075d0c9-etc-swift\") pod \"swift-ring-rebalance-phh7s\" (UID: \"b61235f4-5b35-4bb9-9787-e7fc0075d0c9\") " pod="openstack/swift-ring-rebalance-phh7s" Oct 13 17:39:01 crc kubenswrapper[4720]: I1013 17:39:01.148182 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b61235f4-5b35-4bb9-9787-e7fc0075d0c9-ring-data-devices\") pod \"swift-ring-rebalance-phh7s\" (UID: \"b61235f4-5b35-4bb9-9787-e7fc0075d0c9\") " pod="openstack/swift-ring-rebalance-phh7s" Oct 13 17:39:01 crc kubenswrapper[4720]: I1013 17:39:01.148233 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b61235f4-5b35-4bb9-9787-e7fc0075d0c9-scripts\") pod \"swift-ring-rebalance-phh7s\" (UID: \"b61235f4-5b35-4bb9-9787-e7fc0075d0c9\") " pod="openstack/swift-ring-rebalance-phh7s" Oct 13 17:39:01 crc kubenswrapper[4720]: I1013 17:39:01.148329 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b61235f4-5b35-4bb9-9787-e7fc0075d0c9-combined-ca-bundle\") pod \"swift-ring-rebalance-phh7s\" (UID: \"b61235f4-5b35-4bb9-9787-e7fc0075d0c9\") " pod="openstack/swift-ring-rebalance-phh7s" Oct 13 17:39:01 crc kubenswrapper[4720]: I1013 17:39:01.148370 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: 
\"kubernetes.io/secret/b61235f4-5b35-4bb9-9787-e7fc0075d0c9-swiftconf\") pod \"swift-ring-rebalance-phh7s\" (UID: \"b61235f4-5b35-4bb9-9787-e7fc0075d0c9\") " pod="openstack/swift-ring-rebalance-phh7s" Oct 13 17:39:01 crc kubenswrapper[4720]: I1013 17:39:01.149664 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-phh7s"] Oct 13 17:39:01 crc kubenswrapper[4720]: E1013 17:39:01.150006 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-fxxwh ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/swift-ring-rebalance-phh7s" podUID="b61235f4-5b35-4bb9-9787-e7fc0075d0c9" Oct 13 17:39:01 crc kubenswrapper[4720]: I1013 17:39:01.166051 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-9s2xb"] Oct 13 17:39:01 crc kubenswrapper[4720]: I1013 17:39:01.168077 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-9s2xb" Oct 13 17:39:01 crc kubenswrapper[4720]: I1013 17:39:01.190532 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-9s2xb"] Oct 13 17:39:01 crc kubenswrapper[4720]: I1013 17:39:01.209922 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-phh7s"] Oct 13 17:39:01 crc kubenswrapper[4720]: I1013 17:39:01.249867 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/3f752de1-5826-4009-a77c-b9186d9811ea-swiftconf\") pod \"swift-ring-rebalance-9s2xb\" (UID: \"3f752de1-5826-4009-a77c-b9186d9811ea\") " pod="openstack/swift-ring-rebalance-9s2xb" Oct 13 17:39:01 crc kubenswrapper[4720]: I1013 17:39:01.249920 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/3f752de1-5826-4009-a77c-b9186d9811ea-etc-swift\") pod \"swift-ring-rebalance-9s2xb\" (UID: \"3f752de1-5826-4009-a77c-b9186d9811ea\") " pod="openstack/swift-ring-rebalance-9s2xb" Oct 13 17:39:01 crc kubenswrapper[4720]: I1013 17:39:01.249964 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssjhr\" (UniqueName: \"kubernetes.io/projected/3f752de1-5826-4009-a77c-b9186d9811ea-kube-api-access-ssjhr\") pod \"swift-ring-rebalance-9s2xb\" (UID: \"3f752de1-5826-4009-a77c-b9186d9811ea\") " pod="openstack/swift-ring-rebalance-9s2xb" Oct 13 17:39:01 crc kubenswrapper[4720]: I1013 17:39:01.250013 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3f752de1-5826-4009-a77c-b9186d9811ea-scripts\") pod \"swift-ring-rebalance-9s2xb\" (UID: \"3f752de1-5826-4009-a77c-b9186d9811ea\") " pod="openstack/swift-ring-rebalance-9s2xb" Oct 13 17:39:01 crc kubenswrapper[4720]: I1013 17:39:01.250033 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b61235f4-5b35-4bb9-9787-e7fc0075d0c9-combined-ca-bundle\") pod \"swift-ring-rebalance-phh7s\" (UID: \"b61235f4-5b35-4bb9-9787-e7fc0075d0c9\") " pod="openstack/swift-ring-rebalance-phh7s" Oct 13 17:39:01 crc kubenswrapper[4720]: I1013 17:39:01.250091 4720 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/3f752de1-5826-4009-a77c-b9186d9811ea-dispersionconf\") pod \"swift-ring-rebalance-9s2xb\" (UID: \"3f752de1-5826-4009-a77c-b9186d9811ea\") " pod="openstack/swift-ring-rebalance-9s2xb" Oct 13 17:39:01 crc kubenswrapper[4720]: I1013 17:39:01.250109 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b61235f4-5b35-4bb9-9787-e7fc0075d0c9-swiftconf\") pod \"swift-ring-rebalance-phh7s\" (UID: \"b61235f4-5b35-4bb9-9787-e7fc0075d0c9\") " pod="openstack/swift-ring-rebalance-phh7s" Oct 13 17:39:01 crc kubenswrapper[4720]: I1013 17:39:01.250138 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f752de1-5826-4009-a77c-b9186d9811ea-combined-ca-bundle\") pod \"swift-ring-rebalance-9s2xb\" (UID: \"3f752de1-5826-4009-a77c-b9186d9811ea\") " pod="openstack/swift-ring-rebalance-9s2xb" Oct 13 17:39:01 crc kubenswrapper[4720]: I1013 17:39:01.250174 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxxwh\" (UniqueName: \"kubernetes.io/projected/b61235f4-5b35-4bb9-9787-e7fc0075d0c9-kube-api-access-fxxwh\") pod \"swift-ring-rebalance-phh7s\" (UID: \"b61235f4-5b35-4bb9-9787-e7fc0075d0c9\") " pod="openstack/swift-ring-rebalance-phh7s" Oct 13 17:39:01 crc kubenswrapper[4720]: I1013 17:39:01.250287 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b61235f4-5b35-4bb9-9787-e7fc0075d0c9-dispersionconf\") pod \"swift-ring-rebalance-phh7s\" (UID: \"b61235f4-5b35-4bb9-9787-e7fc0075d0c9\") " pod="openstack/swift-ring-rebalance-phh7s" Oct 13 17:39:01 crc kubenswrapper[4720]: I1013 17:39:01.250326 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b61235f4-5b35-4bb9-9787-e7fc0075d0c9-etc-swift\") pod \"swift-ring-rebalance-phh7s\" (UID: \"b61235f4-5b35-4bb9-9787-e7fc0075d0c9\") " pod="openstack/swift-ring-rebalance-phh7s" Oct 13 17:39:01 crc kubenswrapper[4720]: I1013 17:39:01.250354 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b61235f4-5b35-4bb9-9787-e7fc0075d0c9-ring-data-devices\") pod \"swift-ring-rebalance-phh7s\" (UID: \"b61235f4-5b35-4bb9-9787-e7fc0075d0c9\") " pod="openstack/swift-ring-rebalance-phh7s" Oct 13 17:39:01 crc kubenswrapper[4720]: I1013 17:39:01.250384 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/3f752de1-5826-4009-a77c-b9186d9811ea-ring-data-devices\") pod \"swift-ring-rebalance-9s2xb\" (UID: \"3f752de1-5826-4009-a77c-b9186d9811ea\") " pod="openstack/swift-ring-rebalance-9s2xb" Oct 13 17:39:01 crc kubenswrapper[4720]: I1013 17:39:01.250399 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b61235f4-5b35-4bb9-9787-e7fc0075d0c9-scripts\") pod \"swift-ring-rebalance-phh7s\" (UID: \"b61235f4-5b35-4bb9-9787-e7fc0075d0c9\") " pod="openstack/swift-ring-rebalance-phh7s" Oct 13 17:39:01 crc kubenswrapper[4720]: I1013 17:39:01.253828 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b61235f4-5b35-4bb9-9787-e7fc0075d0c9-etc-swift\") pod \"swift-ring-rebalance-phh7s\" (UID: \"b61235f4-5b35-4bb9-9787-e7fc0075d0c9\") " pod="openstack/swift-ring-rebalance-phh7s" Oct 13 17:39:01 crc kubenswrapper[4720]: I1013 17:39:01.254332 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b61235f4-5b35-4bb9-9787-e7fc0075d0c9-ring-data-devices\") pod \"swift-ring-rebalance-phh7s\" (UID: \"b61235f4-5b35-4bb9-9787-e7fc0075d0c9\") " pod="openstack/swift-ring-rebalance-phh7s" Oct 13 17:39:01 crc kubenswrapper[4720]: I1013 17:39:01.256104 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b61235f4-5b35-4bb9-9787-e7fc0075d0c9-scripts\") pod \"swift-ring-rebalance-phh7s\" (UID: \"b61235f4-5b35-4bb9-9787-e7fc0075d0c9\") " pod="openstack/swift-ring-rebalance-phh7s" Oct 13 17:39:01 crc kubenswrapper[4720]: I1013 17:39:01.258588 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b61235f4-5b35-4bb9-9787-e7fc0075d0c9-swiftconf\") pod \"swift-ring-rebalance-phh7s\" (UID: \"b61235f4-5b35-4bb9-9787-e7fc0075d0c9\") " pod="openstack/swift-ring-rebalance-phh7s" Oct 13 17:39:01 crc kubenswrapper[4720]: I1013 17:39:01.258745 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b61235f4-5b35-4bb9-9787-e7fc0075d0c9-dispersionconf\") pod \"swift-ring-rebalance-phh7s\" (UID: \"b61235f4-5b35-4bb9-9787-e7fc0075d0c9\") " pod="openstack/swift-ring-rebalance-phh7s" Oct 13 17:39:01 crc kubenswrapper[4720]: I1013 17:39:01.259051 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b61235f4-5b35-4bb9-9787-e7fc0075d0c9-combined-ca-bundle\") pod \"swift-ring-rebalance-phh7s\" (UID: \"b61235f4-5b35-4bb9-9787-e7fc0075d0c9\") " pod="openstack/swift-ring-rebalance-phh7s" Oct 13 17:39:01 crc kubenswrapper[4720]: I1013 17:39:01.263420 4720 generic.go:334] "Generic (PLEG): container finished" podID="2442614f-edaa-4e64-9ed1-fc0520a37cfd" containerID="a0081c27e8b5e7977626839730ef4b9b12fcc078d3394fcff9e65da9e0702972" exitCode=0 Oct 13 17:39:01 crc kubenswrapper[4720]: I1013 17:39:01.263489 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-mxpxk" event={"ID":"2442614f-edaa-4e64-9ed1-fc0520a37cfd","Type":"ContainerDied","Data":"a0081c27e8b5e7977626839730ef4b9b12fcc078d3394fcff9e65da9e0702972"} Oct 13 17:39:01 crc kubenswrapper[4720]: I1013 17:39:01.267619 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxxwh\" (UniqueName: \"kubernetes.io/projected/b61235f4-5b35-4bb9-9787-e7fc0075d0c9-kube-api-access-fxxwh\") pod \"swift-ring-rebalance-phh7s\" (UID: \"b61235f4-5b35-4bb9-9787-e7fc0075d0c9\") " pod="openstack/swift-ring-rebalance-phh7s" Oct 13 17:39:01 crc kubenswrapper[4720]: I1013 17:39:01.267847 4720 generic.go:334] "Generic (PLEG): container finished" podID="e9fb0805-93ca-4cd8-9ab3-108b2a4e94e9" containerID="85bd13e9da984d92d9214f3e3727410d31776fe8ad949845e111b07840a1c422" exitCode=0 Oct 13 17:39:01 crc kubenswrapper[4720]: I1013 17:39:01.267948 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-rrkt8" 
event={"ID":"e9fb0805-93ca-4cd8-9ab3-108b2a4e94e9","Type":"ContainerDied","Data":"85bd13e9da984d92d9214f3e3727410d31776fe8ad949845e111b07840a1c422"} Oct 13 17:39:01 crc kubenswrapper[4720]: I1013 17:39:01.267980 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-rrkt8" event={"ID":"e9fb0805-93ca-4cd8-9ab3-108b2a4e94e9","Type":"ContainerDied","Data":"bbb7763f1f044bc16127c91e93bf47795e6088fa578359e2e221dced43654207"} Oct 13 17:39:01 crc kubenswrapper[4720]: I1013 17:39:01.267996 4720 scope.go:117] "RemoveContainer" containerID="85bd13e9da984d92d9214f3e3727410d31776fe8ad949845e111b07840a1c422" Oct 13 17:39:01 crc kubenswrapper[4720]: I1013 17:39:01.268234 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-phh7s" Oct 13 17:39:01 crc kubenswrapper[4720]: I1013 17:39:01.268798 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-rrkt8" Oct 13 17:39:01 crc kubenswrapper[4720]: I1013 17:39:01.352364 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/3f752de1-5826-4009-a77c-b9186d9811ea-ring-data-devices\") pod \"swift-ring-rebalance-9s2xb\" (UID: \"3f752de1-5826-4009-a77c-b9186d9811ea\") " pod="openstack/swift-ring-rebalance-9s2xb" Oct 13 17:39:01 crc kubenswrapper[4720]: I1013 17:39:01.352425 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/3f752de1-5826-4009-a77c-b9186d9811ea-swiftconf\") pod \"swift-ring-rebalance-9s2xb\" (UID: \"3f752de1-5826-4009-a77c-b9186d9811ea\") " pod="openstack/swift-ring-rebalance-9s2xb" Oct 13 17:39:01 crc kubenswrapper[4720]: I1013 17:39:01.352453 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/3f752de1-5826-4009-a77c-b9186d9811ea-etc-swift\") pod \"swift-ring-rebalance-9s2xb\" (UID: \"3f752de1-5826-4009-a77c-b9186d9811ea\") " pod="openstack/swift-ring-rebalance-9s2xb" Oct 13 17:39:01 crc kubenswrapper[4720]: I1013 17:39:01.352500 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssjhr\" (UniqueName: \"kubernetes.io/projected/3f752de1-5826-4009-a77c-b9186d9811ea-kube-api-access-ssjhr\") pod \"swift-ring-rebalance-9s2xb\" (UID: \"3f752de1-5826-4009-a77c-b9186d9811ea\") " pod="openstack/swift-ring-rebalance-9s2xb" Oct 13 17:39:01 crc kubenswrapper[4720]: I1013 17:39:01.352579 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2ffe28a4-ed4a-44c6-b982-501575dd907d-etc-swift\") pod \"swift-storage-0\" (UID: \"2ffe28a4-ed4a-44c6-b982-501575dd907d\") " pod="openstack/swift-storage-0" Oct 13 17:39:01 crc kubenswrapper[4720]: I1013 17:39:01.352607 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3f752de1-5826-4009-a77c-b9186d9811ea-scripts\") pod \"swift-ring-rebalance-9s2xb\" (UID: \"3f752de1-5826-4009-a77c-b9186d9811ea\") " pod="openstack/swift-ring-rebalance-9s2xb" Oct 13 17:39:01 crc kubenswrapper[4720]: I1013 17:39:01.352658 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/3f752de1-5826-4009-a77c-b9186d9811ea-dispersionconf\") pod 
\"swift-ring-rebalance-9s2xb\" (UID: \"3f752de1-5826-4009-a77c-b9186d9811ea\") " pod="openstack/swift-ring-rebalance-9s2xb" Oct 13 17:39:01 crc kubenswrapper[4720]: I1013 17:39:01.352716 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f752de1-5826-4009-a77c-b9186d9811ea-combined-ca-bundle\") pod \"swift-ring-rebalance-9s2xb\" (UID: \"3f752de1-5826-4009-a77c-b9186d9811ea\") " pod="openstack/swift-ring-rebalance-9s2xb" Oct 13 17:39:01 crc kubenswrapper[4720]: I1013 17:39:01.354147 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/3f752de1-5826-4009-a77c-b9186d9811ea-ring-data-devices\") pod \"swift-ring-rebalance-9s2xb\" (UID: \"3f752de1-5826-4009-a77c-b9186d9811ea\") " pod="openstack/swift-ring-rebalance-9s2xb" Oct 13 17:39:01 crc kubenswrapper[4720]: I1013 17:39:01.354730 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/3f752de1-5826-4009-a77c-b9186d9811ea-etc-swift\") pod \"swift-ring-rebalance-9s2xb\" (UID: \"3f752de1-5826-4009-a77c-b9186d9811ea\") " pod="openstack/swift-ring-rebalance-9s2xb" Oct 13 17:39:01 crc kubenswrapper[4720]: E1013 17:39:01.354944 4720 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Oct 13 17:39:01 crc kubenswrapper[4720]: E1013 17:39:01.354976 4720 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Oct 13 17:39:01 crc kubenswrapper[4720]: E1013 17:39:01.355023 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2ffe28a4-ed4a-44c6-b982-501575dd907d-etc-swift podName:2ffe28a4-ed4a-44c6-b982-501575dd907d nodeName:}" failed. No retries permitted until 2025-10-13 17:39:02.355005307 +0000 UTC m=+887.812255439 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/2ffe28a4-ed4a-44c6-b982-501575dd907d-etc-swift") pod "swift-storage-0" (UID: "2ffe28a4-ed4a-44c6-b982-501575dd907d") : configmap "swift-ring-files" not found Oct 13 17:39:01 crc kubenswrapper[4720]: I1013 17:39:01.355644 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3f752de1-5826-4009-a77c-b9186d9811ea-scripts\") pod \"swift-ring-rebalance-9s2xb\" (UID: \"3f752de1-5826-4009-a77c-b9186d9811ea\") " pod="openstack/swift-ring-rebalance-9s2xb" Oct 13 17:39:01 crc kubenswrapper[4720]: I1013 17:39:01.359934 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f752de1-5826-4009-a77c-b9186d9811ea-combined-ca-bundle\") pod \"swift-ring-rebalance-9s2xb\" (UID: \"3f752de1-5826-4009-a77c-b9186d9811ea\") " pod="openstack/swift-ring-rebalance-9s2xb" Oct 13 17:39:01 crc kubenswrapper[4720]: I1013 17:39:01.360055 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/3f752de1-5826-4009-a77c-b9186d9811ea-swiftconf\") pod \"swift-ring-rebalance-9s2xb\" (UID: \"3f752de1-5826-4009-a77c-b9186d9811ea\") " pod="openstack/swift-ring-rebalance-9s2xb" Oct 13 17:39:01 crc kubenswrapper[4720]: I1013 17:39:01.364311 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/3f752de1-5826-4009-a77c-b9186d9811ea-dispersionconf\") pod \"swift-ring-rebalance-9s2xb\" (UID: \"3f752de1-5826-4009-a77c-b9186d9811ea\") " pod="openstack/swift-ring-rebalance-9s2xb" Oct 13 17:39:01 crc kubenswrapper[4720]: I1013 17:39:01.372224 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssjhr\" (UniqueName: \"kubernetes.io/projected/3f752de1-5826-4009-a77c-b9186d9811ea-kube-api-access-ssjhr\") pod \"swift-ring-rebalance-9s2xb\" (UID: \"3f752de1-5826-4009-a77c-b9186d9811ea\") " pod="openstack/swift-ring-rebalance-9s2xb" Oct 13 17:39:01 crc kubenswrapper[4720]: I1013 17:39:01.393108 4720 scope.go:117] "RemoveContainer" containerID="4d4c6a239fbfd52b3ab03278acba83cffb387c957cf97df5e5861934491613b3" Oct 13 17:39:01 crc kubenswrapper[4720]: I1013 17:39:01.402988 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-phh7s" Oct 13 17:39:01 crc kubenswrapper[4720]: I1013 17:39:01.420801 4720 scope.go:117] "RemoveContainer" containerID="85bd13e9da984d92d9214f3e3727410d31776fe8ad949845e111b07840a1c422" Oct 13 17:39:01 crc kubenswrapper[4720]: E1013 17:39:01.421230 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85bd13e9da984d92d9214f3e3727410d31776fe8ad949845e111b07840a1c422\": container with ID starting with 85bd13e9da984d92d9214f3e3727410d31776fe8ad949845e111b07840a1c422 not found: ID does not exist" containerID="85bd13e9da984d92d9214f3e3727410d31776fe8ad949845e111b07840a1c422" Oct 13 17:39:01 crc kubenswrapper[4720]: I1013 17:39:01.421262 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85bd13e9da984d92d9214f3e3727410d31776fe8ad949845e111b07840a1c422"} err="failed to get container status \"85bd13e9da984d92d9214f3e3727410d31776fe8ad949845e111b07840a1c422\": rpc error: code = NotFound desc = could not find container \"85bd13e9da984d92d9214f3e3727410d31776fe8ad949845e111b07840a1c422\": container with ID starting with 85bd13e9da984d92d9214f3e3727410d31776fe8ad949845e111b07840a1c422 not found: ID does not exist" Oct 13 17:39:01 crc kubenswrapper[4720]: I1013 17:39:01.421289 4720 scope.go:117] "RemoveContainer" containerID="4d4c6a239fbfd52b3ab03278acba83cffb387c957cf97df5e5861934491613b3" Oct 13 17:39:01 crc kubenswrapper[4720]: E1013 17:39:01.421574 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d4c6a239fbfd52b3ab03278acba83cffb387c957cf97df5e5861934491613b3\": container with ID starting with 4d4c6a239fbfd52b3ab03278acba83cffb387c957cf97df5e5861934491613b3 not found: ID does not exist" containerID="4d4c6a239fbfd52b3ab03278acba83cffb387c957cf97df5e5861934491613b3" Oct 13 17:39:01 crc kubenswrapper[4720]: I1013 17:39:01.421598 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d4c6a239fbfd52b3ab03278acba83cffb387c957cf97df5e5861934491613b3"} err="failed to get container status \"4d4c6a239fbfd52b3ab03278acba83cffb387c957cf97df5e5861934491613b3\": rpc error: code = NotFound desc = could not find container \"4d4c6a239fbfd52b3ab03278acba83cffb387c957cf97df5e5861934491613b3\": container with ID starting with 4d4c6a239fbfd52b3ab03278acba83cffb387c957cf97df5e5861934491613b3 not found: ID does not exist" Oct 13 17:39:01 crc kubenswrapper[4720]: I1013 17:39:01.422355 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-rrkt8"] Oct 13 17:39:01 crc kubenswrapper[4720]: I1013 17:39:01.428310 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-rrkt8"] Oct 13 17:39:01 crc kubenswrapper[4720]: I1013 17:39:01.454109 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxxwh\" (UniqueName: \"kubernetes.io/projected/b61235f4-5b35-4bb9-9787-e7fc0075d0c9-kube-api-access-fxxwh\") pod \"b61235f4-5b35-4bb9-9787-e7fc0075d0c9\" (UID: \"b61235f4-5b35-4bb9-9787-e7fc0075d0c9\") " Oct 13 17:39:01 crc kubenswrapper[4720]: I1013 17:39:01.454171 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b61235f4-5b35-4bb9-9787-e7fc0075d0c9-etc-swift\") pod \"b61235f4-5b35-4bb9-9787-e7fc0075d0c9\" (UID: 
\"b61235f4-5b35-4bb9-9787-e7fc0075d0c9\") " Oct 13 17:39:01 crc kubenswrapper[4720]: I1013 17:39:01.454329 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b61235f4-5b35-4bb9-9787-e7fc0075d0c9-dispersionconf\") pod \"b61235f4-5b35-4bb9-9787-e7fc0075d0c9\" (UID: \"b61235f4-5b35-4bb9-9787-e7fc0075d0c9\") " Oct 13 17:39:01 crc kubenswrapper[4720]: I1013 17:39:01.454363 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b61235f4-5b35-4bb9-9787-e7fc0075d0c9-ring-data-devices\") pod \"b61235f4-5b35-4bb9-9787-e7fc0075d0c9\" (UID: \"b61235f4-5b35-4bb9-9787-e7fc0075d0c9\") " Oct 13 17:39:01 crc kubenswrapper[4720]: I1013 17:39:01.454386 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b61235f4-5b35-4bb9-9787-e7fc0075d0c9-combined-ca-bundle\") pod \"b61235f4-5b35-4bb9-9787-e7fc0075d0c9\" (UID: \"b61235f4-5b35-4bb9-9787-e7fc0075d0c9\") " Oct 13 17:39:01 crc kubenswrapper[4720]: I1013 17:39:01.454460 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b61235f4-5b35-4bb9-9787-e7fc0075d0c9-scripts\") pod \"b61235f4-5b35-4bb9-9787-e7fc0075d0c9\" (UID: \"b61235f4-5b35-4bb9-9787-e7fc0075d0c9\") " Oct 13 17:39:01 crc kubenswrapper[4720]: I1013 17:39:01.454526 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b61235f4-5b35-4bb9-9787-e7fc0075d0c9-swiftconf\") pod \"b61235f4-5b35-4bb9-9787-e7fc0075d0c9\" (UID: \"b61235f4-5b35-4bb9-9787-e7fc0075d0c9\") " Oct 13 17:39:01 crc kubenswrapper[4720]: I1013 17:39:01.454570 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b61235f4-5b35-4bb9-9787-e7fc0075d0c9-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "b61235f4-5b35-4bb9-9787-e7fc0075d0c9" (UID: "b61235f4-5b35-4bb9-9787-e7fc0075d0c9"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 17:39:01 crc kubenswrapper[4720]: I1013 17:39:01.454825 4720 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b61235f4-5b35-4bb9-9787-e7fc0075d0c9-etc-swift\") on node \"crc\" DevicePath \"\"" Oct 13 17:39:01 crc kubenswrapper[4720]: I1013 17:39:01.454905 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b61235f4-5b35-4bb9-9787-e7fc0075d0c9-scripts" (OuterVolumeSpecName: "scripts") pod "b61235f4-5b35-4bb9-9787-e7fc0075d0c9" (UID: "b61235f4-5b35-4bb9-9787-e7fc0075d0c9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:39:01 crc kubenswrapper[4720]: I1013 17:39:01.455638 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b61235f4-5b35-4bb9-9787-e7fc0075d0c9-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "b61235f4-5b35-4bb9-9787-e7fc0075d0c9" (UID: "b61235f4-5b35-4bb9-9787-e7fc0075d0c9"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:39:01 crc kubenswrapper[4720]: I1013 17:39:01.459037 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b61235f4-5b35-4bb9-9787-e7fc0075d0c9-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "b61235f4-5b35-4bb9-9787-e7fc0075d0c9" (UID: "b61235f4-5b35-4bb9-9787-e7fc0075d0c9"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:39:01 crc kubenswrapper[4720]: I1013 17:39:01.459390 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b61235f4-5b35-4bb9-9787-e7fc0075d0c9-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "b61235f4-5b35-4bb9-9787-e7fc0075d0c9" (UID: "b61235f4-5b35-4bb9-9787-e7fc0075d0c9"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:39:01 crc kubenswrapper[4720]: I1013 17:39:01.460411 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b61235f4-5b35-4bb9-9787-e7fc0075d0c9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b61235f4-5b35-4bb9-9787-e7fc0075d0c9" (UID: "b61235f4-5b35-4bb9-9787-e7fc0075d0c9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:39:01 crc kubenswrapper[4720]: I1013 17:39:01.460492 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b61235f4-5b35-4bb9-9787-e7fc0075d0c9-kube-api-access-fxxwh" (OuterVolumeSpecName: "kube-api-access-fxxwh") pod "b61235f4-5b35-4bb9-9787-e7fc0075d0c9" (UID: "b61235f4-5b35-4bb9-9787-e7fc0075d0c9"). InnerVolumeSpecName "kube-api-access-fxxwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:39:01 crc kubenswrapper[4720]: I1013 17:39:01.503023 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-9s2xb" Oct 13 17:39:01 crc kubenswrapper[4720]: I1013 17:39:01.556416 4720 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b61235f4-5b35-4bb9-9787-e7fc0075d0c9-dispersionconf\") on node \"crc\" DevicePath \"\"" Oct 13 17:39:01 crc kubenswrapper[4720]: I1013 17:39:01.556674 4720 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b61235f4-5b35-4bb9-9787-e7fc0075d0c9-ring-data-devices\") on node \"crc\" DevicePath \"\"" Oct 13 17:39:01 crc kubenswrapper[4720]: I1013 17:39:01.556687 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b61235f4-5b35-4bb9-9787-e7fc0075d0c9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 17:39:01 crc kubenswrapper[4720]: I1013 17:39:01.556696 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b61235f4-5b35-4bb9-9787-e7fc0075d0c9-scripts\") on node \"crc\" DevicePath \"\"" Oct 13 17:39:01 crc kubenswrapper[4720]: I1013 17:39:01.556706 4720 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b61235f4-5b35-4bb9-9787-e7fc0075d0c9-swiftconf\") on node \"crc\" DevicePath \"\"" Oct 13 17:39:01 crc kubenswrapper[4720]: I1013 17:39:01.556716 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fxxwh\" (UniqueName: \"kubernetes.io/projected/b61235f4-5b35-4bb9-9787-e7fc0075d0c9-kube-api-access-fxxwh\") on node \"crc\" DevicePath \"\"" Oct 13 17:39:01 crc kubenswrapper[4720]: I1013 17:39:01.947055 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-9s2xb"] Oct 13 17:39:01 crc kubenswrapper[4720]: W1013 17:39:01.960635 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3f752de1_5826_4009_a77c_b9186d9811ea.slice/crio-ccea12143b56e4901b23adabcec64ca3ca3a81f821fea3e408472fa186189689 WatchSource:0}: Error finding container ccea12143b56e4901b23adabcec64ca3ca3a81f821fea3e408472fa186189689: Status 404 returned error can't find the container with id ccea12143b56e4901b23adabcec64ca3ca3a81f821fea3e408472fa186189689 Oct 13 17:39:02 crc kubenswrapper[4720]: I1013 17:39:02.279054 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-mxpxk" event={"ID":"2442614f-edaa-4e64-9ed1-fc0520a37cfd","Type":"ContainerStarted","Data":"46425dac3912c3b90081ca7451555dd016ce966bc5170559d8329870a07c33d5"} Oct 13 17:39:02 crc kubenswrapper[4720]: I1013 17:39:02.280399 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-9s2xb" event={"ID":"3f752de1-5826-4009-a77c-b9186d9811ea","Type":"ContainerStarted","Data":"ccea12143b56e4901b23adabcec64ca3ca3a81f821fea3e408472fa186189689"} Oct 13 17:39:02 crc kubenswrapper[4720]: I1013 17:39:02.280451 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-mxpxk" Oct 13 17:39:02 crc kubenswrapper[4720]: I1013 17:39:02.281648 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-phh7s" Oct 13 17:39:02 crc kubenswrapper[4720]: I1013 17:39:02.321110 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-mxpxk" podStartSLOduration=3.321090043 podStartE2EDuration="3.321090043s" podCreationTimestamp="2025-10-13 17:38:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:39:02.317284965 +0000 UTC m=+887.774535157" watchObservedRunningTime="2025-10-13 17:39:02.321090043 +0000 UTC m=+887.778340185" Oct 13 17:39:02 crc kubenswrapper[4720]: I1013 17:39:02.371026 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2ffe28a4-ed4a-44c6-b982-501575dd907d-etc-swift\") pod \"swift-storage-0\" (UID: \"2ffe28a4-ed4a-44c6-b982-501575dd907d\") " pod="openstack/swift-storage-0" Oct 13 17:39:02 crc kubenswrapper[4720]: E1013 17:39:02.371581 4720 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Oct 13 17:39:02 crc kubenswrapper[4720]: E1013 17:39:02.371605 4720 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Oct 13 17:39:02 crc kubenswrapper[4720]: E1013 17:39:02.371649 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2ffe28a4-ed4a-44c6-b982-501575dd907d-etc-swift podName:2ffe28a4-ed4a-44c6-b982-501575dd907d nodeName:}" failed. No retries permitted until 2025-10-13 17:39:04.371633563 +0000 UTC m=+889.828883705 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/2ffe28a4-ed4a-44c6-b982-501575dd907d-etc-swift") pod "swift-storage-0" (UID: "2ffe28a4-ed4a-44c6-b982-501575dd907d") : configmap "swift-ring-files" not found Oct 13 17:39:02 crc kubenswrapper[4720]: I1013 17:39:02.373802 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-phh7s"] Oct 13 17:39:02 crc kubenswrapper[4720]: I1013 17:39:02.386331 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-phh7s"] Oct 13 17:39:03 crc kubenswrapper[4720]: I1013 17:39:03.179506 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b61235f4-5b35-4bb9-9787-e7fc0075d0c9" path="/var/lib/kubelet/pods/b61235f4-5b35-4bb9-9787-e7fc0075d0c9/volumes" Oct 13 17:39:03 crc kubenswrapper[4720]: I1013 17:39:03.180668 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9fb0805-93ca-4cd8-9ab3-108b2a4e94e9" path="/var/lib/kubelet/pods/e9fb0805-93ca-4cd8-9ab3-108b2a4e94e9/volumes" Oct 13 17:39:04 crc kubenswrapper[4720]: I1013 17:39:04.427582 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2ffe28a4-ed4a-44c6-b982-501575dd907d-etc-swift\") pod \"swift-storage-0\" (UID: \"2ffe28a4-ed4a-44c6-b982-501575dd907d\") " pod="openstack/swift-storage-0" Oct 13 17:39:04 crc kubenswrapper[4720]: E1013 17:39:04.427789 4720 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Oct 13 17:39:04 crc kubenswrapper[4720]: E1013 17:39:04.427823 4720 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap 
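The first failed MountVolume.SetUp for etc-swift above was scheduled with durationBeforeRetry 2s; the entries that follow push the retry out to 4s and then 8s, i.e. the kubelet doubles the delay after each failed volume operation. A minimal Go sketch of that doubling pattern, assuming the 2s starting point from the log and an arbitrary cap (the log never runs long enough to show kubelet's real maximum):

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// retryWithBackoff retries op, doubling the delay after every failure,
// mirroring the 2s -> 4s -> 8s progression in the kubelet entries above.
// The initial delay matches the log; the cap is an assumed placeholder.
func retryWithBackoff(op func() error, initial, maxDelay time.Duration) error {
	delay := initial
	for {
		err := op()
		if err == nil {
			return nil
		}
		if delay > maxDelay {
			return fmt.Errorf("giving up: %w", err)
		}
		fmt.Printf("failed (%v); no retries permitted for %s\n", err, delay)
		time.Sleep(delay)
		delay *= 2
	}
}

func main() {
	attempts := 0
	// Simulate a volume setup that fails until a dependency appears.
	err := retryWithBackoff(func() error {
		attempts++
		if attempts < 4 {
			return errors.New(`configmap "swift-ring-files" not found`)
		}
		return nil
	}, 2*time.Second, 2*time.Minute)
	fmt.Printf("result after %d attempts: %v\n", attempts, err)
}
```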
"swift-ring-files" not found Oct 13 17:39:04 crc kubenswrapper[4720]: E1013 17:39:04.427884 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2ffe28a4-ed4a-44c6-b982-501575dd907d-etc-swift podName:2ffe28a4-ed4a-44c6-b982-501575dd907d nodeName:}" failed. No retries permitted until 2025-10-13 17:39:08.427867269 +0000 UTC m=+893.885117401 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/2ffe28a4-ed4a-44c6-b982-501575dd907d-etc-swift") pod "swift-storage-0" (UID: "2ffe28a4-ed4a-44c6-b982-501575dd907d") : configmap "swift-ring-files" not found Oct 13 17:39:06 crc kubenswrapper[4720]: I1013 17:39:06.324169 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-9s2xb" event={"ID":"3f752de1-5826-4009-a77c-b9186d9811ea","Type":"ContainerStarted","Data":"d7440d74a79f711158d023534bb6e9d6fcfac05375f200658bf5af745ec39146"} Oct 13 17:39:06 crc kubenswrapper[4720]: I1013 17:39:06.352251 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-9s2xb" podStartSLOduration=2.145251958 podStartE2EDuration="5.35222287s" podCreationTimestamp="2025-10-13 17:39:01 +0000 UTC" firstStartedPulling="2025-10-13 17:39:01.963289958 +0000 UTC m=+887.420540090" lastFinishedPulling="2025-10-13 17:39:05.17026087 +0000 UTC m=+890.627511002" observedRunningTime="2025-10-13 17:39:06.34911784 +0000 UTC m=+891.806368022" watchObservedRunningTime="2025-10-13 17:39:06.35222287 +0000 UTC m=+891.809473032" Oct 13 17:39:07 crc kubenswrapper[4720]: I1013 17:39:07.009780 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-6rql4"] Oct 13 17:39:07 crc kubenswrapper[4720]: I1013 17:39:07.011508 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-6rql4" Oct 13 17:39:07 crc kubenswrapper[4720]: I1013 17:39:07.022218 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-6rql4"] Oct 13 17:39:07 crc kubenswrapper[4720]: I1013 17:39:07.187421 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dwzv\" (UniqueName: \"kubernetes.io/projected/828a8af5-455b-48ff-ab12-433c76df235c-kube-api-access-5dwzv\") pod \"keystone-db-create-6rql4\" (UID: \"828a8af5-455b-48ff-ab12-433c76df235c\") " pod="openstack/keystone-db-create-6rql4" Oct 13 17:39:07 crc kubenswrapper[4720]: I1013 17:39:07.205170 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-kd8vn"] Oct 13 17:39:07 crc kubenswrapper[4720]: I1013 17:39:07.206210 4720 util.go:30] "No sandbox for pod can be found. 
Oct 13 17:39:07 crc kubenswrapper[4720]: I1013 17:39:07.213825 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-kd8vn"]
Oct 13 17:39:07 crc kubenswrapper[4720]: I1013 17:39:07.288950 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmlfk\" (UniqueName: \"kubernetes.io/projected/e022bab5-923a-4ce0-9027-d3dbffd6aa51-kube-api-access-nmlfk\") pod \"placement-db-create-kd8vn\" (UID: \"e022bab5-923a-4ce0-9027-d3dbffd6aa51\") " pod="openstack/placement-db-create-kd8vn"
Oct 13 17:39:07 crc kubenswrapper[4720]: I1013 17:39:07.289131 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dwzv\" (UniqueName: \"kubernetes.io/projected/828a8af5-455b-48ff-ab12-433c76df235c-kube-api-access-5dwzv\") pod \"keystone-db-create-6rql4\" (UID: \"828a8af5-455b-48ff-ab12-433c76df235c\") " pod="openstack/keystone-db-create-6rql4"
Oct 13 17:39:07 crc kubenswrapper[4720]: I1013 17:39:07.314617 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dwzv\" (UniqueName: \"kubernetes.io/projected/828a8af5-455b-48ff-ab12-433c76df235c-kube-api-access-5dwzv\") pod \"keystone-db-create-6rql4\" (UID: \"828a8af5-455b-48ff-ab12-433c76df235c\") " pod="openstack/keystone-db-create-6rql4"
Oct 13 17:39:07 crc kubenswrapper[4720]: I1013 17:39:07.381882 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-6rql4"
Oct 13 17:39:07 crc kubenswrapper[4720]: I1013 17:39:07.390723 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmlfk\" (UniqueName: \"kubernetes.io/projected/e022bab5-923a-4ce0-9027-d3dbffd6aa51-kube-api-access-nmlfk\") pod \"placement-db-create-kd8vn\" (UID: \"e022bab5-923a-4ce0-9027-d3dbffd6aa51\") " pod="openstack/placement-db-create-kd8vn"
Oct 13 17:39:07 crc kubenswrapper[4720]: I1013 17:39:07.427051 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmlfk\" (UniqueName: \"kubernetes.io/projected/e022bab5-923a-4ce0-9027-d3dbffd6aa51-kube-api-access-nmlfk\") pod \"placement-db-create-kd8vn\" (UID: \"e022bab5-923a-4ce0-9027-d3dbffd6aa51\") " pod="openstack/placement-db-create-kd8vn"
Oct 13 17:39:07 crc kubenswrapper[4720]: I1013 17:39:07.522592 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-kd8vn"
Oct 13 17:39:07 crc kubenswrapper[4720]: I1013 17:39:07.542847 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-tp2xb"]
Oct 13 17:39:07 crc kubenswrapper[4720]: I1013 17:39:07.543925 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-tp2xb"
Oct 13 17:39:07 crc kubenswrapper[4720]: I1013 17:39:07.558888 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-tp2xb"]
Oct 13 17:39:07 crc kubenswrapper[4720]: I1013 17:39:07.651490 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-6rql4"]
Oct 13 17:39:07 crc kubenswrapper[4720]: W1013 17:39:07.663888 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod828a8af5_455b_48ff_ab12_433c76df235c.slice/crio-6ce30b42f08bd98c41e87bc5d312a4c52a702419be729ae5bc26dfe7cee2ed5f WatchSource:0}: Error finding container 6ce30b42f08bd98c41e87bc5d312a4c52a702419be729ae5bc26dfe7cee2ed5f: Status 404 returned error can't find the container with id 6ce30b42f08bd98c41e87bc5d312a4c52a702419be729ae5bc26dfe7cee2ed5f
Oct 13 17:39:07 crc kubenswrapper[4720]: I1013 17:39:07.695447 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdt5b\" (UniqueName: \"kubernetes.io/projected/a331f8b0-56f2-48ce-a85c-a1ef1c57d7dc-kube-api-access-rdt5b\") pod \"glance-db-create-tp2xb\" (UID: \"a331f8b0-56f2-48ce-a85c-a1ef1c57d7dc\") " pod="openstack/glance-db-create-tp2xb"
Oct 13 17:39:07 crc kubenswrapper[4720]: I1013 17:39:07.797160 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdt5b\" (UniqueName: \"kubernetes.io/projected/a331f8b0-56f2-48ce-a85c-a1ef1c57d7dc-kube-api-access-rdt5b\") pod \"glance-db-create-tp2xb\" (UID: \"a331f8b0-56f2-48ce-a85c-a1ef1c57d7dc\") " pod="openstack/glance-db-create-tp2xb"
Oct 13 17:39:07 crc kubenswrapper[4720]: I1013 17:39:07.831337 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdt5b\" (UniqueName: \"kubernetes.io/projected/a331f8b0-56f2-48ce-a85c-a1ef1c57d7dc-kube-api-access-rdt5b\") pod \"glance-db-create-tp2xb\" (UID: \"a331f8b0-56f2-48ce-a85c-a1ef1c57d7dc\") " pod="openstack/glance-db-create-tp2xb"
Oct 13 17:39:07 crc kubenswrapper[4720]: I1013 17:39:07.936562 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-tp2xb"
Oct 13 17:39:07 crc kubenswrapper[4720]: I1013 17:39:07.949244 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-kd8vn"]
Oct 13 17:39:07 crc kubenswrapper[4720]: E1013 17:39:07.964095 4720 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod828a8af5_455b_48ff_ab12_433c76df235c.slice/crio-conmon-6af41f4743bec3c706117e16f192c7dddac9166aebc88ef11636a84354d0a274.scope\": RecentStats: unable to find data in memory cache]"
Oct 13 17:39:07 crc kubenswrapper[4720]: W1013 17:39:07.974275 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode022bab5_923a_4ce0_9027_d3dbffd6aa51.slice/crio-b9605d85a8e3e6c892b27b28626023adaab67c1e214add3677d136b2e245fb49 WatchSource:0}: Error finding container b9605d85a8e3e6c892b27b28626023adaab67c1e214add3677d136b2e245fb49: Status 404 returned error can't find the container with id b9605d85a8e3e6c892b27b28626023adaab67c1e214add3677d136b2e245fb49
Oct 13 17:39:08 crc kubenswrapper[4720]: I1013 17:39:08.198264 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-tp2xb"]
Oct 13 17:39:08 crc kubenswrapper[4720]: W1013 17:39:08.200622 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda331f8b0_56f2_48ce_a85c_a1ef1c57d7dc.slice/crio-3769e9989a7095fa72c331f550d492a0b3e54a2969483f6068948ed99b09bd86 WatchSource:0}: Error finding container 3769e9989a7095fa72c331f550d492a0b3e54a2969483f6068948ed99b09bd86: Status 404 returned error can't find the container with id 3769e9989a7095fa72c331f550d492a0b3e54a2969483f6068948ed99b09bd86
Oct 13 17:39:08 crc kubenswrapper[4720]: I1013 17:39:08.347698 4720 generic.go:334] "Generic (PLEG): container finished" podID="e022bab5-923a-4ce0-9027-d3dbffd6aa51" containerID="f8de1c1c2a52d5e983c06301e7add77830473448c98f55a1fef2c9c3f5f89bcb" exitCode=0
Oct 13 17:39:08 crc kubenswrapper[4720]: I1013 17:39:08.347792 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-kd8vn" event={"ID":"e022bab5-923a-4ce0-9027-d3dbffd6aa51","Type":"ContainerDied","Data":"f8de1c1c2a52d5e983c06301e7add77830473448c98f55a1fef2c9c3f5f89bcb"}
Oct 13 17:39:08 crc kubenswrapper[4720]: I1013 17:39:08.348093 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-kd8vn" event={"ID":"e022bab5-923a-4ce0-9027-d3dbffd6aa51","Type":"ContainerStarted","Data":"b9605d85a8e3e6c892b27b28626023adaab67c1e214add3677d136b2e245fb49"}
Oct 13 17:39:08 crc kubenswrapper[4720]: I1013 17:39:08.349274 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-tp2xb" event={"ID":"a331f8b0-56f2-48ce-a85c-a1ef1c57d7dc","Type":"ContainerStarted","Data":"3769e9989a7095fa72c331f550d492a0b3e54a2969483f6068948ed99b09bd86"}
Oct 13 17:39:08 crc kubenswrapper[4720]: I1013 17:39:08.351415 4720 generic.go:334] "Generic (PLEG): container finished" podID="828a8af5-455b-48ff-ab12-433c76df235c" containerID="6af41f4743bec3c706117e16f192c7dddac9166aebc88ef11636a84354d0a274" exitCode=0
Oct 13 17:39:08 crc kubenswrapper[4720]: I1013 17:39:08.351443 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-6rql4" event={"ID":"828a8af5-455b-48ff-ab12-433c76df235c","Type":"ContainerDied","Data":"6af41f4743bec3c706117e16f192c7dddac9166aebc88ef11636a84354d0a274"}
Oct 13 17:39:08 crc kubenswrapper[4720]: I1013 17:39:08.351458 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-6rql4" event={"ID":"828a8af5-455b-48ff-ab12-433c76df235c","Type":"ContainerStarted","Data":"6ce30b42f08bd98c41e87bc5d312a4c52a702419be729ae5bc26dfe7cee2ed5f"}
Oct 13 17:39:08 crc kubenswrapper[4720]: I1013 17:39:08.524185 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2ffe28a4-ed4a-44c6-b982-501575dd907d-etc-swift\") pod \"swift-storage-0\" (UID: \"2ffe28a4-ed4a-44c6-b982-501575dd907d\") " pod="openstack/swift-storage-0"
Oct 13 17:39:08 crc kubenswrapper[4720]: E1013 17:39:08.524429 4720 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Oct 13 17:39:08 crc kubenswrapper[4720]: E1013 17:39:08.525666 4720 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Oct 13 17:39:08 crc kubenswrapper[4720]: E1013 17:39:08.525746 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2ffe28a4-ed4a-44c6-b982-501575dd907d-etc-swift podName:2ffe28a4-ed4a-44c6-b982-501575dd907d nodeName:}" failed. No retries permitted until 2025-10-13 17:39:16.525721162 +0000 UTC m=+901.982971314 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/2ffe28a4-ed4a-44c6-b982-501575dd907d-etc-swift") pod "swift-storage-0" (UID: "2ffe28a4-ed4a-44c6-b982-501575dd907d") : configmap "swift-ring-files" not found
Oct 13 17:39:09 crc kubenswrapper[4720]: I1013 17:39:09.361662 4720 generic.go:334] "Generic (PLEG): container finished" podID="a331f8b0-56f2-48ce-a85c-a1ef1c57d7dc" containerID="b9a3b9381496578cc7754c68200a759343b2ca01c293742b49ff69901105eb48" exitCode=0
Oct 13 17:39:09 crc kubenswrapper[4720]: I1013 17:39:09.362224 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-tp2xb" event={"ID":"a331f8b0-56f2-48ce-a85c-a1ef1c57d7dc","Type":"ContainerDied","Data":"b9a3b9381496578cc7754c68200a759343b2ca01c293742b49ff69901105eb48"}
Oct 13 17:39:09 crc kubenswrapper[4720]: I1013 17:39:09.786708 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-6rql4"
Oct 13 17:39:09 crc kubenswrapper[4720]: I1013 17:39:09.790662 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-mxpxk"
Oct 13 17:39:09 crc kubenswrapper[4720]: I1013 17:39:09.890579 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-9jdhm"]
Oct 13 17:39:09 crc kubenswrapper[4720]: I1013 17:39:09.890798 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-9jdhm" podUID="7349a4fa-fffe-44e9-aebb-ceb486b7fb45" containerName="dnsmasq-dns" containerID="cri-o://be6edba14d70cbda2c79783058bd5c0f065dd429dce247c9be79501b1c5805a4" gracePeriod=10
Oct 13 17:39:09 crc kubenswrapper[4720]: I1013 17:39:09.903926 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-kd8vn"
Oct 13 17:39:09 crc kubenswrapper[4720]: I1013 17:39:09.949432 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5dwzv\" (UniqueName: \"kubernetes.io/projected/828a8af5-455b-48ff-ab12-433c76df235c-kube-api-access-5dwzv\") pod \"828a8af5-455b-48ff-ab12-433c76df235c\" (UID: \"828a8af5-455b-48ff-ab12-433c76df235c\") "
Oct 13 17:39:09 crc kubenswrapper[4720]: I1013 17:39:09.959354 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/828a8af5-455b-48ff-ab12-433c76df235c-kube-api-access-5dwzv" (OuterVolumeSpecName: "kube-api-access-5dwzv") pod "828a8af5-455b-48ff-ab12-433c76df235c" (UID: "828a8af5-455b-48ff-ab12-433c76df235c"). InnerVolumeSpecName "kube-api-access-5dwzv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 13 17:39:10 crc kubenswrapper[4720]: I1013 17:39:10.051502 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmlfk\" (UniqueName: \"kubernetes.io/projected/e022bab5-923a-4ce0-9027-d3dbffd6aa51-kube-api-access-nmlfk\") pod \"e022bab5-923a-4ce0-9027-d3dbffd6aa51\" (UID: \"e022bab5-923a-4ce0-9027-d3dbffd6aa51\") "
Oct 13 17:39:10 crc kubenswrapper[4720]: I1013 17:39:10.052058 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5dwzv\" (UniqueName: \"kubernetes.io/projected/828a8af5-455b-48ff-ab12-433c76df235c-kube-api-access-5dwzv\") on node \"crc\" DevicePath \"\""
Oct 13 17:39:10 crc kubenswrapper[4720]: I1013 17:39:10.054408 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e022bab5-923a-4ce0-9027-d3dbffd6aa51-kube-api-access-nmlfk" (OuterVolumeSpecName: "kube-api-access-nmlfk") pod "e022bab5-923a-4ce0-9027-d3dbffd6aa51" (UID: "e022bab5-923a-4ce0-9027-d3dbffd6aa51"). InnerVolumeSpecName "kube-api-access-nmlfk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 13 17:39:10 crc kubenswrapper[4720]: I1013 17:39:10.153497 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nmlfk\" (UniqueName: \"kubernetes.io/projected/e022bab5-923a-4ce0-9027-d3dbffd6aa51-kube-api-access-nmlfk\") on node \"crc\" DevicePath \"\""
Oct 13 17:39:10 crc kubenswrapper[4720]: I1013 17:39:10.282345 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-9jdhm"
Oct 13 17:39:10 crc kubenswrapper[4720]: I1013 17:39:10.355791 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7349a4fa-fffe-44e9-aebb-ceb486b7fb45-dns-svc\") pod \"7349a4fa-fffe-44e9-aebb-ceb486b7fb45\" (UID: \"7349a4fa-fffe-44e9-aebb-ceb486b7fb45\") "
Oct 13 17:39:10 crc kubenswrapper[4720]: I1013 17:39:10.356149 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-54s2t\" (UniqueName: \"kubernetes.io/projected/7349a4fa-fffe-44e9-aebb-ceb486b7fb45-kube-api-access-54s2t\") pod \"7349a4fa-fffe-44e9-aebb-ceb486b7fb45\" (UID: \"7349a4fa-fffe-44e9-aebb-ceb486b7fb45\") "
Oct 13 17:39:10 crc kubenswrapper[4720]: I1013 17:39:10.356246 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7349a4fa-fffe-44e9-aebb-ceb486b7fb45-config\") pod \"7349a4fa-fffe-44e9-aebb-ceb486b7fb45\" (UID: \"7349a4fa-fffe-44e9-aebb-ceb486b7fb45\") "
Oct 13 17:39:10 crc kubenswrapper[4720]: I1013 17:39:10.360337 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7349a4fa-fffe-44e9-aebb-ceb486b7fb45-kube-api-access-54s2t" (OuterVolumeSpecName: "kube-api-access-54s2t") pod "7349a4fa-fffe-44e9-aebb-ceb486b7fb45" (UID: "7349a4fa-fffe-44e9-aebb-ceb486b7fb45"). InnerVolumeSpecName "kube-api-access-54s2t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 13 17:39:10 crc kubenswrapper[4720]: I1013 17:39:10.371444 4720 generic.go:334] "Generic (PLEG): container finished" podID="7349a4fa-fffe-44e9-aebb-ceb486b7fb45" containerID="be6edba14d70cbda2c79783058bd5c0f065dd429dce247c9be79501b1c5805a4" exitCode=0
Oct 13 17:39:10 crc kubenswrapper[4720]: I1013 17:39:10.371577 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-9jdhm"
Oct 13 17:39:10 crc kubenswrapper[4720]: I1013 17:39:10.371691 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-9jdhm" event={"ID":"7349a4fa-fffe-44e9-aebb-ceb486b7fb45","Type":"ContainerDied","Data":"be6edba14d70cbda2c79783058bd5c0f065dd429dce247c9be79501b1c5805a4"}
Oct 13 17:39:10 crc kubenswrapper[4720]: I1013 17:39:10.371832 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-9jdhm" event={"ID":"7349a4fa-fffe-44e9-aebb-ceb486b7fb45","Type":"ContainerDied","Data":"721a5597baefb6d39cc3f7c24a3c6286dfcc2105dec9e3498d14e7118d0d582c"}
Oct 13 17:39:10 crc kubenswrapper[4720]: I1013 17:39:10.371901 4720 scope.go:117] "RemoveContainer" containerID="be6edba14d70cbda2c79783058bd5c0f065dd429dce247c9be79501b1c5805a4"
Oct 13 17:39:10 crc kubenswrapper[4720]: I1013 17:39:10.372906 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-6rql4" event={"ID":"828a8af5-455b-48ff-ab12-433c76df235c","Type":"ContainerDied","Data":"6ce30b42f08bd98c41e87bc5d312a4c52a702419be729ae5bc26dfe7cee2ed5f"}
Oct 13 17:39:10 crc kubenswrapper[4720]: I1013 17:39:10.372938 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ce30b42f08bd98c41e87bc5d312a4c52a702419be729ae5bc26dfe7cee2ed5f"
Oct 13 17:39:10 crc kubenswrapper[4720]: I1013 17:39:10.372985 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-6rql4"
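The dnsmasq-dns container above was killed with gracePeriod=10, and the ContainerDied events that follow are its exit being observed. The usual contract behind such a kill is SIGTERM first, with escalation to SIGKILL if the process outlives the grace period; a generic Go sketch of that pattern (the standard termination contract, not kubelet's or CRI-O's actual implementation):

```go
package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

// stopWithGrace sends SIGTERM to cmd's process, then SIGKILL if it is
// still running after the grace period, mirroring gracePeriod=10 above.
func stopWithGrace(cmd *exec.Cmd, grace time.Duration) error {
	if err := cmd.Process.Signal(syscall.SIGTERM); err != nil {
		return err
	}
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	select {
	case err := <-done:
		return err // exited within the grace period
	case <-time.After(grace):
		_ = cmd.Process.Kill() // escalate to SIGKILL
		return <-done
	}
}

func main() {
	cmd := exec.Command("sleep", "60")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	err := stopWithGrace(cmd, 10*time.Second)
	fmt.Println("process stopped:", err)
}
```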
Oct 13 17:39:10 crc kubenswrapper[4720]: I1013 17:39:10.375279 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-kd8vn" event={"ID":"e022bab5-923a-4ce0-9027-d3dbffd6aa51","Type":"ContainerDied","Data":"b9605d85a8e3e6c892b27b28626023adaab67c1e214add3677d136b2e245fb49"}
Oct 13 17:39:10 crc kubenswrapper[4720]: I1013 17:39:10.375306 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9605d85a8e3e6c892b27b28626023adaab67c1e214add3677d136b2e245fb49"
Oct 13 17:39:10 crc kubenswrapper[4720]: I1013 17:39:10.375293 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-kd8vn"
Oct 13 17:39:10 crc kubenswrapper[4720]: I1013 17:39:10.389946 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7349a4fa-fffe-44e9-aebb-ceb486b7fb45-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7349a4fa-fffe-44e9-aebb-ceb486b7fb45" (UID: "7349a4fa-fffe-44e9-aebb-ceb486b7fb45"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 13 17:39:10 crc kubenswrapper[4720]: I1013 17:39:10.394770 4720 scope.go:117] "RemoveContainer" containerID="ccc38b4ea21424ac83eec59d0f5011b23087266abeba694e3a687280a24caf1f"
Oct 13 17:39:10 crc kubenswrapper[4720]: I1013 17:39:10.396493 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7349a4fa-fffe-44e9-aebb-ceb486b7fb45-config" (OuterVolumeSpecName: "config") pod "7349a4fa-fffe-44e9-aebb-ceb486b7fb45" (UID: "7349a4fa-fffe-44e9-aebb-ceb486b7fb45"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 13 17:39:10 crc kubenswrapper[4720]: I1013 17:39:10.421106 4720 scope.go:117] "RemoveContainer" containerID="be6edba14d70cbda2c79783058bd5c0f065dd429dce247c9be79501b1c5805a4"
Oct 13 17:39:10 crc kubenswrapper[4720]: E1013 17:39:10.421578 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be6edba14d70cbda2c79783058bd5c0f065dd429dce247c9be79501b1c5805a4\": container with ID starting with be6edba14d70cbda2c79783058bd5c0f065dd429dce247c9be79501b1c5805a4 not found: ID does not exist" containerID="be6edba14d70cbda2c79783058bd5c0f065dd429dce247c9be79501b1c5805a4"
Oct 13 17:39:10 crc kubenswrapper[4720]: I1013 17:39:10.421613 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be6edba14d70cbda2c79783058bd5c0f065dd429dce247c9be79501b1c5805a4"} err="failed to get container status \"be6edba14d70cbda2c79783058bd5c0f065dd429dce247c9be79501b1c5805a4\": rpc error: code = NotFound desc = could not find container \"be6edba14d70cbda2c79783058bd5c0f065dd429dce247c9be79501b1c5805a4\": container with ID starting with be6edba14d70cbda2c79783058bd5c0f065dd429dce247c9be79501b1c5805a4 not found: ID does not exist"
Oct 13 17:39:10 crc kubenswrapper[4720]: I1013 17:39:10.421635 4720 scope.go:117] "RemoveContainer" containerID="ccc38b4ea21424ac83eec59d0f5011b23087266abeba694e3a687280a24caf1f"
Oct 13 17:39:10 crc kubenswrapper[4720]: E1013 17:39:10.422056 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ccc38b4ea21424ac83eec59d0f5011b23087266abeba694e3a687280a24caf1f\": container with ID starting with ccc38b4ea21424ac83eec59d0f5011b23087266abeba694e3a687280a24caf1f not found: ID does not exist" containerID="ccc38b4ea21424ac83eec59d0f5011b23087266abeba694e3a687280a24caf1f"
Oct 13 17:39:10 crc kubenswrapper[4720]: I1013 17:39:10.422076 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ccc38b4ea21424ac83eec59d0f5011b23087266abeba694e3a687280a24caf1f"} err="failed to get container status \"ccc38b4ea21424ac83eec59d0f5011b23087266abeba694e3a687280a24caf1f\": rpc error: code = NotFound desc = could not find container \"ccc38b4ea21424ac83eec59d0f5011b23087266abeba694e3a687280a24caf1f\": container with ID starting with ccc38b4ea21424ac83eec59d0f5011b23087266abeba694e3a687280a24caf1f not found: ID does not exist"
Oct 13 17:39:10 crc kubenswrapper[4720]: I1013 17:39:10.458237 4720 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7349a4fa-fffe-44e9-aebb-ceb486b7fb45-dns-svc\") on node \"crc\" DevicePath \"\""
Oct 13 17:39:10 crc kubenswrapper[4720]: I1013 17:39:10.458271 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-54s2t\" (UniqueName: \"kubernetes.io/projected/7349a4fa-fffe-44e9-aebb-ceb486b7fb45-kube-api-access-54s2t\") on node \"crc\" DevicePath \"\""
Oct 13 17:39:10 crc kubenswrapper[4720]: I1013 17:39:10.458283 4720 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7349a4fa-fffe-44e9-aebb-ceb486b7fb45-config\") on node \"crc\" DevicePath \"\""
Oct 13 17:39:10 crc kubenswrapper[4720]: I1013 17:39:10.608041 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-tp2xb"
Oct 13 17:39:10 crc kubenswrapper[4720]: I1013 17:39:10.701786 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-9jdhm"]
Oct 13 17:39:10 crc kubenswrapper[4720]: I1013 17:39:10.707508 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-9jdhm"]
Oct 13 17:39:10 crc kubenswrapper[4720]: I1013 17:39:10.762810 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rdt5b\" (UniqueName: \"kubernetes.io/projected/a331f8b0-56f2-48ce-a85c-a1ef1c57d7dc-kube-api-access-rdt5b\") pod \"a331f8b0-56f2-48ce-a85c-a1ef1c57d7dc\" (UID: \"a331f8b0-56f2-48ce-a85c-a1ef1c57d7dc\") "
Oct 13 17:39:10 crc kubenswrapper[4720]: I1013 17:39:10.766893 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a331f8b0-56f2-48ce-a85c-a1ef1c57d7dc-kube-api-access-rdt5b" (OuterVolumeSpecName: "kube-api-access-rdt5b") pod "a331f8b0-56f2-48ce-a85c-a1ef1c57d7dc" (UID: "a331f8b0-56f2-48ce-a85c-a1ef1c57d7dc"). InnerVolumeSpecName "kube-api-access-rdt5b". PluginName "kubernetes.io/projected", VolumeGidValue ""
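The RemoveContainer/DeleteContainer exchanges above end in rpc NotFound because the containers had already been purged, and the kubelet logs that and moves on: for an idempotent teardown, "already gone" is as good as "deleted". A generic gRPC-status sketch of that check; the helper and the stand-in runtime below are hypothetical, not kubelet's actual CRI client code:

```go
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeContainer wraps a runtime delete call and treats NotFound as
// success: a container that no longer exists needs no further cleanup,
// which is the outcome the log records for be6edba... and ccc38b....
func removeContainer(remove func(id string) error, id string) error {
	err := remove(id)
	if status.Code(err) == codes.NotFound {
		fmt.Printf("container %s already gone; treating removal as done\n", id)
		return nil
	}
	return err // nil on success, or a real error worth surfacing
}

func main() {
	// A stand-in runtime that reports every container as missing.
	fake := func(id string) error {
		return status.Error(codes.NotFound, "could not find container "+id)
	}
	if err := removeContainer(fake, "be6edba14d70"); err != nil {
		panic(err)
	}
}
```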
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:39:10 crc kubenswrapper[4720]: I1013 17:39:10.864450 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rdt5b\" (UniqueName: \"kubernetes.io/projected/a331f8b0-56f2-48ce-a85c-a1ef1c57d7dc-kube-api-access-rdt5b\") on node \"crc\" DevicePath \"\"" Oct 13 17:39:11 crc kubenswrapper[4720]: I1013 17:39:11.069549 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Oct 13 17:39:11 crc kubenswrapper[4720]: I1013 17:39:11.175343 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7349a4fa-fffe-44e9-aebb-ceb486b7fb45" path="/var/lib/kubelet/pods/7349a4fa-fffe-44e9-aebb-ceb486b7fb45/volumes" Oct 13 17:39:11 crc kubenswrapper[4720]: I1013 17:39:11.383229 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-tp2xb" event={"ID":"a331f8b0-56f2-48ce-a85c-a1ef1c57d7dc","Type":"ContainerDied","Data":"3769e9989a7095fa72c331f550d492a0b3e54a2969483f6068948ed99b09bd86"} Oct 13 17:39:11 crc kubenswrapper[4720]: I1013 17:39:11.383263 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3769e9989a7095fa72c331f550d492a0b3e54a2969483f6068948ed99b09bd86" Oct 13 17:39:11 crc kubenswrapper[4720]: I1013 17:39:11.383309 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-tp2xb" Oct 13 17:39:12 crc kubenswrapper[4720]: I1013 17:39:12.394820 4720 generic.go:334] "Generic (PLEG): container finished" podID="3f752de1-5826-4009-a77c-b9186d9811ea" containerID="d7440d74a79f711158d023534bb6e9d6fcfac05375f200658bf5af745ec39146" exitCode=0 Oct 13 17:39:12 crc kubenswrapper[4720]: I1013 17:39:12.394863 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-9s2xb" event={"ID":"3f752de1-5826-4009-a77c-b9186d9811ea","Type":"ContainerDied","Data":"d7440d74a79f711158d023534bb6e9d6fcfac05375f200658bf5af745ec39146"} Oct 13 17:39:13 crc kubenswrapper[4720]: I1013 17:39:13.789881 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-9s2xb" Oct 13 17:39:13 crc kubenswrapper[4720]: I1013 17:39:13.911084 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ssjhr\" (UniqueName: \"kubernetes.io/projected/3f752de1-5826-4009-a77c-b9186d9811ea-kube-api-access-ssjhr\") pod \"3f752de1-5826-4009-a77c-b9186d9811ea\" (UID: \"3f752de1-5826-4009-a77c-b9186d9811ea\") " Oct 13 17:39:13 crc kubenswrapper[4720]: I1013 17:39:13.911147 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3f752de1-5826-4009-a77c-b9186d9811ea-scripts\") pod \"3f752de1-5826-4009-a77c-b9186d9811ea\" (UID: \"3f752de1-5826-4009-a77c-b9186d9811ea\") " Oct 13 17:39:13 crc kubenswrapper[4720]: I1013 17:39:13.911182 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/3f752de1-5826-4009-a77c-b9186d9811ea-ring-data-devices\") pod \"3f752de1-5826-4009-a77c-b9186d9811ea\" (UID: \"3f752de1-5826-4009-a77c-b9186d9811ea\") " Oct 13 17:39:13 crc kubenswrapper[4720]: I1013 17:39:13.911255 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/3f752de1-5826-4009-a77c-b9186d9811ea-swiftconf\") pod \"3f752de1-5826-4009-a77c-b9186d9811ea\" (UID: \"3f752de1-5826-4009-a77c-b9186d9811ea\") " Oct 13 17:39:13 crc kubenswrapper[4720]: I1013 17:39:13.911324 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/3f752de1-5826-4009-a77c-b9186d9811ea-dispersionconf\") pod \"3f752de1-5826-4009-a77c-b9186d9811ea\" (UID: \"3f752de1-5826-4009-a77c-b9186d9811ea\") " Oct 13 17:39:13 crc kubenswrapper[4720]: I1013 17:39:13.911356 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f752de1-5826-4009-a77c-b9186d9811ea-combined-ca-bundle\") pod \"3f752de1-5826-4009-a77c-b9186d9811ea\" (UID: \"3f752de1-5826-4009-a77c-b9186d9811ea\") " Oct 13 17:39:13 crc kubenswrapper[4720]: I1013 17:39:13.911388 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/3f752de1-5826-4009-a77c-b9186d9811ea-etc-swift\") pod \"3f752de1-5826-4009-a77c-b9186d9811ea\" (UID: \"3f752de1-5826-4009-a77c-b9186d9811ea\") " Oct 13 17:39:13 crc kubenswrapper[4720]: I1013 17:39:13.912557 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f752de1-5826-4009-a77c-b9186d9811ea-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "3f752de1-5826-4009-a77c-b9186d9811ea" (UID: "3f752de1-5826-4009-a77c-b9186d9811ea"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:39:13 crc kubenswrapper[4720]: I1013 17:39:13.912581 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f752de1-5826-4009-a77c-b9186d9811ea-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "3f752de1-5826-4009-a77c-b9186d9811ea" (UID: "3f752de1-5826-4009-a77c-b9186d9811ea"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 17:39:13 crc kubenswrapper[4720]: I1013 17:39:13.923149 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f752de1-5826-4009-a77c-b9186d9811ea-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "3f752de1-5826-4009-a77c-b9186d9811ea" (UID: "3f752de1-5826-4009-a77c-b9186d9811ea"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:39:13 crc kubenswrapper[4720]: I1013 17:39:13.931320 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f752de1-5826-4009-a77c-b9186d9811ea-kube-api-access-ssjhr" (OuterVolumeSpecName: "kube-api-access-ssjhr") pod "3f752de1-5826-4009-a77c-b9186d9811ea" (UID: "3f752de1-5826-4009-a77c-b9186d9811ea"). InnerVolumeSpecName "kube-api-access-ssjhr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:39:13 crc kubenswrapper[4720]: I1013 17:39:13.934904 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f752de1-5826-4009-a77c-b9186d9811ea-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "3f752de1-5826-4009-a77c-b9186d9811ea" (UID: "3f752de1-5826-4009-a77c-b9186d9811ea"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:39:13 crc kubenswrapper[4720]: I1013 17:39:13.938455 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f752de1-5826-4009-a77c-b9186d9811ea-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3f752de1-5826-4009-a77c-b9186d9811ea" (UID: "3f752de1-5826-4009-a77c-b9186d9811ea"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:39:13 crc kubenswrapper[4720]: I1013 17:39:13.947472 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f752de1-5826-4009-a77c-b9186d9811ea-scripts" (OuterVolumeSpecName: "scripts") pod "3f752de1-5826-4009-a77c-b9186d9811ea" (UID: "3f752de1-5826-4009-a77c-b9186d9811ea"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:39:14 crc kubenswrapper[4720]: I1013 17:39:14.014885 4720 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/3f752de1-5826-4009-a77c-b9186d9811ea-dispersionconf\") on node \"crc\" DevicePath \"\"" Oct 13 17:39:14 crc kubenswrapper[4720]: I1013 17:39:14.015016 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f752de1-5826-4009-a77c-b9186d9811ea-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 17:39:14 crc kubenswrapper[4720]: I1013 17:39:14.015047 4720 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/3f752de1-5826-4009-a77c-b9186d9811ea-etc-swift\") on node \"crc\" DevicePath \"\"" Oct 13 17:39:14 crc kubenswrapper[4720]: I1013 17:39:14.015072 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ssjhr\" (UniqueName: \"kubernetes.io/projected/3f752de1-5826-4009-a77c-b9186d9811ea-kube-api-access-ssjhr\") on node \"crc\" DevicePath \"\"" Oct 13 17:39:14 crc kubenswrapper[4720]: I1013 17:39:14.015152 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3f752de1-5826-4009-a77c-b9186d9811ea-scripts\") on node \"crc\" DevicePath \"\"" Oct 13 17:39:14 crc kubenswrapper[4720]: I1013 17:39:14.015300 4720 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/3f752de1-5826-4009-a77c-b9186d9811ea-ring-data-devices\") on node \"crc\" DevicePath \"\"" Oct 13 17:39:14 crc kubenswrapper[4720]: I1013 17:39:14.015363 4720 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/3f752de1-5826-4009-a77c-b9186d9811ea-swiftconf\") on node \"crc\" DevicePath \"\"" Oct 13 17:39:14 crc kubenswrapper[4720]: I1013 17:39:14.416272 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-9s2xb" event={"ID":"3f752de1-5826-4009-a77c-b9186d9811ea","Type":"ContainerDied","Data":"ccea12143b56e4901b23adabcec64ca3ca3a81f821fea3e408472fa186189689"} Oct 13 17:39:14 crc kubenswrapper[4720]: I1013 17:39:14.416588 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ccea12143b56e4901b23adabcec64ca3ca3a81f821fea3e408472fa186189689" Oct 13 17:39:14 crc kubenswrapper[4720]: I1013 17:39:14.416361 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-9s2xb" Oct 13 17:39:15 crc kubenswrapper[4720]: I1013 17:39:15.213880 4720 patch_prober.go:28] interesting pod/machine-config-daemon-htwnl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 17:39:15 crc kubenswrapper[4720]: I1013 17:39:15.216692 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 17:39:16 crc kubenswrapper[4720]: I1013 17:39:16.560840 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2ffe28a4-ed4a-44c6-b982-501575dd907d-etc-swift\") pod \"swift-storage-0\" (UID: \"2ffe28a4-ed4a-44c6-b982-501575dd907d\") " pod="openstack/swift-storage-0" Oct 13 17:39:16 crc kubenswrapper[4720]: I1013 17:39:16.567279 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2ffe28a4-ed4a-44c6-b982-501575dd907d-etc-swift\") pod \"swift-storage-0\" (UID: \"2ffe28a4-ed4a-44c6-b982-501575dd907d\") " pod="openstack/swift-storage-0" Oct 13 17:39:16 crc kubenswrapper[4720]: I1013 17:39:16.816792 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Oct 13 17:39:17 crc kubenswrapper[4720]: I1013 17:39:17.073308 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-3f72-account-create-gcnc4"] Oct 13 17:39:17 crc kubenswrapper[4720]: E1013 17:39:17.073606 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7349a4fa-fffe-44e9-aebb-ceb486b7fb45" containerName="dnsmasq-dns" Oct 13 17:39:17 crc kubenswrapper[4720]: I1013 17:39:17.073618 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="7349a4fa-fffe-44e9-aebb-ceb486b7fb45" containerName="dnsmasq-dns" Oct 13 17:39:17 crc kubenswrapper[4720]: E1013 17:39:17.073627 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7349a4fa-fffe-44e9-aebb-ceb486b7fb45" containerName="init" Oct 13 17:39:17 crc kubenswrapper[4720]: I1013 17:39:17.073633 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="7349a4fa-fffe-44e9-aebb-ceb486b7fb45" containerName="init" Oct 13 17:39:17 crc kubenswrapper[4720]: E1013 17:39:17.073641 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f752de1-5826-4009-a77c-b9186d9811ea" containerName="swift-ring-rebalance" Oct 13 17:39:17 crc kubenswrapper[4720]: I1013 17:39:17.073648 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f752de1-5826-4009-a77c-b9186d9811ea" containerName="swift-ring-rebalance" Oct 13 17:39:17 crc kubenswrapper[4720]: E1013 17:39:17.073660 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e022bab5-923a-4ce0-9027-d3dbffd6aa51" containerName="mariadb-database-create" Oct 13 17:39:17 crc kubenswrapper[4720]: I1013 17:39:17.073666 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="e022bab5-923a-4ce0-9027-d3dbffd6aa51" containerName="mariadb-database-create" Oct 13 17:39:17 crc kubenswrapper[4720]: E1013 17:39:17.073695 4720 cpu_manager.go:410] "RemoveStaleState: removing 
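The machine-config-daemon liveness failure above is an ordinary HTTP GET against the pod's health endpoint that could not even open a TCP connection. A self-contained sketch of such a probe; the URL comes straight from the log, while the timeout and the success criteria are assumptions, since the probe spec itself is not part of this log:

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// probe performs one HTTP liveness check, the same shape as the
// failed Get against 127.0.0.1:8798/health recorded above.
func probe(url string, timeout time.Duration) error {
	client := &http.Client{Timeout: timeout}
	resp, err := client.Get(url)
	if err != nil {
		return fmt.Errorf("probe failed: %w", err) // e.g. connection refused
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("probe failed: status %s", resp.Status)
	}
	return nil
}

func main() {
	err := probe("http://127.0.0.1:8798/health", 1*time.Second)
	fmt.Println("liveness:", err)
}
```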
container" podUID="828a8af5-455b-48ff-ab12-433c76df235c" containerName="mariadb-database-create" Oct 13 17:39:17 crc kubenswrapper[4720]: I1013 17:39:17.073701 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="828a8af5-455b-48ff-ab12-433c76df235c" containerName="mariadb-database-create" Oct 13 17:39:17 crc kubenswrapper[4720]: E1013 17:39:17.073714 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a331f8b0-56f2-48ce-a85c-a1ef1c57d7dc" containerName="mariadb-database-create" Oct 13 17:39:17 crc kubenswrapper[4720]: I1013 17:39:17.073719 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="a331f8b0-56f2-48ce-a85c-a1ef1c57d7dc" containerName="mariadb-database-create" Oct 13 17:39:17 crc kubenswrapper[4720]: I1013 17:39:17.073866 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="e022bab5-923a-4ce0-9027-d3dbffd6aa51" containerName="mariadb-database-create" Oct 13 17:39:17 crc kubenswrapper[4720]: I1013 17:39:17.073879 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="828a8af5-455b-48ff-ab12-433c76df235c" containerName="mariadb-database-create" Oct 13 17:39:17 crc kubenswrapper[4720]: I1013 17:39:17.073888 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="a331f8b0-56f2-48ce-a85c-a1ef1c57d7dc" containerName="mariadb-database-create" Oct 13 17:39:17 crc kubenswrapper[4720]: I1013 17:39:17.073898 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f752de1-5826-4009-a77c-b9186d9811ea" containerName="swift-ring-rebalance" Oct 13 17:39:17 crc kubenswrapper[4720]: I1013 17:39:17.073925 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="7349a4fa-fffe-44e9-aebb-ceb486b7fb45" containerName="dnsmasq-dns" Oct 13 17:39:17 crc kubenswrapper[4720]: I1013 17:39:17.074414 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-3f72-account-create-gcnc4" Oct 13 17:39:17 crc kubenswrapper[4720]: I1013 17:39:17.076848 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Oct 13 17:39:17 crc kubenswrapper[4720]: I1013 17:39:17.081103 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-3f72-account-create-gcnc4"] Oct 13 17:39:17 crc kubenswrapper[4720]: I1013 17:39:17.184870 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksjv9\" (UniqueName: \"kubernetes.io/projected/360a98a6-bd5c-403b-a307-81dbb7396962-kube-api-access-ksjv9\") pod \"keystone-3f72-account-create-gcnc4\" (UID: \"360a98a6-bd5c-403b-a307-81dbb7396962\") " pod="openstack/keystone-3f72-account-create-gcnc4" Oct 13 17:39:17 crc kubenswrapper[4720]: I1013 17:39:17.273165 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-8fd1-account-create-mx8jn"] Oct 13 17:39:17 crc kubenswrapper[4720]: I1013 17:39:17.274559 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-8fd1-account-create-mx8jn" Oct 13 17:39:17 crc kubenswrapper[4720]: I1013 17:39:17.276850 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Oct 13 17:39:17 crc kubenswrapper[4720]: I1013 17:39:17.283322 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-8fd1-account-create-mx8jn"] Oct 13 17:39:17 crc kubenswrapper[4720]: I1013 17:39:17.286652 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksjv9\" (UniqueName: \"kubernetes.io/projected/360a98a6-bd5c-403b-a307-81dbb7396962-kube-api-access-ksjv9\") pod \"keystone-3f72-account-create-gcnc4\" (UID: \"360a98a6-bd5c-403b-a307-81dbb7396962\") " pod="openstack/keystone-3f72-account-create-gcnc4" Oct 13 17:39:17 crc kubenswrapper[4720]: I1013 17:39:17.327004 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksjv9\" (UniqueName: \"kubernetes.io/projected/360a98a6-bd5c-403b-a307-81dbb7396962-kube-api-access-ksjv9\") pod \"keystone-3f72-account-create-gcnc4\" (UID: \"360a98a6-bd5c-403b-a307-81dbb7396962\") " pod="openstack/keystone-3f72-account-create-gcnc4" Oct 13 17:39:17 crc kubenswrapper[4720]: I1013 17:39:17.388024 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k655q\" (UniqueName: \"kubernetes.io/projected/7ea781a5-9df7-4e60-909b-0e437f4c512b-kube-api-access-k655q\") pod \"placement-8fd1-account-create-mx8jn\" (UID: \"7ea781a5-9df7-4e60-909b-0e437f4c512b\") " pod="openstack/placement-8fd1-account-create-mx8jn" Oct 13 17:39:17 crc kubenswrapper[4720]: I1013 17:39:17.435362 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Oct 13 17:39:17 crc kubenswrapper[4720]: W1013 17:39:17.446796 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2ffe28a4_ed4a_44c6_b982_501575dd907d.slice/crio-a885fb4cb0c6aba094025da0c237ad6260e54c7e7d8a97435a4feab0bf38d61a WatchSource:0}: Error finding container a885fb4cb0c6aba094025da0c237ad6260e54c7e7d8a97435a4feab0bf38d61a: Status 404 returned error can't find the container with id a885fb4cb0c6aba094025da0c237ad6260e54c7e7d8a97435a4feab0bf38d61a Oct 13 17:39:17 crc kubenswrapper[4720]: I1013 17:39:17.458142 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-3f72-account-create-gcnc4" Oct 13 17:39:17 crc kubenswrapper[4720]: I1013 17:39:17.489179 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k655q\" (UniqueName: \"kubernetes.io/projected/7ea781a5-9df7-4e60-909b-0e437f4c512b-kube-api-access-k655q\") pod \"placement-8fd1-account-create-mx8jn\" (UID: \"7ea781a5-9df7-4e60-909b-0e437f4c512b\") " pod="openstack/placement-8fd1-account-create-mx8jn" Oct 13 17:39:17 crc kubenswrapper[4720]: I1013 17:39:17.509749 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k655q\" (UniqueName: \"kubernetes.io/projected/7ea781a5-9df7-4e60-909b-0e437f4c512b-kube-api-access-k655q\") pod \"placement-8fd1-account-create-mx8jn\" (UID: \"7ea781a5-9df7-4e60-909b-0e437f4c512b\") " pod="openstack/placement-8fd1-account-create-mx8jn" Oct 13 17:39:17 crc kubenswrapper[4720]: I1013 17:39:17.643396 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-8fd1-account-create-mx8jn" Oct 13 17:39:17 crc kubenswrapper[4720]: I1013 17:39:17.695865 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-a6b5-account-create-s5d2r"] Oct 13 17:39:17 crc kubenswrapper[4720]: I1013 17:39:17.697102 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-a6b5-account-create-s5d2r" Oct 13 17:39:17 crc kubenswrapper[4720]: I1013 17:39:17.701870 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Oct 13 17:39:17 crc kubenswrapper[4720]: I1013 17:39:17.713059 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-a6b5-account-create-s5d2r"] Oct 13 17:39:17 crc kubenswrapper[4720]: I1013 17:39:17.720364 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-3f72-account-create-gcnc4"] Oct 13 17:39:17 crc kubenswrapper[4720]: W1013 17:39:17.725580 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod360a98a6_bd5c_403b_a307_81dbb7396962.slice/crio-6c397b819af2aa6d1033bb7126162f9befd8ff35d8c195bcd5f14cc73b44cd23 WatchSource:0}: Error finding container 6c397b819af2aa6d1033bb7126162f9befd8ff35d8c195bcd5f14cc73b44cd23: Status 404 returned error can't find the container with id 6c397b819af2aa6d1033bb7126162f9befd8ff35d8c195bcd5f14cc73b44cd23 Oct 13 17:39:17 crc kubenswrapper[4720]: I1013 17:39:17.793770 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvtcg\" (UniqueName: \"kubernetes.io/projected/222effe0-fa9c-41a0-ae72-6ce0248ee52d-kube-api-access-dvtcg\") pod \"glance-a6b5-account-create-s5d2r\" (UID: \"222effe0-fa9c-41a0-ae72-6ce0248ee52d\") " pod="openstack/glance-a6b5-account-create-s5d2r" Oct 13 17:39:17 crc kubenswrapper[4720]: I1013 17:39:17.895509 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvtcg\" (UniqueName: \"kubernetes.io/projected/222effe0-fa9c-41a0-ae72-6ce0248ee52d-kube-api-access-dvtcg\") pod \"glance-a6b5-account-create-s5d2r\" (UID: \"222effe0-fa9c-41a0-ae72-6ce0248ee52d\") " pod="openstack/glance-a6b5-account-create-s5d2r" Oct 13 17:39:17 crc kubenswrapper[4720]: I1013 17:39:17.915745 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvtcg\" (UniqueName: \"kubernetes.io/projected/222effe0-fa9c-41a0-ae72-6ce0248ee52d-kube-api-access-dvtcg\") pod \"glance-a6b5-account-create-s5d2r\" (UID: \"222effe0-fa9c-41a0-ae72-6ce0248ee52d\") " pod="openstack/glance-a6b5-account-create-s5d2r" Oct 13 17:39:18 crc kubenswrapper[4720]: I1013 17:39:18.121547 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-8fd1-account-create-mx8jn"] Oct 13 17:39:18 crc kubenswrapper[4720]: I1013 17:39:18.131144 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-a6b5-account-create-s5d2r" Oct 13 17:39:18 crc kubenswrapper[4720]: W1013 17:39:18.132399 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ea781a5_9df7_4e60_909b_0e437f4c512b.slice/crio-657c59d2394f6232503222da316a3c8d38513b01fc9e5314449c8c452327e22b WatchSource:0}: Error finding container 657c59d2394f6232503222da316a3c8d38513b01fc9e5314449c8c452327e22b: Status 404 returned error can't find the container with id 657c59d2394f6232503222da316a3c8d38513b01fc9e5314449c8c452327e22b Oct 13 17:39:18 crc kubenswrapper[4720]: E1013 17:39:18.170818 4720 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod360a98a6_bd5c_403b_a307_81dbb7396962.slice/crio-conmon-52f08e812059eb073cacc07c4dc78f199d4c5ab92775a2c50e87ad26c4841368.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod360a98a6_bd5c_403b_a307_81dbb7396962.slice/crio-52f08e812059eb073cacc07c4dc78f199d4c5ab92775a2c50e87ad26c4841368.scope\": RecentStats: unable to find data in memory cache]" Oct 13 17:39:18 crc kubenswrapper[4720]: I1013 17:39:18.443787 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2ffe28a4-ed4a-44c6-b982-501575dd907d","Type":"ContainerStarted","Data":"a885fb4cb0c6aba094025da0c237ad6260e54c7e7d8a97435a4feab0bf38d61a"} Oct 13 17:39:18 crc kubenswrapper[4720]: I1013 17:39:18.445167 4720 generic.go:334] "Generic (PLEG): container finished" podID="360a98a6-bd5c-403b-a307-81dbb7396962" containerID="52f08e812059eb073cacc07c4dc78f199d4c5ab92775a2c50e87ad26c4841368" exitCode=0 Oct 13 17:39:18 crc kubenswrapper[4720]: I1013 17:39:18.445278 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-3f72-account-create-gcnc4" event={"ID":"360a98a6-bd5c-403b-a307-81dbb7396962","Type":"ContainerDied","Data":"52f08e812059eb073cacc07c4dc78f199d4c5ab92775a2c50e87ad26c4841368"} Oct 13 17:39:18 crc kubenswrapper[4720]: I1013 17:39:18.445303 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-3f72-account-create-gcnc4" event={"ID":"360a98a6-bd5c-403b-a307-81dbb7396962","Type":"ContainerStarted","Data":"6c397b819af2aa6d1033bb7126162f9befd8ff35d8c195bcd5f14cc73b44cd23"} Oct 13 17:39:18 crc kubenswrapper[4720]: I1013 17:39:18.446200 4720 generic.go:334] "Generic (PLEG): container finished" podID="7ea781a5-9df7-4e60-909b-0e437f4c512b" containerID="12392550598dfae26f34032162894283cd0ec232dcc69efa27258e452f1eaec7" exitCode=0 Oct 13 17:39:18 crc kubenswrapper[4720]: I1013 17:39:18.446246 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8fd1-account-create-mx8jn" event={"ID":"7ea781a5-9df7-4e60-909b-0e437f4c512b","Type":"ContainerDied","Data":"12392550598dfae26f34032162894283cd0ec232dcc69efa27258e452f1eaec7"} Oct 13 17:39:18 crc kubenswrapper[4720]: I1013 17:39:18.446301 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8fd1-account-create-mx8jn" event={"ID":"7ea781a5-9df7-4e60-909b-0e437f4c512b","Type":"ContainerStarted","Data":"657c59d2394f6232503222da316a3c8d38513b01fc9e5314449c8c452327e22b"} Oct 13 17:39:18 crc kubenswrapper[4720]: I1013 17:39:18.612163 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-a6b5-account-create-s5d2r"] Oct 13 
17:39:19 crc kubenswrapper[4720]: I1013 17:39:19.461997 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2ffe28a4-ed4a-44c6-b982-501575dd907d","Type":"ContainerStarted","Data":"9eaa4ffe53bd957f806ca526f2ae6fef230a0d4815eda346df8dae9e7f7329b9"} Oct 13 17:39:19 crc kubenswrapper[4720]: I1013 17:39:19.462074 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2ffe28a4-ed4a-44c6-b982-501575dd907d","Type":"ContainerStarted","Data":"2cb466bda1617d985ae7135c002a70473c3669da57ed669b73eb943c18190517"} Oct 13 17:39:19 crc kubenswrapper[4720]: I1013 17:39:19.462096 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2ffe28a4-ed4a-44c6-b982-501575dd907d","Type":"ContainerStarted","Data":"5b9f56346624d3e96200257b014ec40746a61ee0b9d679fc800e299abf97c328"} Oct 13 17:39:19 crc kubenswrapper[4720]: I1013 17:39:19.462113 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2ffe28a4-ed4a-44c6-b982-501575dd907d","Type":"ContainerStarted","Data":"3bcedf83dfa82b2fbbfa773912a44b910f2b693674cdf6b944d895c34fa9e07f"} Oct 13 17:39:19 crc kubenswrapper[4720]: I1013 17:39:19.464618 4720 generic.go:334] "Generic (PLEG): container finished" podID="222effe0-fa9c-41a0-ae72-6ce0248ee52d" containerID="6a868f9ad6a67818916bb65e34ef887f4a262fd33b32889d9ba95be655d048e3" exitCode=0 Oct 13 17:39:19 crc kubenswrapper[4720]: I1013 17:39:19.465394 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-a6b5-account-create-s5d2r" event={"ID":"222effe0-fa9c-41a0-ae72-6ce0248ee52d","Type":"ContainerDied","Data":"6a868f9ad6a67818916bb65e34ef887f4a262fd33b32889d9ba95be655d048e3"} Oct 13 17:39:19 crc kubenswrapper[4720]: I1013 17:39:19.465438 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-a6b5-account-create-s5d2r" event={"ID":"222effe0-fa9c-41a0-ae72-6ce0248ee52d","Type":"ContainerStarted","Data":"16d9c1ef5262f53fe3a555059e3bc191645dce6f24f3149ebaa4cda9a0db3223"} Oct 13 17:39:19 crc kubenswrapper[4720]: I1013 17:39:19.914440 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-3f72-account-create-gcnc4" Oct 13 17:39:19 crc kubenswrapper[4720]: I1013 17:39:19.921614 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-8fd1-account-create-mx8jn" Oct 13 17:39:20 crc kubenswrapper[4720]: I1013 17:39:20.030717 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ksjv9\" (UniqueName: \"kubernetes.io/projected/360a98a6-bd5c-403b-a307-81dbb7396962-kube-api-access-ksjv9\") pod \"360a98a6-bd5c-403b-a307-81dbb7396962\" (UID: \"360a98a6-bd5c-403b-a307-81dbb7396962\") " Oct 13 17:39:20 crc kubenswrapper[4720]: I1013 17:39:20.030784 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k655q\" (UniqueName: \"kubernetes.io/projected/7ea781a5-9df7-4e60-909b-0e437f4c512b-kube-api-access-k655q\") pod \"7ea781a5-9df7-4e60-909b-0e437f4c512b\" (UID: \"7ea781a5-9df7-4e60-909b-0e437f4c512b\") " Oct 13 17:39:20 crc kubenswrapper[4720]: I1013 17:39:20.037263 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/360a98a6-bd5c-403b-a307-81dbb7396962-kube-api-access-ksjv9" (OuterVolumeSpecName: "kube-api-access-ksjv9") pod "360a98a6-bd5c-403b-a307-81dbb7396962" (UID: "360a98a6-bd5c-403b-a307-81dbb7396962"). InnerVolumeSpecName "kube-api-access-ksjv9". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:39:20 crc kubenswrapper[4720]: I1013 17:39:20.043439 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ea781a5-9df7-4e60-909b-0e437f4c512b-kube-api-access-k655q" (OuterVolumeSpecName: "kube-api-access-k655q") pod "7ea781a5-9df7-4e60-909b-0e437f4c512b" (UID: "7ea781a5-9df7-4e60-909b-0e437f4c512b"). InnerVolumeSpecName "kube-api-access-k655q". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:39:20 crc kubenswrapper[4720]: I1013 17:39:20.132480 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ksjv9\" (UniqueName: \"kubernetes.io/projected/360a98a6-bd5c-403b-a307-81dbb7396962-kube-api-access-ksjv9\") on node \"crc\" DevicePath \"\"" Oct 13 17:39:20 crc kubenswrapper[4720]: I1013 17:39:20.132518 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k655q\" (UniqueName: \"kubernetes.io/projected/7ea781a5-9df7-4e60-909b-0e437f4c512b-kube-api-access-k655q\") on node \"crc\" DevicePath \"\"" Oct 13 17:39:20 crc kubenswrapper[4720]: I1013 17:39:20.475411 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-3f72-account-create-gcnc4" event={"ID":"360a98a6-bd5c-403b-a307-81dbb7396962","Type":"ContainerDied","Data":"6c397b819af2aa6d1033bb7126162f9befd8ff35d8c195bcd5f14cc73b44cd23"} Oct 13 17:39:20 crc kubenswrapper[4720]: I1013 17:39:20.475821 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c397b819af2aa6d1033bb7126162f9befd8ff35d8c195bcd5f14cc73b44cd23" Oct 13 17:39:20 crc kubenswrapper[4720]: I1013 17:39:20.475468 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-3f72-account-create-gcnc4" Oct 13 17:39:20 crc kubenswrapper[4720]: I1013 17:39:20.477968 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8fd1-account-create-mx8jn" event={"ID":"7ea781a5-9df7-4e60-909b-0e437f4c512b","Type":"ContainerDied","Data":"657c59d2394f6232503222da316a3c8d38513b01fc9e5314449c8c452327e22b"} Oct 13 17:39:20 crc kubenswrapper[4720]: I1013 17:39:20.478018 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-8fd1-account-create-mx8jn" Oct 13 17:39:20 crc kubenswrapper[4720]: I1013 17:39:20.478020 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="657c59d2394f6232503222da316a3c8d38513b01fc9e5314449c8c452327e22b" Oct 13 17:39:21 crc kubenswrapper[4720]: I1013 17:39:21.490662 4720 generic.go:334] "Generic (PLEG): container finished" podID="76c17d7a-8441-4b23-839b-f95ac54a6b24" containerID="b82320f15b735d12b0964ce3b17f86b2a98cb4eabc1eed44f6074ca7c0dcf4e4" exitCode=0 Oct 13 17:39:21 crc kubenswrapper[4720]: I1013 17:39:21.490724 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"76c17d7a-8441-4b23-839b-f95ac54a6b24","Type":"ContainerDied","Data":"b82320f15b735d12b0964ce3b17f86b2a98cb4eabc1eed44f6074ca7c0dcf4e4"} Oct 13 17:39:21 crc kubenswrapper[4720]: I1013 17:39:21.494655 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-a6b5-account-create-s5d2r" event={"ID":"222effe0-fa9c-41a0-ae72-6ce0248ee52d","Type":"ContainerDied","Data":"16d9c1ef5262f53fe3a555059e3bc191645dce6f24f3149ebaa4cda9a0db3223"} Oct 13 17:39:21 crc kubenswrapper[4720]: I1013 17:39:21.494676 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16d9c1ef5262f53fe3a555059e3bc191645dce6f24f3149ebaa4cda9a0db3223" Oct 13 17:39:21 crc kubenswrapper[4720]: I1013 17:39:21.676913 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-a6b5-account-create-s5d2r" Oct 13 17:39:21 crc kubenswrapper[4720]: I1013 17:39:21.801022 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvtcg\" (UniqueName: \"kubernetes.io/projected/222effe0-fa9c-41a0-ae72-6ce0248ee52d-kube-api-access-dvtcg\") pod \"222effe0-fa9c-41a0-ae72-6ce0248ee52d\" (UID: \"222effe0-fa9c-41a0-ae72-6ce0248ee52d\") " Oct 13 17:39:21 crc kubenswrapper[4720]: I1013 17:39:21.819832 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/222effe0-fa9c-41a0-ae72-6ce0248ee52d-kube-api-access-dvtcg" (OuterVolumeSpecName: "kube-api-access-dvtcg") pod "222effe0-fa9c-41a0-ae72-6ce0248ee52d" (UID: "222effe0-fa9c-41a0-ae72-6ce0248ee52d"). InnerVolumeSpecName "kube-api-access-dvtcg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:39:21 crc kubenswrapper[4720]: I1013 17:39:21.903740 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dvtcg\" (UniqueName: \"kubernetes.io/projected/222effe0-fa9c-41a0-ae72-6ce0248ee52d-kube-api-access-dvtcg\") on node \"crc\" DevicePath \"\"" Oct 13 17:39:22 crc kubenswrapper[4720]: I1013 17:39:22.506255 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2ffe28a4-ed4a-44c6-b982-501575dd907d","Type":"ContainerStarted","Data":"193d805116786bb482b027d49980b9d40519a91435538375d1a44a91111b2294"} Oct 13 17:39:22 crc kubenswrapper[4720]: I1013 17:39:22.506529 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2ffe28a4-ed4a-44c6-b982-501575dd907d","Type":"ContainerStarted","Data":"a1e599df960c78a77a71d286f90e6e40fa179dabfba5f6edbcab1a1170c74d6d"} Oct 13 17:39:22 crc kubenswrapper[4720]: I1013 17:39:22.506544 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2ffe28a4-ed4a-44c6-b982-501575dd907d","Type":"ContainerStarted","Data":"dc89f5d8aa73574d70ed3526b4b2f22057cec8265ab5dae35eb67132c7ac32bf"} Oct 13 17:39:22 crc kubenswrapper[4720]: I1013 17:39:22.506559 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2ffe28a4-ed4a-44c6-b982-501575dd907d","Type":"ContainerStarted","Data":"41e7be3dafcb719b4148e8b67f8c4aa277216193a4e27ae6397c6c6d22464d36"} Oct 13 17:39:22 crc kubenswrapper[4720]: I1013 17:39:22.508229 4720 generic.go:334] "Generic (PLEG): container finished" podID="af59309d-fcea-47ce-85b5-0eafbf780d08" containerID="cbc606f7c5761bcc6663e0a2f8f1346ba2ab48d02354a5f22b7a9a7e5ee7cad0" exitCode=0 Oct 13 17:39:22 crc kubenswrapper[4720]: I1013 17:39:22.508285 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"af59309d-fcea-47ce-85b5-0eafbf780d08","Type":"ContainerDied","Data":"cbc606f7c5761bcc6663e0a2f8f1346ba2ab48d02354a5f22b7a9a7e5ee7cad0"} Oct 13 17:39:22 crc kubenswrapper[4720]: I1013 17:39:22.516217 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-a6b5-account-create-s5d2r" Oct 13 17:39:22 crc kubenswrapper[4720]: I1013 17:39:22.516217 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"76c17d7a-8441-4b23-839b-f95ac54a6b24","Type":"ContainerStarted","Data":"31c995078368b5b95f3c3877081d2472920be4f1127b2d1afc577d2f63a1c6c9"} Oct 13 17:39:22 crc kubenswrapper[4720]: I1013 17:39:22.517691 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Oct 13 17:39:22 crc kubenswrapper[4720]: I1013 17:39:22.625217 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=49.887754123 podStartE2EDuration="1m0.625174745s" podCreationTimestamp="2025-10-13 17:38:22 +0000 UTC" firstStartedPulling="2025-10-13 17:38:36.641712561 +0000 UTC m=+862.098962693" lastFinishedPulling="2025-10-13 17:38:47.379133183 +0000 UTC m=+872.836383315" observedRunningTime="2025-10-13 17:39:22.593207302 +0000 UTC m=+908.050457444" watchObservedRunningTime="2025-10-13 17:39:22.625174745 +0000 UTC m=+908.082424927" Oct 13 17:39:22 crc kubenswrapper[4720]: I1013 17:39:22.969136 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-99x2r"] Oct 13 17:39:22 crc kubenswrapper[4720]: E1013 17:39:22.969457 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="360a98a6-bd5c-403b-a307-81dbb7396962" containerName="mariadb-account-create" Oct 13 17:39:22 crc kubenswrapper[4720]: I1013 17:39:22.969475 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="360a98a6-bd5c-403b-a307-81dbb7396962" containerName="mariadb-account-create" Oct 13 17:39:22 crc kubenswrapper[4720]: E1013 17:39:22.969487 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="222effe0-fa9c-41a0-ae72-6ce0248ee52d" containerName="mariadb-account-create" Oct 13 17:39:22 crc kubenswrapper[4720]: I1013 17:39:22.969494 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="222effe0-fa9c-41a0-ae72-6ce0248ee52d" containerName="mariadb-account-create" Oct 13 17:39:22 crc kubenswrapper[4720]: E1013 17:39:22.969503 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ea781a5-9df7-4e60-909b-0e437f4c512b" containerName="mariadb-account-create" Oct 13 17:39:22 crc kubenswrapper[4720]: I1013 17:39:22.969509 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ea781a5-9df7-4e60-909b-0e437f4c512b" containerName="mariadb-account-create" Oct 13 17:39:22 crc kubenswrapper[4720]: I1013 17:39:22.969680 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ea781a5-9df7-4e60-909b-0e437f4c512b" containerName="mariadb-account-create" Oct 13 17:39:22 crc kubenswrapper[4720]: I1013 17:39:22.969719 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="222effe0-fa9c-41a0-ae72-6ce0248ee52d" containerName="mariadb-account-create" Oct 13 17:39:22 crc kubenswrapper[4720]: I1013 17:39:22.969735 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="360a98a6-bd5c-403b-a307-81dbb7396962" containerName="mariadb-account-create" Oct 13 17:39:22 crc kubenswrapper[4720]: I1013 17:39:22.970279 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-99x2r" Oct 13 17:39:22 crc kubenswrapper[4720]: I1013 17:39:22.972235 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-fwlrr" Oct 13 17:39:22 crc kubenswrapper[4720]: I1013 17:39:22.972396 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Oct 13 17:39:22 crc kubenswrapper[4720]: I1013 17:39:22.991154 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-99x2r"] Oct 13 17:39:23 crc kubenswrapper[4720]: I1013 17:39:23.026879 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bl9lk\" (UniqueName: \"kubernetes.io/projected/b035e5a0-e44b-4897-bcfa-c7112b8eee2d-kube-api-access-bl9lk\") pod \"glance-db-sync-99x2r\" (UID: \"b035e5a0-e44b-4897-bcfa-c7112b8eee2d\") " pod="openstack/glance-db-sync-99x2r" Oct 13 17:39:23 crc kubenswrapper[4720]: I1013 17:39:23.027004 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b035e5a0-e44b-4897-bcfa-c7112b8eee2d-combined-ca-bundle\") pod \"glance-db-sync-99x2r\" (UID: \"b035e5a0-e44b-4897-bcfa-c7112b8eee2d\") " pod="openstack/glance-db-sync-99x2r" Oct 13 17:39:23 crc kubenswrapper[4720]: I1013 17:39:23.027121 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b035e5a0-e44b-4897-bcfa-c7112b8eee2d-config-data\") pod \"glance-db-sync-99x2r\" (UID: \"b035e5a0-e44b-4897-bcfa-c7112b8eee2d\") " pod="openstack/glance-db-sync-99x2r" Oct 13 17:39:23 crc kubenswrapper[4720]: I1013 17:39:23.027376 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b035e5a0-e44b-4897-bcfa-c7112b8eee2d-db-sync-config-data\") pod \"glance-db-sync-99x2r\" (UID: \"b035e5a0-e44b-4897-bcfa-c7112b8eee2d\") " pod="openstack/glance-db-sync-99x2r" Oct 13 17:39:23 crc kubenswrapper[4720]: I1013 17:39:23.125792 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-vbc6h" podUID="283c0b58-d0a1-4cf1-af87-3859306c4a60" containerName="ovn-controller" probeResult="failure" output=< Oct 13 17:39:23 crc kubenswrapper[4720]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Oct 13 17:39:23 crc kubenswrapper[4720]: > Oct 13 17:39:23 crc kubenswrapper[4720]: I1013 17:39:23.129014 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b035e5a0-e44b-4897-bcfa-c7112b8eee2d-combined-ca-bundle\") pod \"glance-db-sync-99x2r\" (UID: \"b035e5a0-e44b-4897-bcfa-c7112b8eee2d\") " pod="openstack/glance-db-sync-99x2r" Oct 13 17:39:23 crc kubenswrapper[4720]: I1013 17:39:23.129088 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b035e5a0-e44b-4897-bcfa-c7112b8eee2d-config-data\") pod \"glance-db-sync-99x2r\" (UID: \"b035e5a0-e44b-4897-bcfa-c7112b8eee2d\") " pod="openstack/glance-db-sync-99x2r" Oct 13 17:39:23 crc kubenswrapper[4720]: I1013 17:39:23.129239 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/b035e5a0-e44b-4897-bcfa-c7112b8eee2d-db-sync-config-data\") pod \"glance-db-sync-99x2r\" (UID: \"b035e5a0-e44b-4897-bcfa-c7112b8eee2d\") " pod="openstack/glance-db-sync-99x2r" Oct 13 17:39:23 crc kubenswrapper[4720]: I1013 17:39:23.129328 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bl9lk\" (UniqueName: \"kubernetes.io/projected/b035e5a0-e44b-4897-bcfa-c7112b8eee2d-kube-api-access-bl9lk\") pod \"glance-db-sync-99x2r\" (UID: \"b035e5a0-e44b-4897-bcfa-c7112b8eee2d\") " pod="openstack/glance-db-sync-99x2r" Oct 13 17:39:23 crc kubenswrapper[4720]: I1013 17:39:23.133916 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b035e5a0-e44b-4897-bcfa-c7112b8eee2d-db-sync-config-data\") pod \"glance-db-sync-99x2r\" (UID: \"b035e5a0-e44b-4897-bcfa-c7112b8eee2d\") " pod="openstack/glance-db-sync-99x2r" Oct 13 17:39:23 crc kubenswrapper[4720]: I1013 17:39:23.134759 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b035e5a0-e44b-4897-bcfa-c7112b8eee2d-config-data\") pod \"glance-db-sync-99x2r\" (UID: \"b035e5a0-e44b-4897-bcfa-c7112b8eee2d\") " pod="openstack/glance-db-sync-99x2r" Oct 13 17:39:23 crc kubenswrapper[4720]: I1013 17:39:23.146877 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b035e5a0-e44b-4897-bcfa-c7112b8eee2d-combined-ca-bundle\") pod \"glance-db-sync-99x2r\" (UID: \"b035e5a0-e44b-4897-bcfa-c7112b8eee2d\") " pod="openstack/glance-db-sync-99x2r" Oct 13 17:39:23 crc kubenswrapper[4720]: I1013 17:39:23.161070 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bl9lk\" (UniqueName: \"kubernetes.io/projected/b035e5a0-e44b-4897-bcfa-c7112b8eee2d-kube-api-access-bl9lk\") pod \"glance-db-sync-99x2r\" (UID: \"b035e5a0-e44b-4897-bcfa-c7112b8eee2d\") " pod="openstack/glance-db-sync-99x2r" Oct 13 17:39:23 crc kubenswrapper[4720]: I1013 17:39:23.228506 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-cz99q" Oct 13 17:39:23 crc kubenswrapper[4720]: I1013 17:39:23.230759 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-cz99q" Oct 13 17:39:23 crc kubenswrapper[4720]: I1013 17:39:23.286691 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-99x2r" Oct 13 17:39:23 crc kubenswrapper[4720]: I1013 17:39:23.466726 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-vbc6h-config-hknqn"] Oct 13 17:39:23 crc kubenswrapper[4720]: I1013 17:39:23.471509 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-vbc6h-config-hknqn" Oct 13 17:39:23 crc kubenswrapper[4720]: I1013 17:39:23.473178 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Oct 13 17:39:23 crc kubenswrapper[4720]: I1013 17:39:23.473444 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-vbc6h-config-hknqn"] Oct 13 17:39:23 crc kubenswrapper[4720]: I1013 17:39:23.529809 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"af59309d-fcea-47ce-85b5-0eafbf780d08","Type":"ContainerStarted","Data":"eae200ee8e3ab7a84a4b212ea54a0bc79c87bfdac4395df822709711c402e84c"} Oct 13 17:39:23 crc kubenswrapper[4720]: I1013 17:39:23.534427 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/fbe4ea8c-bec5-4adb-a4de-1c6c0fc4aa1b-var-run\") pod \"ovn-controller-vbc6h-config-hknqn\" (UID: \"fbe4ea8c-bec5-4adb-a4de-1c6c0fc4aa1b\") " pod="openstack/ovn-controller-vbc6h-config-hknqn" Oct 13 17:39:23 crc kubenswrapper[4720]: I1013 17:39:23.534515 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fbe4ea8c-bec5-4adb-a4de-1c6c0fc4aa1b-scripts\") pod \"ovn-controller-vbc6h-config-hknqn\" (UID: \"fbe4ea8c-bec5-4adb-a4de-1c6c0fc4aa1b\") " pod="openstack/ovn-controller-vbc6h-config-hknqn" Oct 13 17:39:23 crc kubenswrapper[4720]: I1013 17:39:23.534564 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/fbe4ea8c-bec5-4adb-a4de-1c6c0fc4aa1b-var-log-ovn\") pod \"ovn-controller-vbc6h-config-hknqn\" (UID: \"fbe4ea8c-bec5-4adb-a4de-1c6c0fc4aa1b\") " pod="openstack/ovn-controller-vbc6h-config-hknqn" Oct 13 17:39:23 crc kubenswrapper[4720]: I1013 17:39:23.534663 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/fbe4ea8c-bec5-4adb-a4de-1c6c0fc4aa1b-var-run-ovn\") pod \"ovn-controller-vbc6h-config-hknqn\" (UID: \"fbe4ea8c-bec5-4adb-a4de-1c6c0fc4aa1b\") " pod="openstack/ovn-controller-vbc6h-config-hknqn" Oct 13 17:39:23 crc kubenswrapper[4720]: I1013 17:39:23.534752 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/fbe4ea8c-bec5-4adb-a4de-1c6c0fc4aa1b-additional-scripts\") pod \"ovn-controller-vbc6h-config-hknqn\" (UID: \"fbe4ea8c-bec5-4adb-a4de-1c6c0fc4aa1b\") " pod="openstack/ovn-controller-vbc6h-config-hknqn" Oct 13 17:39:23 crc kubenswrapper[4720]: I1013 17:39:23.534779 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gv5v\" (UniqueName: \"kubernetes.io/projected/fbe4ea8c-bec5-4adb-a4de-1c6c0fc4aa1b-kube-api-access-8gv5v\") pod \"ovn-controller-vbc6h-config-hknqn\" (UID: \"fbe4ea8c-bec5-4adb-a4de-1c6c0fc4aa1b\") " pod="openstack/ovn-controller-vbc6h-config-hknqn" Oct 13 17:39:23 crc kubenswrapper[4720]: I1013 17:39:23.563948 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=50.338564572 podStartE2EDuration="1m1.563927239s" podCreationTimestamp="2025-10-13 17:38:22 +0000 UTC" firstStartedPulling="2025-10-13 
17:38:35.961546671 +0000 UTC m=+861.418796803" lastFinishedPulling="2025-10-13 17:38:47.186909328 +0000 UTC m=+872.644159470" observedRunningTime="2025-10-13 17:39:23.556794165 +0000 UTC m=+909.014044327" watchObservedRunningTime="2025-10-13 17:39:23.563927239 +0000 UTC m=+909.021177391" Oct 13 17:39:23 crc kubenswrapper[4720]: I1013 17:39:23.635844 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/fbe4ea8c-bec5-4adb-a4de-1c6c0fc4aa1b-var-run-ovn\") pod \"ovn-controller-vbc6h-config-hknqn\" (UID: \"fbe4ea8c-bec5-4adb-a4de-1c6c0fc4aa1b\") " pod="openstack/ovn-controller-vbc6h-config-hknqn" Oct 13 17:39:23 crc kubenswrapper[4720]: I1013 17:39:23.635919 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/fbe4ea8c-bec5-4adb-a4de-1c6c0fc4aa1b-additional-scripts\") pod \"ovn-controller-vbc6h-config-hknqn\" (UID: \"fbe4ea8c-bec5-4adb-a4de-1c6c0fc4aa1b\") " pod="openstack/ovn-controller-vbc6h-config-hknqn" Oct 13 17:39:23 crc kubenswrapper[4720]: I1013 17:39:23.635946 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gv5v\" (UniqueName: \"kubernetes.io/projected/fbe4ea8c-bec5-4adb-a4de-1c6c0fc4aa1b-kube-api-access-8gv5v\") pod \"ovn-controller-vbc6h-config-hknqn\" (UID: \"fbe4ea8c-bec5-4adb-a4de-1c6c0fc4aa1b\") " pod="openstack/ovn-controller-vbc6h-config-hknqn" Oct 13 17:39:23 crc kubenswrapper[4720]: I1013 17:39:23.636165 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/fbe4ea8c-bec5-4adb-a4de-1c6c0fc4aa1b-var-run\") pod \"ovn-controller-vbc6h-config-hknqn\" (UID: \"fbe4ea8c-bec5-4adb-a4de-1c6c0fc4aa1b\") " pod="openstack/ovn-controller-vbc6h-config-hknqn" Oct 13 17:39:23 crc kubenswrapper[4720]: I1013 17:39:23.636224 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fbe4ea8c-bec5-4adb-a4de-1c6c0fc4aa1b-scripts\") pod \"ovn-controller-vbc6h-config-hknqn\" (UID: \"fbe4ea8c-bec5-4adb-a4de-1c6c0fc4aa1b\") " pod="openstack/ovn-controller-vbc6h-config-hknqn" Oct 13 17:39:23 crc kubenswrapper[4720]: I1013 17:39:23.636241 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/fbe4ea8c-bec5-4adb-a4de-1c6c0fc4aa1b-var-log-ovn\") pod \"ovn-controller-vbc6h-config-hknqn\" (UID: \"fbe4ea8c-bec5-4adb-a4de-1c6c0fc4aa1b\") " pod="openstack/ovn-controller-vbc6h-config-hknqn" Oct 13 17:39:23 crc kubenswrapper[4720]: I1013 17:39:23.636470 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/fbe4ea8c-bec5-4adb-a4de-1c6c0fc4aa1b-var-log-ovn\") pod \"ovn-controller-vbc6h-config-hknqn\" (UID: \"fbe4ea8c-bec5-4adb-a4de-1c6c0fc4aa1b\") " pod="openstack/ovn-controller-vbc6h-config-hknqn" Oct 13 17:39:23 crc kubenswrapper[4720]: I1013 17:39:23.637218 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/fbe4ea8c-bec5-4adb-a4de-1c6c0fc4aa1b-var-run-ovn\") pod \"ovn-controller-vbc6h-config-hknqn\" (UID: \"fbe4ea8c-bec5-4adb-a4de-1c6c0fc4aa1b\") " pod="openstack/ovn-controller-vbc6h-config-hknqn" Oct 13 17:39:23 crc kubenswrapper[4720]: I1013 17:39:23.637466 4720 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/fbe4ea8c-bec5-4adb-a4de-1c6c0fc4aa1b-var-run\") pod \"ovn-controller-vbc6h-config-hknqn\" (UID: \"fbe4ea8c-bec5-4adb-a4de-1c6c0fc4aa1b\") " pod="openstack/ovn-controller-vbc6h-config-hknqn" Oct 13 17:39:23 crc kubenswrapper[4720]: I1013 17:39:23.637926 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/fbe4ea8c-bec5-4adb-a4de-1c6c0fc4aa1b-additional-scripts\") pod \"ovn-controller-vbc6h-config-hknqn\" (UID: \"fbe4ea8c-bec5-4adb-a4de-1c6c0fc4aa1b\") " pod="openstack/ovn-controller-vbc6h-config-hknqn" Oct 13 17:39:23 crc kubenswrapper[4720]: I1013 17:39:23.639459 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fbe4ea8c-bec5-4adb-a4de-1c6c0fc4aa1b-scripts\") pod \"ovn-controller-vbc6h-config-hknqn\" (UID: \"fbe4ea8c-bec5-4adb-a4de-1c6c0fc4aa1b\") " pod="openstack/ovn-controller-vbc6h-config-hknqn" Oct 13 17:39:23 crc kubenswrapper[4720]: I1013 17:39:23.673251 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Oct 13 17:39:23 crc kubenswrapper[4720]: I1013 17:39:23.679391 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gv5v\" (UniqueName: \"kubernetes.io/projected/fbe4ea8c-bec5-4adb-a4de-1c6c0fc4aa1b-kube-api-access-8gv5v\") pod \"ovn-controller-vbc6h-config-hknqn\" (UID: \"fbe4ea8c-bec5-4adb-a4de-1c6c0fc4aa1b\") " pod="openstack/ovn-controller-vbc6h-config-hknqn" Oct 13 17:39:23 crc kubenswrapper[4720]: I1013 17:39:23.788379 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-vbc6h-config-hknqn" Oct 13 17:39:24 crc kubenswrapper[4720]: I1013 17:39:24.113987 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-99x2r"] Oct 13 17:39:24 crc kubenswrapper[4720]: I1013 17:39:24.334177 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-vbc6h-config-hknqn"] Oct 13 17:39:24 crc kubenswrapper[4720]: W1013 17:39:24.341214 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfbe4ea8c_bec5_4adb_a4de_1c6c0fc4aa1b.slice/crio-1638ed61ea0cb9c0701c973e89bb08fe72c17cdc818130cc869045edd9be0897 WatchSource:0}: Error finding container 1638ed61ea0cb9c0701c973e89bb08fe72c17cdc818130cc869045edd9be0897: Status 404 returned error can't find the container with id 1638ed61ea0cb9c0701c973e89bb08fe72c17cdc818130cc869045edd9be0897 Oct 13 17:39:24 crc kubenswrapper[4720]: I1013 17:39:24.542454 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-vbc6h-config-hknqn" event={"ID":"fbe4ea8c-bec5-4adb-a4de-1c6c0fc4aa1b","Type":"ContainerStarted","Data":"1638ed61ea0cb9c0701c973e89bb08fe72c17cdc818130cc869045edd9be0897"} Oct 13 17:39:24 crc kubenswrapper[4720]: I1013 17:39:24.549632 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2ffe28a4-ed4a-44c6-b982-501575dd907d","Type":"ContainerStarted","Data":"14808bf07fafbddb5382d7f8246b779aa47c89263c60cd7144a7bd6158aac2ed"} Oct 13 17:39:24 crc kubenswrapper[4720]: I1013 17:39:24.549673 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"2ffe28a4-ed4a-44c6-b982-501575dd907d","Type":"ContainerStarted","Data":"77aa4d9da31c9d876d8567ecebcb090770fda4fa841b6b846edee539559063ef"} Oct 13 17:39:24 crc kubenswrapper[4720]: I1013 17:39:24.549683 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2ffe28a4-ed4a-44c6-b982-501575dd907d","Type":"ContainerStarted","Data":"42ba1301f600e3b19a5252e50413c3b34da1d0d13c4ab6ecdfb4b9c722a74932"} Oct 13 17:39:24 crc kubenswrapper[4720]: I1013 17:39:24.549691 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2ffe28a4-ed4a-44c6-b982-501575dd907d","Type":"ContainerStarted","Data":"4d710fd1b5ae36e80b556b8360c0a70503e135fbeedab9a871456cf818fbb469"} Oct 13 17:39:24 crc kubenswrapper[4720]: I1013 17:39:24.549699 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2ffe28a4-ed4a-44c6-b982-501575dd907d","Type":"ContainerStarted","Data":"fab160bd8cdfe4d33ef77734b7510513a4ce93599354d13ccf768039242db578"} Oct 13 17:39:24 crc kubenswrapper[4720]: I1013 17:39:24.551156 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-99x2r" event={"ID":"b035e5a0-e44b-4897-bcfa-c7112b8eee2d","Type":"ContainerStarted","Data":"a67d7049c0fbd6ad1b31232134112fe87b43dea763372e3bf00d3b3f6c475827"} Oct 13 17:39:25 crc kubenswrapper[4720]: I1013 17:39:25.568079 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2ffe28a4-ed4a-44c6-b982-501575dd907d","Type":"ContainerStarted","Data":"6001198dc8f473303b83368ec3f8632fffe999bfb20f0036ee974c957d593ec7"} Oct 13 17:39:25 crc kubenswrapper[4720]: I1013 17:39:25.568502 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2ffe28a4-ed4a-44c6-b982-501575dd907d","Type":"ContainerStarted","Data":"bc519cef35d377b8af84f4937338d09bcab8cb169029ae48fb7c6a1647949393"} Oct 13 17:39:25 crc kubenswrapper[4720]: I1013 17:39:25.571046 4720 generic.go:334] "Generic (PLEG): container finished" podID="fbe4ea8c-bec5-4adb-a4de-1c6c0fc4aa1b" containerID="3872641b485d9425673af80d8a8482b6b99a69f7999cfa891a974a8fe63c1c1c" exitCode=0 Oct 13 17:39:25 crc kubenswrapper[4720]: I1013 17:39:25.571155 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-vbc6h-config-hknqn" event={"ID":"fbe4ea8c-bec5-4adb-a4de-1c6c0fc4aa1b","Type":"ContainerDied","Data":"3872641b485d9425673af80d8a8482b6b99a69f7999cfa891a974a8fe63c1c1c"} Oct 13 17:39:25 crc kubenswrapper[4720]: I1013 17:39:25.623599 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=20.450883967 podStartE2EDuration="26.623585665s" podCreationTimestamp="2025-10-13 17:38:59 +0000 UTC" firstStartedPulling="2025-10-13 17:39:17.450000473 +0000 UTC m=+902.907250605" lastFinishedPulling="2025-10-13 17:39:23.622702151 +0000 UTC m=+909.079952303" observedRunningTime="2025-10-13 17:39:25.616965375 +0000 UTC m=+911.074215527" watchObservedRunningTime="2025-10-13 17:39:25.623585665 +0000 UTC m=+911.080835797" Oct 13 17:39:25 crc kubenswrapper[4720]: I1013 17:39:25.895673 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-hqcjj"] Oct 13 17:39:25 crc kubenswrapper[4720]: I1013 17:39:25.897024 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-hqcjj" Oct 13 17:39:25 crc kubenswrapper[4720]: I1013 17:39:25.906450 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Oct 13 17:39:25 crc kubenswrapper[4720]: I1013 17:39:25.923074 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-hqcjj"] Oct 13 17:39:25 crc kubenswrapper[4720]: I1013 17:39:25.987107 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6720cae9-e687-497d-955d-53e36250c8a4-dns-svc\") pod \"dnsmasq-dns-77585f5f8c-hqcjj\" (UID: \"6720cae9-e687-497d-955d-53e36250c8a4\") " pod="openstack/dnsmasq-dns-77585f5f8c-hqcjj" Oct 13 17:39:25 crc kubenswrapper[4720]: I1013 17:39:25.987159 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6720cae9-e687-497d-955d-53e36250c8a4-dns-swift-storage-0\") pod \"dnsmasq-dns-77585f5f8c-hqcjj\" (UID: \"6720cae9-e687-497d-955d-53e36250c8a4\") " pod="openstack/dnsmasq-dns-77585f5f8c-hqcjj" Oct 13 17:39:25 crc kubenswrapper[4720]: I1013 17:39:25.987199 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6720cae9-e687-497d-955d-53e36250c8a4-ovsdbserver-sb\") pod \"dnsmasq-dns-77585f5f8c-hqcjj\" (UID: \"6720cae9-e687-497d-955d-53e36250c8a4\") " pod="openstack/dnsmasq-dns-77585f5f8c-hqcjj" Oct 13 17:39:25 crc kubenswrapper[4720]: I1013 17:39:25.987216 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6720cae9-e687-497d-955d-53e36250c8a4-ovsdbserver-nb\") pod \"dnsmasq-dns-77585f5f8c-hqcjj\" (UID: \"6720cae9-e687-497d-955d-53e36250c8a4\") " pod="openstack/dnsmasq-dns-77585f5f8c-hqcjj" Oct 13 17:39:25 crc kubenswrapper[4720]: I1013 17:39:25.987317 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnt9k\" (UniqueName: \"kubernetes.io/projected/6720cae9-e687-497d-955d-53e36250c8a4-kube-api-access-pnt9k\") pod \"dnsmasq-dns-77585f5f8c-hqcjj\" (UID: \"6720cae9-e687-497d-955d-53e36250c8a4\") " pod="openstack/dnsmasq-dns-77585f5f8c-hqcjj" Oct 13 17:39:25 crc kubenswrapper[4720]: I1013 17:39:25.987359 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6720cae9-e687-497d-955d-53e36250c8a4-config\") pod \"dnsmasq-dns-77585f5f8c-hqcjj\" (UID: \"6720cae9-e687-497d-955d-53e36250c8a4\") " pod="openstack/dnsmasq-dns-77585f5f8c-hqcjj" Oct 13 17:39:26 crc kubenswrapper[4720]: I1013 17:39:26.089271 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnt9k\" (UniqueName: \"kubernetes.io/projected/6720cae9-e687-497d-955d-53e36250c8a4-kube-api-access-pnt9k\") pod \"dnsmasq-dns-77585f5f8c-hqcjj\" (UID: \"6720cae9-e687-497d-955d-53e36250c8a4\") " pod="openstack/dnsmasq-dns-77585f5f8c-hqcjj" Oct 13 17:39:26 crc kubenswrapper[4720]: I1013 17:39:26.089687 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6720cae9-e687-497d-955d-53e36250c8a4-config\") pod \"dnsmasq-dns-77585f5f8c-hqcjj\" (UID: 
\"6720cae9-e687-497d-955d-53e36250c8a4\") " pod="openstack/dnsmasq-dns-77585f5f8c-hqcjj" Oct 13 17:39:26 crc kubenswrapper[4720]: I1013 17:39:26.089752 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6720cae9-e687-497d-955d-53e36250c8a4-dns-svc\") pod \"dnsmasq-dns-77585f5f8c-hqcjj\" (UID: \"6720cae9-e687-497d-955d-53e36250c8a4\") " pod="openstack/dnsmasq-dns-77585f5f8c-hqcjj" Oct 13 17:39:26 crc kubenswrapper[4720]: I1013 17:39:26.089790 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6720cae9-e687-497d-955d-53e36250c8a4-dns-swift-storage-0\") pod \"dnsmasq-dns-77585f5f8c-hqcjj\" (UID: \"6720cae9-e687-497d-955d-53e36250c8a4\") " pod="openstack/dnsmasq-dns-77585f5f8c-hqcjj" Oct 13 17:39:26 crc kubenswrapper[4720]: I1013 17:39:26.089819 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6720cae9-e687-497d-955d-53e36250c8a4-ovsdbserver-sb\") pod \"dnsmasq-dns-77585f5f8c-hqcjj\" (UID: \"6720cae9-e687-497d-955d-53e36250c8a4\") " pod="openstack/dnsmasq-dns-77585f5f8c-hqcjj" Oct 13 17:39:26 crc kubenswrapper[4720]: I1013 17:39:26.089841 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6720cae9-e687-497d-955d-53e36250c8a4-ovsdbserver-nb\") pod \"dnsmasq-dns-77585f5f8c-hqcjj\" (UID: \"6720cae9-e687-497d-955d-53e36250c8a4\") " pod="openstack/dnsmasq-dns-77585f5f8c-hqcjj" Oct 13 17:39:26 crc kubenswrapper[4720]: I1013 17:39:26.090972 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6720cae9-e687-497d-955d-53e36250c8a4-ovsdbserver-nb\") pod \"dnsmasq-dns-77585f5f8c-hqcjj\" (UID: \"6720cae9-e687-497d-955d-53e36250c8a4\") " pod="openstack/dnsmasq-dns-77585f5f8c-hqcjj" Oct 13 17:39:26 crc kubenswrapper[4720]: I1013 17:39:26.091712 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6720cae9-e687-497d-955d-53e36250c8a4-config\") pod \"dnsmasq-dns-77585f5f8c-hqcjj\" (UID: \"6720cae9-e687-497d-955d-53e36250c8a4\") " pod="openstack/dnsmasq-dns-77585f5f8c-hqcjj" Oct 13 17:39:26 crc kubenswrapper[4720]: I1013 17:39:26.092439 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6720cae9-e687-497d-955d-53e36250c8a4-dns-svc\") pod \"dnsmasq-dns-77585f5f8c-hqcjj\" (UID: \"6720cae9-e687-497d-955d-53e36250c8a4\") " pod="openstack/dnsmasq-dns-77585f5f8c-hqcjj" Oct 13 17:39:26 crc kubenswrapper[4720]: I1013 17:39:26.093157 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6720cae9-e687-497d-955d-53e36250c8a4-dns-swift-storage-0\") pod \"dnsmasq-dns-77585f5f8c-hqcjj\" (UID: \"6720cae9-e687-497d-955d-53e36250c8a4\") " pod="openstack/dnsmasq-dns-77585f5f8c-hqcjj" Oct 13 17:39:26 crc kubenswrapper[4720]: I1013 17:39:26.093904 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6720cae9-e687-497d-955d-53e36250c8a4-ovsdbserver-sb\") pod \"dnsmasq-dns-77585f5f8c-hqcjj\" (UID: \"6720cae9-e687-497d-955d-53e36250c8a4\") " pod="openstack/dnsmasq-dns-77585f5f8c-hqcjj" Oct 13 17:39:26 crc 
kubenswrapper[4720]: I1013 17:39:26.113542 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnt9k\" (UniqueName: \"kubernetes.io/projected/6720cae9-e687-497d-955d-53e36250c8a4-kube-api-access-pnt9k\") pod \"dnsmasq-dns-77585f5f8c-hqcjj\" (UID: \"6720cae9-e687-497d-955d-53e36250c8a4\") " pod="openstack/dnsmasq-dns-77585f5f8c-hqcjj" Oct 13 17:39:26 crc kubenswrapper[4720]: I1013 17:39:26.212693 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-hqcjj" Oct 13 17:39:26 crc kubenswrapper[4720]: I1013 17:39:26.476171 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-hqcjj"] Oct 13 17:39:26 crc kubenswrapper[4720]: I1013 17:39:26.578733 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-hqcjj" event={"ID":"6720cae9-e687-497d-955d-53e36250c8a4","Type":"ContainerStarted","Data":"e05f52b43ed93d0335fc2a9761a4c3086de82864fddbb564d08d8f5e9f6d57d1"} Oct 13 17:39:26 crc kubenswrapper[4720]: I1013 17:39:26.832761 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-vbc6h-config-hknqn" Oct 13 17:39:27 crc kubenswrapper[4720]: I1013 17:39:27.003493 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fbe4ea8c-bec5-4adb-a4de-1c6c0fc4aa1b-scripts\") pod \"fbe4ea8c-bec5-4adb-a4de-1c6c0fc4aa1b\" (UID: \"fbe4ea8c-bec5-4adb-a4de-1c6c0fc4aa1b\") " Oct 13 17:39:27 crc kubenswrapper[4720]: I1013 17:39:27.003611 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/fbe4ea8c-bec5-4adb-a4de-1c6c0fc4aa1b-additional-scripts\") pod \"fbe4ea8c-bec5-4adb-a4de-1c6c0fc4aa1b\" (UID: \"fbe4ea8c-bec5-4adb-a4de-1c6c0fc4aa1b\") " Oct 13 17:39:27 crc kubenswrapper[4720]: I1013 17:39:27.003692 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8gv5v\" (UniqueName: \"kubernetes.io/projected/fbe4ea8c-bec5-4adb-a4de-1c6c0fc4aa1b-kube-api-access-8gv5v\") pod \"fbe4ea8c-bec5-4adb-a4de-1c6c0fc4aa1b\" (UID: \"fbe4ea8c-bec5-4adb-a4de-1c6c0fc4aa1b\") " Oct 13 17:39:27 crc kubenswrapper[4720]: I1013 17:39:27.003735 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/fbe4ea8c-bec5-4adb-a4de-1c6c0fc4aa1b-var-log-ovn\") pod \"fbe4ea8c-bec5-4adb-a4de-1c6c0fc4aa1b\" (UID: \"fbe4ea8c-bec5-4adb-a4de-1c6c0fc4aa1b\") " Oct 13 17:39:27 crc kubenswrapper[4720]: I1013 17:39:27.003793 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/fbe4ea8c-bec5-4adb-a4de-1c6c0fc4aa1b-var-run-ovn\") pod \"fbe4ea8c-bec5-4adb-a4de-1c6c0fc4aa1b\" (UID: \"fbe4ea8c-bec5-4adb-a4de-1c6c0fc4aa1b\") " Oct 13 17:39:27 crc kubenswrapper[4720]: I1013 17:39:27.003927 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/fbe4ea8c-bec5-4adb-a4de-1c6c0fc4aa1b-var-run\") pod \"fbe4ea8c-bec5-4adb-a4de-1c6c0fc4aa1b\" (UID: \"fbe4ea8c-bec5-4adb-a4de-1c6c0fc4aa1b\") " Oct 13 17:39:27 crc kubenswrapper[4720]: I1013 17:39:27.004033 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fbe4ea8c-bec5-4adb-a4de-1c6c0fc4aa1b-var-log-ovn" 
(OuterVolumeSpecName: "var-log-ovn") pod "fbe4ea8c-bec5-4adb-a4de-1c6c0fc4aa1b" (UID: "fbe4ea8c-bec5-4adb-a4de-1c6c0fc4aa1b"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 13 17:39:27 crc kubenswrapper[4720]: I1013 17:39:27.004046 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fbe4ea8c-bec5-4adb-a4de-1c6c0fc4aa1b-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "fbe4ea8c-bec5-4adb-a4de-1c6c0fc4aa1b" (UID: "fbe4ea8c-bec5-4adb-a4de-1c6c0fc4aa1b"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 13 17:39:27 crc kubenswrapper[4720]: I1013 17:39:27.004128 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fbe4ea8c-bec5-4adb-a4de-1c6c0fc4aa1b-var-run" (OuterVolumeSpecName: "var-run") pod "fbe4ea8c-bec5-4adb-a4de-1c6c0fc4aa1b" (UID: "fbe4ea8c-bec5-4adb-a4de-1c6c0fc4aa1b"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 13 17:39:27 crc kubenswrapper[4720]: I1013 17:39:27.004842 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fbe4ea8c-bec5-4adb-a4de-1c6c0fc4aa1b-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "fbe4ea8c-bec5-4adb-a4de-1c6c0fc4aa1b" (UID: "fbe4ea8c-bec5-4adb-a4de-1c6c0fc4aa1b"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:39:27 crc kubenswrapper[4720]: I1013 17:39:27.005180 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fbe4ea8c-bec5-4adb-a4de-1c6c0fc4aa1b-scripts" (OuterVolumeSpecName: "scripts") pod "fbe4ea8c-bec5-4adb-a4de-1c6c0fc4aa1b" (UID: "fbe4ea8c-bec5-4adb-a4de-1c6c0fc4aa1b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:39:27 crc kubenswrapper[4720]: I1013 17:39:27.005353 4720 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/fbe4ea8c-bec5-4adb-a4de-1c6c0fc4aa1b-additional-scripts\") on node \"crc\" DevicePath \"\"" Oct 13 17:39:27 crc kubenswrapper[4720]: I1013 17:39:27.005376 4720 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/fbe4ea8c-bec5-4adb-a4de-1c6c0fc4aa1b-var-log-ovn\") on node \"crc\" DevicePath \"\"" Oct 13 17:39:27 crc kubenswrapper[4720]: I1013 17:39:27.005386 4720 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/fbe4ea8c-bec5-4adb-a4de-1c6c0fc4aa1b-var-run-ovn\") on node \"crc\" DevicePath \"\"" Oct 13 17:39:27 crc kubenswrapper[4720]: I1013 17:39:27.005396 4720 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/fbe4ea8c-bec5-4adb-a4de-1c6c0fc4aa1b-var-run\") on node \"crc\" DevicePath \"\"" Oct 13 17:39:27 crc kubenswrapper[4720]: I1013 17:39:27.009227 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbe4ea8c-bec5-4adb-a4de-1c6c0fc4aa1b-kube-api-access-8gv5v" (OuterVolumeSpecName: "kube-api-access-8gv5v") pod "fbe4ea8c-bec5-4adb-a4de-1c6c0fc4aa1b" (UID: "fbe4ea8c-bec5-4adb-a4de-1c6c0fc4aa1b"). InnerVolumeSpecName "kube-api-access-8gv5v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:39:27 crc kubenswrapper[4720]: I1013 17:39:27.107417 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fbe4ea8c-bec5-4adb-a4de-1c6c0fc4aa1b-scripts\") on node \"crc\" DevicePath \"\"" Oct 13 17:39:27 crc kubenswrapper[4720]: I1013 17:39:27.107472 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8gv5v\" (UniqueName: \"kubernetes.io/projected/fbe4ea8c-bec5-4adb-a4de-1c6c0fc4aa1b-kube-api-access-8gv5v\") on node \"crc\" DevicePath \"\"" Oct 13 17:39:27 crc kubenswrapper[4720]: I1013 17:39:27.587850 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-vbc6h-config-hknqn" Oct 13 17:39:27 crc kubenswrapper[4720]: I1013 17:39:27.587847 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-vbc6h-config-hknqn" event={"ID":"fbe4ea8c-bec5-4adb-a4de-1c6c0fc4aa1b","Type":"ContainerDied","Data":"1638ed61ea0cb9c0701c973e89bb08fe72c17cdc818130cc869045edd9be0897"} Oct 13 17:39:27 crc kubenswrapper[4720]: I1013 17:39:27.588234 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1638ed61ea0cb9c0701c973e89bb08fe72c17cdc818130cc869045edd9be0897" Oct 13 17:39:27 crc kubenswrapper[4720]: I1013 17:39:27.590994 4720 generic.go:334] "Generic (PLEG): container finished" podID="6720cae9-e687-497d-955d-53e36250c8a4" containerID="a3d1cacbdf268225b85cc828e14e7021e9ec58cb45e446ceecdf9930b389d2f9" exitCode=0 Oct 13 17:39:27 crc kubenswrapper[4720]: I1013 17:39:27.591028 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-hqcjj" event={"ID":"6720cae9-e687-497d-955d-53e36250c8a4","Type":"ContainerDied","Data":"a3d1cacbdf268225b85cc828e14e7021e9ec58cb45e446ceecdf9930b389d2f9"} Oct 13 17:39:28 crc kubenswrapper[4720]: I1013 17:39:28.004090 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-vbc6h-config-hknqn"] Oct 13 17:39:28 crc kubenswrapper[4720]: I1013 17:39:28.013024 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-vbc6h-config-hknqn"] Oct 13 17:39:28 crc kubenswrapper[4720]: I1013 17:39:28.170651 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-vbc6h" Oct 13 17:39:28 crc kubenswrapper[4720]: I1013 17:39:28.606353 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-hqcjj" event={"ID":"6720cae9-e687-497d-955d-53e36250c8a4","Type":"ContainerStarted","Data":"b0002647c37f7785f53d93639f39f1283c44c9eb5f43c35de676c9b8cb38b83b"} Oct 13 17:39:28 crc kubenswrapper[4720]: I1013 17:39:28.607426 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-77585f5f8c-hqcjj" Oct 13 17:39:28 crc kubenswrapper[4720]: I1013 17:39:28.631404 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-77585f5f8c-hqcjj" podStartSLOduration=3.631385249 podStartE2EDuration="3.631385249s" podCreationTimestamp="2025-10-13 17:39:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:39:28.622966722 +0000 UTC m=+914.080216854" watchObservedRunningTime="2025-10-13 17:39:28.631385249 +0000 UTC m=+914.088635381" Oct 13 17:39:29 crc kubenswrapper[4720]: I1013 17:39:29.179388 4720 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="fbe4ea8c-bec5-4adb-a4de-1c6c0fc4aa1b" path="/var/lib/kubelet/pods/fbe4ea8c-bec5-4adb-a4de-1c6c0fc4aa1b/volumes" Oct 13 17:39:33 crc kubenswrapper[4720]: I1013 17:39:33.675502 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Oct 13 17:39:34 crc kubenswrapper[4720]: I1013 17:39:34.028389 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Oct 13 17:39:34 crc kubenswrapper[4720]: I1013 17:39:34.030506 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-b4hpj"] Oct 13 17:39:34 crc kubenswrapper[4720]: E1013 17:39:34.030845 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbe4ea8c-bec5-4adb-a4de-1c6c0fc4aa1b" containerName="ovn-config" Oct 13 17:39:34 crc kubenswrapper[4720]: I1013 17:39:34.030864 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbe4ea8c-bec5-4adb-a4de-1c6c0fc4aa1b" containerName="ovn-config" Oct 13 17:39:34 crc kubenswrapper[4720]: I1013 17:39:34.031115 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbe4ea8c-bec5-4adb-a4de-1c6c0fc4aa1b" containerName="ovn-config" Oct 13 17:39:34 crc kubenswrapper[4720]: I1013 17:39:34.031631 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-b4hpj" Oct 13 17:39:34 crc kubenswrapper[4720]: I1013 17:39:34.046387 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-b4hpj"] Oct 13 17:39:34 crc kubenswrapper[4720]: I1013 17:39:34.123950 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtwcs\" (UniqueName: \"kubernetes.io/projected/289efdb4-70d6-47fe-929d-c6c659cf5117-kube-api-access-mtwcs\") pod \"cinder-db-create-b4hpj\" (UID: \"289efdb4-70d6-47fe-929d-c6c659cf5117\") " pod="openstack/cinder-db-create-b4hpj" Oct 13 17:39:34 crc kubenswrapper[4720]: I1013 17:39:34.132787 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-5jft7"] Oct 13 17:39:34 crc kubenswrapper[4720]: I1013 17:39:34.133849 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-5jft7" Oct 13 17:39:34 crc kubenswrapper[4720]: I1013 17:39:34.140013 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-5jft7"] Oct 13 17:39:34 crc kubenswrapper[4720]: I1013 17:39:34.225998 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnx29\" (UniqueName: \"kubernetes.io/projected/939e2da6-73bd-4929-8194-fa3d63d023bf-kube-api-access-xnx29\") pod \"barbican-db-create-5jft7\" (UID: \"939e2da6-73bd-4929-8194-fa3d63d023bf\") " pod="openstack/barbican-db-create-5jft7" Oct 13 17:39:34 crc kubenswrapper[4720]: I1013 17:39:34.226215 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtwcs\" (UniqueName: \"kubernetes.io/projected/289efdb4-70d6-47fe-929d-c6c659cf5117-kube-api-access-mtwcs\") pod \"cinder-db-create-b4hpj\" (UID: \"289efdb4-70d6-47fe-929d-c6c659cf5117\") " pod="openstack/cinder-db-create-b4hpj" Oct 13 17:39:34 crc kubenswrapper[4720]: I1013 17:39:34.243146 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtwcs\" (UniqueName: \"kubernetes.io/projected/289efdb4-70d6-47fe-929d-c6c659cf5117-kube-api-access-mtwcs\") pod \"cinder-db-create-b4hpj\" (UID: \"289efdb4-70d6-47fe-929d-c6c659cf5117\") " pod="openstack/cinder-db-create-b4hpj" Oct 13 17:39:34 crc kubenswrapper[4720]: I1013 17:39:34.289467 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-4f6d4"] Oct 13 17:39:34 crc kubenswrapper[4720]: I1013 17:39:34.290643 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-4f6d4" Oct 13 17:39:34 crc kubenswrapper[4720]: I1013 17:39:34.293109 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Oct 13 17:39:34 crc kubenswrapper[4720]: I1013 17:39:34.293512 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Oct 13 17:39:34 crc kubenswrapper[4720]: I1013 17:39:34.293655 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Oct 13 17:39:34 crc kubenswrapper[4720]: I1013 17:39:34.293904 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-gnxbs" Oct 13 17:39:34 crc kubenswrapper[4720]: I1013 17:39:34.301487 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-4f6d4"] Oct 13 17:39:34 crc kubenswrapper[4720]: I1013 17:39:34.327900 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnx29\" (UniqueName: \"kubernetes.io/projected/939e2da6-73bd-4929-8194-fa3d63d023bf-kube-api-access-xnx29\") pod \"barbican-db-create-5jft7\" (UID: \"939e2da6-73bd-4929-8194-fa3d63d023bf\") " pod="openstack/barbican-db-create-5jft7" Oct 13 17:39:34 crc kubenswrapper[4720]: I1013 17:39:34.341152 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-ptmbs"] Oct 13 17:39:34 crc kubenswrapper[4720]: I1013 17:39:34.342307 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-ptmbs" Oct 13 17:39:34 crc kubenswrapper[4720]: I1013 17:39:34.346900 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-ptmbs"] Oct 13 17:39:34 crc kubenswrapper[4720]: I1013 17:39:34.348536 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-b4hpj" Oct 13 17:39:34 crc kubenswrapper[4720]: I1013 17:39:34.348977 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnx29\" (UniqueName: \"kubernetes.io/projected/939e2da6-73bd-4929-8194-fa3d63d023bf-kube-api-access-xnx29\") pod \"barbican-db-create-5jft7\" (UID: \"939e2da6-73bd-4929-8194-fa3d63d023bf\") " pod="openstack/barbican-db-create-5jft7" Oct 13 17:39:34 crc kubenswrapper[4720]: I1013 17:39:34.429520 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a20d69a3-6d9e-4066-b3d5-62eb5897451c-config-data\") pod \"keystone-db-sync-4f6d4\" (UID: \"a20d69a3-6d9e-4066-b3d5-62eb5897451c\") " pod="openstack/keystone-db-sync-4f6d4" Oct 13 17:39:34 crc kubenswrapper[4720]: I1013 17:39:34.429565 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnpwt\" (UniqueName: \"kubernetes.io/projected/a20d69a3-6d9e-4066-b3d5-62eb5897451c-kube-api-access-bnpwt\") pod \"keystone-db-sync-4f6d4\" (UID: \"a20d69a3-6d9e-4066-b3d5-62eb5897451c\") " pod="openstack/keystone-db-sync-4f6d4" Oct 13 17:39:34 crc kubenswrapper[4720]: I1013 17:39:34.429623 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwkk8\" (UniqueName: \"kubernetes.io/projected/392ec994-30b1-4ab2-b27d-fd2c59bd2eef-kube-api-access-mwkk8\") pod \"neutron-db-create-ptmbs\" (UID: \"392ec994-30b1-4ab2-b27d-fd2c59bd2eef\") " pod="openstack/neutron-db-create-ptmbs" Oct 13 17:39:34 crc kubenswrapper[4720]: I1013 17:39:34.429838 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a20d69a3-6d9e-4066-b3d5-62eb5897451c-combined-ca-bundle\") pod \"keystone-db-sync-4f6d4\" (UID: \"a20d69a3-6d9e-4066-b3d5-62eb5897451c\") " pod="openstack/keystone-db-sync-4f6d4" Oct 13 17:39:34 crc kubenswrapper[4720]: I1013 17:39:34.449464 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-5jft7" Oct 13 17:39:34 crc kubenswrapper[4720]: I1013 17:39:34.531497 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a20d69a3-6d9e-4066-b3d5-62eb5897451c-config-data\") pod \"keystone-db-sync-4f6d4\" (UID: \"a20d69a3-6d9e-4066-b3d5-62eb5897451c\") " pod="openstack/keystone-db-sync-4f6d4" Oct 13 17:39:34 crc kubenswrapper[4720]: I1013 17:39:34.531548 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnpwt\" (UniqueName: \"kubernetes.io/projected/a20d69a3-6d9e-4066-b3d5-62eb5897451c-kube-api-access-bnpwt\") pod \"keystone-db-sync-4f6d4\" (UID: \"a20d69a3-6d9e-4066-b3d5-62eb5897451c\") " pod="openstack/keystone-db-sync-4f6d4" Oct 13 17:39:34 crc kubenswrapper[4720]: I1013 17:39:34.531612 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwkk8\" (UniqueName: \"kubernetes.io/projected/392ec994-30b1-4ab2-b27d-fd2c59bd2eef-kube-api-access-mwkk8\") pod \"neutron-db-create-ptmbs\" (UID: \"392ec994-30b1-4ab2-b27d-fd2c59bd2eef\") " pod="openstack/neutron-db-create-ptmbs" Oct 13 17:39:34 crc kubenswrapper[4720]: I1013 17:39:34.531662 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a20d69a3-6d9e-4066-b3d5-62eb5897451c-combined-ca-bundle\") pod \"keystone-db-sync-4f6d4\" (UID: \"a20d69a3-6d9e-4066-b3d5-62eb5897451c\") " pod="openstack/keystone-db-sync-4f6d4" Oct 13 17:39:34 crc kubenswrapper[4720]: I1013 17:39:34.534968 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a20d69a3-6d9e-4066-b3d5-62eb5897451c-combined-ca-bundle\") pod \"keystone-db-sync-4f6d4\" (UID: \"a20d69a3-6d9e-4066-b3d5-62eb5897451c\") " pod="openstack/keystone-db-sync-4f6d4" Oct 13 17:39:34 crc kubenswrapper[4720]: I1013 17:39:34.543267 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a20d69a3-6d9e-4066-b3d5-62eb5897451c-config-data\") pod \"keystone-db-sync-4f6d4\" (UID: \"a20d69a3-6d9e-4066-b3d5-62eb5897451c\") " pod="openstack/keystone-db-sync-4f6d4" Oct 13 17:39:34 crc kubenswrapper[4720]: I1013 17:39:34.547765 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnpwt\" (UniqueName: \"kubernetes.io/projected/a20d69a3-6d9e-4066-b3d5-62eb5897451c-kube-api-access-bnpwt\") pod \"keystone-db-sync-4f6d4\" (UID: \"a20d69a3-6d9e-4066-b3d5-62eb5897451c\") " pod="openstack/keystone-db-sync-4f6d4" Oct 13 17:39:34 crc kubenswrapper[4720]: I1013 17:39:34.551827 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwkk8\" (UniqueName: \"kubernetes.io/projected/392ec994-30b1-4ab2-b27d-fd2c59bd2eef-kube-api-access-mwkk8\") pod \"neutron-db-create-ptmbs\" (UID: \"392ec994-30b1-4ab2-b27d-fd2c59bd2eef\") " pod="openstack/neutron-db-create-ptmbs" Oct 13 17:39:34 crc kubenswrapper[4720]: I1013 17:39:34.607101 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-4f6d4" Oct 13 17:39:34 crc kubenswrapper[4720]: I1013 17:39:34.691613 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-ptmbs" Oct 13 17:39:36 crc kubenswrapper[4720]: I1013 17:39:36.214448 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-77585f5f8c-hqcjj" Oct 13 17:39:36 crc kubenswrapper[4720]: I1013 17:39:36.268792 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-mxpxk"] Oct 13 17:39:36 crc kubenswrapper[4720]: I1013 17:39:36.269004 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-mxpxk" podUID="2442614f-edaa-4e64-9ed1-fc0520a37cfd" containerName="dnsmasq-dns" containerID="cri-o://46425dac3912c3b90081ca7451555dd016ce966bc5170559d8329870a07c33d5" gracePeriod=10 Oct 13 17:39:36 crc kubenswrapper[4720]: I1013 17:39:36.672170 4720 generic.go:334] "Generic (PLEG): container finished" podID="2442614f-edaa-4e64-9ed1-fc0520a37cfd" containerID="46425dac3912c3b90081ca7451555dd016ce966bc5170559d8329870a07c33d5" exitCode=0 Oct 13 17:39:36 crc kubenswrapper[4720]: I1013 17:39:36.672231 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-mxpxk" event={"ID":"2442614f-edaa-4e64-9ed1-fc0520a37cfd","Type":"ContainerDied","Data":"46425dac3912c3b90081ca7451555dd016ce966bc5170559d8329870a07c33d5"} Oct 13 17:39:38 crc kubenswrapper[4720]: I1013 17:39:38.980547 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-mxpxk" Oct 13 17:39:39 crc kubenswrapper[4720]: I1013 17:39:39.118708 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2442614f-edaa-4e64-9ed1-fc0520a37cfd-dns-svc\") pod \"2442614f-edaa-4e64-9ed1-fc0520a37cfd\" (UID: \"2442614f-edaa-4e64-9ed1-fc0520a37cfd\") " Oct 13 17:39:39 crc kubenswrapper[4720]: I1013 17:39:39.118776 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vpqdg\" (UniqueName: \"kubernetes.io/projected/2442614f-edaa-4e64-9ed1-fc0520a37cfd-kube-api-access-vpqdg\") pod \"2442614f-edaa-4e64-9ed1-fc0520a37cfd\" (UID: \"2442614f-edaa-4e64-9ed1-fc0520a37cfd\") " Oct 13 17:39:39 crc kubenswrapper[4720]: I1013 17:39:39.118857 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2442614f-edaa-4e64-9ed1-fc0520a37cfd-ovsdbserver-sb\") pod \"2442614f-edaa-4e64-9ed1-fc0520a37cfd\" (UID: \"2442614f-edaa-4e64-9ed1-fc0520a37cfd\") " Oct 13 17:39:39 crc kubenswrapper[4720]: I1013 17:39:39.118888 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2442614f-edaa-4e64-9ed1-fc0520a37cfd-ovsdbserver-nb\") pod \"2442614f-edaa-4e64-9ed1-fc0520a37cfd\" (UID: \"2442614f-edaa-4e64-9ed1-fc0520a37cfd\") " Oct 13 17:39:39 crc kubenswrapper[4720]: I1013 17:39:39.118945 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2442614f-edaa-4e64-9ed1-fc0520a37cfd-config\") pod \"2442614f-edaa-4e64-9ed1-fc0520a37cfd\" (UID: \"2442614f-edaa-4e64-9ed1-fc0520a37cfd\") " Oct 13 17:39:39 crc kubenswrapper[4720]: I1013 17:39:39.123524 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2442614f-edaa-4e64-9ed1-fc0520a37cfd-kube-api-access-vpqdg" (OuterVolumeSpecName: "kube-api-access-vpqdg") 
pod "2442614f-edaa-4e64-9ed1-fc0520a37cfd" (UID: "2442614f-edaa-4e64-9ed1-fc0520a37cfd"). InnerVolumeSpecName "kube-api-access-vpqdg". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:39:39 crc kubenswrapper[4720]: I1013 17:39:39.167109 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2442614f-edaa-4e64-9ed1-fc0520a37cfd-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2442614f-edaa-4e64-9ed1-fc0520a37cfd" (UID: "2442614f-edaa-4e64-9ed1-fc0520a37cfd"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:39:39 crc kubenswrapper[4720]: I1013 17:39:39.169921 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2442614f-edaa-4e64-9ed1-fc0520a37cfd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2442614f-edaa-4e64-9ed1-fc0520a37cfd" (UID: "2442614f-edaa-4e64-9ed1-fc0520a37cfd"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:39:39 crc kubenswrapper[4720]: I1013 17:39:39.170415 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2442614f-edaa-4e64-9ed1-fc0520a37cfd-config" (OuterVolumeSpecName: "config") pod "2442614f-edaa-4e64-9ed1-fc0520a37cfd" (UID: "2442614f-edaa-4e64-9ed1-fc0520a37cfd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:39:39 crc kubenswrapper[4720]: I1013 17:39:39.177547 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2442614f-edaa-4e64-9ed1-fc0520a37cfd-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2442614f-edaa-4e64-9ed1-fc0520a37cfd" (UID: "2442614f-edaa-4e64-9ed1-fc0520a37cfd"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:39:39 crc kubenswrapper[4720]: I1013 17:39:39.216862 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-5jft7"] Oct 13 17:39:39 crc kubenswrapper[4720]: I1013 17:39:39.220179 4720 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2442614f-edaa-4e64-9ed1-fc0520a37cfd-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 13 17:39:39 crc kubenswrapper[4720]: I1013 17:39:39.220240 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vpqdg\" (UniqueName: \"kubernetes.io/projected/2442614f-edaa-4e64-9ed1-fc0520a37cfd-kube-api-access-vpqdg\") on node \"crc\" DevicePath \"\"" Oct 13 17:39:39 crc kubenswrapper[4720]: I1013 17:39:39.220255 4720 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2442614f-edaa-4e64-9ed1-fc0520a37cfd-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 13 17:39:39 crc kubenswrapper[4720]: I1013 17:39:39.220267 4720 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2442614f-edaa-4e64-9ed1-fc0520a37cfd-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 13 17:39:39 crc kubenswrapper[4720]: I1013 17:39:39.220786 4720 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2442614f-edaa-4e64-9ed1-fc0520a37cfd-config\") on node \"crc\" DevicePath \"\"" Oct 13 17:39:39 crc kubenswrapper[4720]: I1013 17:39:39.277583 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-ptmbs"] Oct 13 17:39:39 crc kubenswrapper[4720]: W1013 17:39:39.282908 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod392ec994_30b1_4ab2_b27d_fd2c59bd2eef.slice/crio-e9b2c340a03eedd1279f2617b206291a6f9354ffcab9d9dd2c37253f79de50f9 WatchSource:0}: Error finding container e9b2c340a03eedd1279f2617b206291a6f9354ffcab9d9dd2c37253f79de50f9: Status 404 returned error can't find the container with id e9b2c340a03eedd1279f2617b206291a6f9354ffcab9d9dd2c37253f79de50f9 Oct 13 17:39:39 crc kubenswrapper[4720]: I1013 17:39:39.332680 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-4f6d4"] Oct 13 17:39:39 crc kubenswrapper[4720]: I1013 17:39:39.339212 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-b4hpj"] Oct 13 17:39:39 crc kubenswrapper[4720]: W1013 17:39:39.347041 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod289efdb4_70d6_47fe_929d_c6c659cf5117.slice/crio-19d177ef60211765b032f5aec6dab1196c4a8acabb9cf65ffcf7c70b09e02a8d WatchSource:0}: Error finding container 19d177ef60211765b032f5aec6dab1196c4a8acabb9cf65ffcf7c70b09e02a8d: Status 404 returned error can't find the container with id 19d177ef60211765b032f5aec6dab1196c4a8acabb9cf65ffcf7c70b09e02a8d Oct 13 17:39:39 crc kubenswrapper[4720]: I1013 17:39:39.739098 4720 generic.go:334] "Generic (PLEG): container finished" podID="289efdb4-70d6-47fe-929d-c6c659cf5117" containerID="047b6fba1bf1e9677b7128d90bc39bff50e1b1504d25db5928f22242d61e3932" exitCode=0 Oct 13 17:39:39 crc kubenswrapper[4720]: I1013 17:39:39.739159 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-b4hpj" 
event={"ID":"289efdb4-70d6-47fe-929d-c6c659cf5117","Type":"ContainerDied","Data":"047b6fba1bf1e9677b7128d90bc39bff50e1b1504d25db5928f22242d61e3932"} Oct 13 17:39:39 crc kubenswrapper[4720]: I1013 17:39:39.739453 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-b4hpj" event={"ID":"289efdb4-70d6-47fe-929d-c6c659cf5117","Type":"ContainerStarted","Data":"19d177ef60211765b032f5aec6dab1196c4a8acabb9cf65ffcf7c70b09e02a8d"} Oct 13 17:39:39 crc kubenswrapper[4720]: I1013 17:39:39.740869 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-4f6d4" event={"ID":"a20d69a3-6d9e-4066-b3d5-62eb5897451c","Type":"ContainerStarted","Data":"a3a82bc4ac272c563b8ddb9617d1d63be376daa70514e43df19d554e43e590b8"} Oct 13 17:39:39 crc kubenswrapper[4720]: I1013 17:39:39.743485 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-mxpxk" event={"ID":"2442614f-edaa-4e64-9ed1-fc0520a37cfd","Type":"ContainerDied","Data":"47207972261f580f6bca845fb53af4cb5119191cde8cf7eeb75de240ab4aff29"} Oct 13 17:39:39 crc kubenswrapper[4720]: I1013 17:39:39.743540 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-mxpxk" Oct 13 17:39:39 crc kubenswrapper[4720]: I1013 17:39:39.743638 4720 scope.go:117] "RemoveContainer" containerID="46425dac3912c3b90081ca7451555dd016ce966bc5170559d8329870a07c33d5" Oct 13 17:39:39 crc kubenswrapper[4720]: I1013 17:39:39.746601 4720 generic.go:334] "Generic (PLEG): container finished" podID="939e2da6-73bd-4929-8194-fa3d63d023bf" containerID="55a9e6eaf4f40a5b8fb0a0a4bf7a94d966d9ee51c3be7c21032deb3985be3924" exitCode=0 Oct 13 17:39:39 crc kubenswrapper[4720]: I1013 17:39:39.746667 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-5jft7" event={"ID":"939e2da6-73bd-4929-8194-fa3d63d023bf","Type":"ContainerDied","Data":"55a9e6eaf4f40a5b8fb0a0a4bf7a94d966d9ee51c3be7c21032deb3985be3924"} Oct 13 17:39:39 crc kubenswrapper[4720]: I1013 17:39:39.746705 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-5jft7" event={"ID":"939e2da6-73bd-4929-8194-fa3d63d023bf","Type":"ContainerStarted","Data":"79db4d028e825fc731b431fc60bf5c11347771903688154d575353cc69829ff1"} Oct 13 17:39:39 crc kubenswrapper[4720]: I1013 17:39:39.753339 4720 generic.go:334] "Generic (PLEG): container finished" podID="392ec994-30b1-4ab2-b27d-fd2c59bd2eef" containerID="2b9d1689917fef9a21d52f9ea6f4db786a2a816cd2bcea0149a1c56163de5271" exitCode=0 Oct 13 17:39:39 crc kubenswrapper[4720]: I1013 17:39:39.753407 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-ptmbs" event={"ID":"392ec994-30b1-4ab2-b27d-fd2c59bd2eef","Type":"ContainerDied","Data":"2b9d1689917fef9a21d52f9ea6f4db786a2a816cd2bcea0149a1c56163de5271"} Oct 13 17:39:39 crc kubenswrapper[4720]: I1013 17:39:39.753429 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-ptmbs" event={"ID":"392ec994-30b1-4ab2-b27d-fd2c59bd2eef","Type":"ContainerStarted","Data":"e9b2c340a03eedd1279f2617b206291a6f9354ffcab9d9dd2c37253f79de50f9"} Oct 13 17:39:39 crc kubenswrapper[4720]: I1013 17:39:39.755608 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-99x2r" event={"ID":"b035e5a0-e44b-4897-bcfa-c7112b8eee2d","Type":"ContainerStarted","Data":"02b5c272e9cea9d9f00706c7d23a57a83f86359bf8d851d2d6e05a42964abb02"} Oct 13 17:39:39 crc kubenswrapper[4720]: I1013 
17:39:39.787112 4720 scope.go:117] "RemoveContainer" containerID="a0081c27e8b5e7977626839730ef4b9b12fcc078d3394fcff9e65da9e0702972" Oct 13 17:39:39 crc kubenswrapper[4720]: I1013 17:39:39.789852 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-mxpxk"] Oct 13 17:39:39 crc kubenswrapper[4720]: I1013 17:39:39.794992 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-mxpxk"] Oct 13 17:39:39 crc kubenswrapper[4720]: I1013 17:39:39.816582 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-99x2r" podStartSLOduration=3.142208359 podStartE2EDuration="17.816563354s" podCreationTimestamp="2025-10-13 17:39:22 +0000 UTC" firstStartedPulling="2025-10-13 17:39:24.128582757 +0000 UTC m=+909.585832889" lastFinishedPulling="2025-10-13 17:39:38.802937742 +0000 UTC m=+924.260187884" observedRunningTime="2025-10-13 17:39:39.807136512 +0000 UTC m=+925.264386654" watchObservedRunningTime="2025-10-13 17:39:39.816563354 +0000 UTC m=+925.273813486" Oct 13 17:39:41 crc kubenswrapper[4720]: I1013 17:39:41.177939 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2442614f-edaa-4e64-9ed1-fc0520a37cfd" path="/var/lib/kubelet/pods/2442614f-edaa-4e64-9ed1-fc0520a37cfd/volumes" Oct 13 17:39:41 crc kubenswrapper[4720]: I1013 17:39:41.180653 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-b4hpj" Oct 13 17:39:41 crc kubenswrapper[4720]: I1013 17:39:41.184532 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-ptmbs" Oct 13 17:39:41 crc kubenswrapper[4720]: I1013 17:39:41.190208 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-5jft7" Oct 13 17:39:41 crc kubenswrapper[4720]: I1013 17:39:41.252411 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwkk8\" (UniqueName: \"kubernetes.io/projected/392ec994-30b1-4ab2-b27d-fd2c59bd2eef-kube-api-access-mwkk8\") pod \"392ec994-30b1-4ab2-b27d-fd2c59bd2eef\" (UID: \"392ec994-30b1-4ab2-b27d-fd2c59bd2eef\") " Oct 13 17:39:41 crc kubenswrapper[4720]: I1013 17:39:41.252562 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnx29\" (UniqueName: \"kubernetes.io/projected/939e2da6-73bd-4929-8194-fa3d63d023bf-kube-api-access-xnx29\") pod \"939e2da6-73bd-4929-8194-fa3d63d023bf\" (UID: \"939e2da6-73bd-4929-8194-fa3d63d023bf\") " Oct 13 17:39:41 crc kubenswrapper[4720]: I1013 17:39:41.252692 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mtwcs\" (UniqueName: \"kubernetes.io/projected/289efdb4-70d6-47fe-929d-c6c659cf5117-kube-api-access-mtwcs\") pod \"289efdb4-70d6-47fe-929d-c6c659cf5117\" (UID: \"289efdb4-70d6-47fe-929d-c6c659cf5117\") " Oct 13 17:39:41 crc kubenswrapper[4720]: I1013 17:39:41.259087 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/939e2da6-73bd-4929-8194-fa3d63d023bf-kube-api-access-xnx29" (OuterVolumeSpecName: "kube-api-access-xnx29") pod "939e2da6-73bd-4929-8194-fa3d63d023bf" (UID: "939e2da6-73bd-4929-8194-fa3d63d023bf"). InnerVolumeSpecName "kube-api-access-xnx29". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:39:41 crc kubenswrapper[4720]: I1013 17:39:41.259161 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/392ec994-30b1-4ab2-b27d-fd2c59bd2eef-kube-api-access-mwkk8" (OuterVolumeSpecName: "kube-api-access-mwkk8") pod "392ec994-30b1-4ab2-b27d-fd2c59bd2eef" (UID: "392ec994-30b1-4ab2-b27d-fd2c59bd2eef"). InnerVolumeSpecName "kube-api-access-mwkk8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:39:41 crc kubenswrapper[4720]: I1013 17:39:41.281342 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/289efdb4-70d6-47fe-929d-c6c659cf5117-kube-api-access-mtwcs" (OuterVolumeSpecName: "kube-api-access-mtwcs") pod "289efdb4-70d6-47fe-929d-c6c659cf5117" (UID: "289efdb4-70d6-47fe-929d-c6c659cf5117"). InnerVolumeSpecName "kube-api-access-mtwcs". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:39:41 crc kubenswrapper[4720]: I1013 17:39:41.355227 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mtwcs\" (UniqueName: \"kubernetes.io/projected/289efdb4-70d6-47fe-929d-c6c659cf5117-kube-api-access-mtwcs\") on node \"crc\" DevicePath \"\"" Oct 13 17:39:41 crc kubenswrapper[4720]: I1013 17:39:41.355271 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mwkk8\" (UniqueName: \"kubernetes.io/projected/392ec994-30b1-4ab2-b27d-fd2c59bd2eef-kube-api-access-mwkk8\") on node \"crc\" DevicePath \"\"" Oct 13 17:39:41 crc kubenswrapper[4720]: I1013 17:39:41.355285 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xnx29\" (UniqueName: \"kubernetes.io/projected/939e2da6-73bd-4929-8194-fa3d63d023bf-kube-api-access-xnx29\") on node \"crc\" DevicePath \"\"" Oct 13 17:39:41 crc kubenswrapper[4720]: I1013 17:39:41.774458 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-b4hpj" Oct 13 17:39:41 crc kubenswrapper[4720]: I1013 17:39:41.779336 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-b4hpj" event={"ID":"289efdb4-70d6-47fe-929d-c6c659cf5117","Type":"ContainerDied","Data":"19d177ef60211765b032f5aec6dab1196c4a8acabb9cf65ffcf7c70b09e02a8d"} Oct 13 17:39:41 crc kubenswrapper[4720]: I1013 17:39:41.779399 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="19d177ef60211765b032f5aec6dab1196c4a8acabb9cf65ffcf7c70b09e02a8d" Oct 13 17:39:41 crc kubenswrapper[4720]: I1013 17:39:41.781355 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-5jft7" event={"ID":"939e2da6-73bd-4929-8194-fa3d63d023bf","Type":"ContainerDied","Data":"79db4d028e825fc731b431fc60bf5c11347771903688154d575353cc69829ff1"} Oct 13 17:39:41 crc kubenswrapper[4720]: I1013 17:39:41.781387 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79db4d028e825fc731b431fc60bf5c11347771903688154d575353cc69829ff1" Oct 13 17:39:41 crc kubenswrapper[4720]: I1013 17:39:41.781461 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-5jft7" Oct 13 17:39:41 crc kubenswrapper[4720]: I1013 17:39:41.800488 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-ptmbs" event={"ID":"392ec994-30b1-4ab2-b27d-fd2c59bd2eef","Type":"ContainerDied","Data":"e9b2c340a03eedd1279f2617b206291a6f9354ffcab9d9dd2c37253f79de50f9"} Oct 13 17:39:41 crc kubenswrapper[4720]: I1013 17:39:41.800809 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e9b2c340a03eedd1279f2617b206291a6f9354ffcab9d9dd2c37253f79de50f9" Oct 13 17:39:41 crc kubenswrapper[4720]: I1013 17:39:41.800924 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-ptmbs" Oct 13 17:39:44 crc kubenswrapper[4720]: I1013 17:39:44.867450 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-4f6d4" event={"ID":"a20d69a3-6d9e-4066-b3d5-62eb5897451c","Type":"ContainerStarted","Data":"562d51b684d6468108fdc600b7c30def921adc24bd379bd56c074de3d205d066"} Oct 13 17:39:44 crc kubenswrapper[4720]: I1013 17:39:44.894664 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-4f6d4" podStartSLOduration=6.217128332 podStartE2EDuration="10.894645339s" podCreationTimestamp="2025-10-13 17:39:34 +0000 UTC" firstStartedPulling="2025-10-13 17:39:39.350181634 +0000 UTC m=+924.807431766" lastFinishedPulling="2025-10-13 17:39:44.027698611 +0000 UTC m=+929.484948773" observedRunningTime="2025-10-13 17:39:44.884633141 +0000 UTC m=+930.341883283" watchObservedRunningTime="2025-10-13 17:39:44.894645339 +0000 UTC m=+930.351895481" Oct 13 17:39:45 crc kubenswrapper[4720]: I1013 17:39:45.213038 4720 patch_prober.go:28] interesting pod/machine-config-daemon-htwnl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 17:39:45 crc kubenswrapper[4720]: I1013 17:39:45.213122 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 17:39:45 crc kubenswrapper[4720]: I1013 17:39:45.213185 4720 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" Oct 13 17:39:45 crc kubenswrapper[4720]: I1013 17:39:45.214170 4720 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4d6ae9650e2a4d9303f0ed0df57f14a865dd0defb52c4262f76ce0d77b3d80c5"} pod="openshift-machine-config-operator/machine-config-daemon-htwnl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 13 17:39:45 crc kubenswrapper[4720]: I1013 17:39:45.214314 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" containerName="machine-config-daemon" containerID="cri-o://4d6ae9650e2a4d9303f0ed0df57f14a865dd0defb52c4262f76ce0d77b3d80c5" gracePeriod=600 Oct 13 17:39:45 crc kubenswrapper[4720]: I1013 17:39:45.882456 4720 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" event={"ID":"ce442c80-fcde-4b79-b6f9-f8f25771dfd4","Type":"ContainerDied","Data":"4d6ae9650e2a4d9303f0ed0df57f14a865dd0defb52c4262f76ce0d77b3d80c5"} Oct 13 17:39:45 crc kubenswrapper[4720]: I1013 17:39:45.882806 4720 scope.go:117] "RemoveContainer" containerID="08f406006acd7f2a5ccd32367b83e6ce328ee80787fc6b3f0206a4c41af2f48b" Oct 13 17:39:45 crc kubenswrapper[4720]: I1013 17:39:45.882337 4720 generic.go:334] "Generic (PLEG): container finished" podID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" containerID="4d6ae9650e2a4d9303f0ed0df57f14a865dd0defb52c4262f76ce0d77b3d80c5" exitCode=0 Oct 13 17:39:45 crc kubenswrapper[4720]: I1013 17:39:45.883135 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" event={"ID":"ce442c80-fcde-4b79-b6f9-f8f25771dfd4","Type":"ContainerStarted","Data":"c1278ace50a45373a8479a2cc48b2ad98ee5d6f328dbb131bbb45680a5894fc3"} Oct 13 17:39:46 crc kubenswrapper[4720]: I1013 17:39:46.897362 4720 generic.go:334] "Generic (PLEG): container finished" podID="b035e5a0-e44b-4897-bcfa-c7112b8eee2d" containerID="02b5c272e9cea9d9f00706c7d23a57a83f86359bf8d851d2d6e05a42964abb02" exitCode=0 Oct 13 17:39:46 crc kubenswrapper[4720]: I1013 17:39:46.897511 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-99x2r" event={"ID":"b035e5a0-e44b-4897-bcfa-c7112b8eee2d","Type":"ContainerDied","Data":"02b5c272e9cea9d9f00706c7d23a57a83f86359bf8d851d2d6e05a42964abb02"} Oct 13 17:39:46 crc kubenswrapper[4720]: I1013 17:39:46.902620 4720 generic.go:334] "Generic (PLEG): container finished" podID="a20d69a3-6d9e-4066-b3d5-62eb5897451c" containerID="562d51b684d6468108fdc600b7c30def921adc24bd379bd56c074de3d205d066" exitCode=0 Oct 13 17:39:46 crc kubenswrapper[4720]: I1013 17:39:46.902726 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-4f6d4" event={"ID":"a20d69a3-6d9e-4066-b3d5-62eb5897451c","Type":"ContainerDied","Data":"562d51b684d6468108fdc600b7c30def921adc24bd379bd56c074de3d205d066"} Oct 13 17:39:48 crc kubenswrapper[4720]: I1013 17:39:48.304243 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-4f6d4" Oct 13 17:39:48 crc kubenswrapper[4720]: I1013 17:39:48.383759 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-99x2r" Oct 13 17:39:48 crc kubenswrapper[4720]: I1013 17:39:48.479944 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a20d69a3-6d9e-4066-b3d5-62eb5897451c-combined-ca-bundle\") pod \"a20d69a3-6d9e-4066-b3d5-62eb5897451c\" (UID: \"a20d69a3-6d9e-4066-b3d5-62eb5897451c\") " Oct 13 17:39:48 crc kubenswrapper[4720]: I1013 17:39:48.480099 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a20d69a3-6d9e-4066-b3d5-62eb5897451c-config-data\") pod \"a20d69a3-6d9e-4066-b3d5-62eb5897451c\" (UID: \"a20d69a3-6d9e-4066-b3d5-62eb5897451c\") " Oct 13 17:39:48 crc kubenswrapper[4720]: I1013 17:39:48.480150 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bnpwt\" (UniqueName: \"kubernetes.io/projected/a20d69a3-6d9e-4066-b3d5-62eb5897451c-kube-api-access-bnpwt\") pod \"a20d69a3-6d9e-4066-b3d5-62eb5897451c\" (UID: \"a20d69a3-6d9e-4066-b3d5-62eb5897451c\") " Oct 13 17:39:48 crc kubenswrapper[4720]: I1013 17:39:48.486089 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a20d69a3-6d9e-4066-b3d5-62eb5897451c-kube-api-access-bnpwt" (OuterVolumeSpecName: "kube-api-access-bnpwt") pod "a20d69a3-6d9e-4066-b3d5-62eb5897451c" (UID: "a20d69a3-6d9e-4066-b3d5-62eb5897451c"). InnerVolumeSpecName "kube-api-access-bnpwt". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:39:48 crc kubenswrapper[4720]: I1013 17:39:48.522943 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a20d69a3-6d9e-4066-b3d5-62eb5897451c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a20d69a3-6d9e-4066-b3d5-62eb5897451c" (UID: "a20d69a3-6d9e-4066-b3d5-62eb5897451c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:39:48 crc kubenswrapper[4720]: I1013 17:39:48.560946 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a20d69a3-6d9e-4066-b3d5-62eb5897451c-config-data" (OuterVolumeSpecName: "config-data") pod "a20d69a3-6d9e-4066-b3d5-62eb5897451c" (UID: "a20d69a3-6d9e-4066-b3d5-62eb5897451c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:39:48 crc kubenswrapper[4720]: I1013 17:39:48.581241 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b035e5a0-e44b-4897-bcfa-c7112b8eee2d-combined-ca-bundle\") pod \"b035e5a0-e44b-4897-bcfa-c7112b8eee2d\" (UID: \"b035e5a0-e44b-4897-bcfa-c7112b8eee2d\") " Oct 13 17:39:48 crc kubenswrapper[4720]: I1013 17:39:48.581295 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b035e5a0-e44b-4897-bcfa-c7112b8eee2d-config-data\") pod \"b035e5a0-e44b-4897-bcfa-c7112b8eee2d\" (UID: \"b035e5a0-e44b-4897-bcfa-c7112b8eee2d\") " Oct 13 17:39:48 crc kubenswrapper[4720]: I1013 17:39:48.581334 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b035e5a0-e44b-4897-bcfa-c7112b8eee2d-db-sync-config-data\") pod \"b035e5a0-e44b-4897-bcfa-c7112b8eee2d\" (UID: \"b035e5a0-e44b-4897-bcfa-c7112b8eee2d\") " Oct 13 17:39:48 crc kubenswrapper[4720]: I1013 17:39:48.581369 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bl9lk\" (UniqueName: \"kubernetes.io/projected/b035e5a0-e44b-4897-bcfa-c7112b8eee2d-kube-api-access-bl9lk\") pod \"b035e5a0-e44b-4897-bcfa-c7112b8eee2d\" (UID: \"b035e5a0-e44b-4897-bcfa-c7112b8eee2d\") " Oct 13 17:39:48 crc kubenswrapper[4720]: I1013 17:39:48.581826 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a20d69a3-6d9e-4066-b3d5-62eb5897451c-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 17:39:48 crc kubenswrapper[4720]: I1013 17:39:48.581850 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bnpwt\" (UniqueName: \"kubernetes.io/projected/a20d69a3-6d9e-4066-b3d5-62eb5897451c-kube-api-access-bnpwt\") on node \"crc\" DevicePath \"\"" Oct 13 17:39:48 crc kubenswrapper[4720]: I1013 17:39:48.581864 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a20d69a3-6d9e-4066-b3d5-62eb5897451c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 17:39:48 crc kubenswrapper[4720]: I1013 17:39:48.586653 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b035e5a0-e44b-4897-bcfa-c7112b8eee2d-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "b035e5a0-e44b-4897-bcfa-c7112b8eee2d" (UID: "b035e5a0-e44b-4897-bcfa-c7112b8eee2d"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:39:48 crc kubenswrapper[4720]: I1013 17:39:48.587334 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b035e5a0-e44b-4897-bcfa-c7112b8eee2d-kube-api-access-bl9lk" (OuterVolumeSpecName: "kube-api-access-bl9lk") pod "b035e5a0-e44b-4897-bcfa-c7112b8eee2d" (UID: "b035e5a0-e44b-4897-bcfa-c7112b8eee2d"). InnerVolumeSpecName "kube-api-access-bl9lk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:39:48 crc kubenswrapper[4720]: I1013 17:39:48.601827 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b035e5a0-e44b-4897-bcfa-c7112b8eee2d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b035e5a0-e44b-4897-bcfa-c7112b8eee2d" (UID: "b035e5a0-e44b-4897-bcfa-c7112b8eee2d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:39:48 crc kubenswrapper[4720]: I1013 17:39:48.658342 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b035e5a0-e44b-4897-bcfa-c7112b8eee2d-config-data" (OuterVolumeSpecName: "config-data") pod "b035e5a0-e44b-4897-bcfa-c7112b8eee2d" (UID: "b035e5a0-e44b-4897-bcfa-c7112b8eee2d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:39:48 crc kubenswrapper[4720]: I1013 17:39:48.682964 4720 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b035e5a0-e44b-4897-bcfa-c7112b8eee2d-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 17:39:48 crc kubenswrapper[4720]: I1013 17:39:48.683240 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bl9lk\" (UniqueName: \"kubernetes.io/projected/b035e5a0-e44b-4897-bcfa-c7112b8eee2d-kube-api-access-bl9lk\") on node \"crc\" DevicePath \"\"" Oct 13 17:39:48 crc kubenswrapper[4720]: I1013 17:39:48.683256 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b035e5a0-e44b-4897-bcfa-c7112b8eee2d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 17:39:48 crc kubenswrapper[4720]: I1013 17:39:48.683270 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b035e5a0-e44b-4897-bcfa-c7112b8eee2d-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 17:39:48 crc kubenswrapper[4720]: I1013 17:39:48.925458 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-4f6d4" event={"ID":"a20d69a3-6d9e-4066-b3d5-62eb5897451c","Type":"ContainerDied","Data":"a3a82bc4ac272c563b8ddb9617d1d63be376daa70514e43df19d554e43e590b8"} Oct 13 17:39:48 crc kubenswrapper[4720]: I1013 17:39:48.925502 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3a82bc4ac272c563b8ddb9617d1d63be376daa70514e43df19d554e43e590b8" Oct 13 17:39:48 crc kubenswrapper[4720]: I1013 17:39:48.925530 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-4f6d4" Oct 13 17:39:48 crc kubenswrapper[4720]: I1013 17:39:48.927361 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-99x2r" event={"ID":"b035e5a0-e44b-4897-bcfa-c7112b8eee2d","Type":"ContainerDied","Data":"a67d7049c0fbd6ad1b31232134112fe87b43dea763372e3bf00d3b3f6c475827"} Oct 13 17:39:48 crc kubenswrapper[4720]: I1013 17:39:48.927408 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a67d7049c0fbd6ad1b31232134112fe87b43dea763372e3bf00d3b3f6c475827" Oct 13 17:39:48 crc kubenswrapper[4720]: I1013 17:39:48.927474 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-99x2r" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.139998 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-ndkhx"] Oct 13 17:39:49 crc kubenswrapper[4720]: E1013 17:39:49.140381 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2442614f-edaa-4e64-9ed1-fc0520a37cfd" containerName="dnsmasq-dns" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.140401 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="2442614f-edaa-4e64-9ed1-fc0520a37cfd" containerName="dnsmasq-dns" Oct 13 17:39:49 crc kubenswrapper[4720]: E1013 17:39:49.140415 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2442614f-edaa-4e64-9ed1-fc0520a37cfd" containerName="init" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.140424 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="2442614f-edaa-4e64-9ed1-fc0520a37cfd" containerName="init" Oct 13 17:39:49 crc kubenswrapper[4720]: E1013 17:39:49.140443 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="939e2da6-73bd-4929-8194-fa3d63d023bf" containerName="mariadb-database-create" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.140452 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="939e2da6-73bd-4929-8194-fa3d63d023bf" containerName="mariadb-database-create" Oct 13 17:39:49 crc kubenswrapper[4720]: E1013 17:39:49.140471 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="392ec994-30b1-4ab2-b27d-fd2c59bd2eef" containerName="mariadb-database-create" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.140479 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="392ec994-30b1-4ab2-b27d-fd2c59bd2eef" containerName="mariadb-database-create" Oct 13 17:39:49 crc kubenswrapper[4720]: E1013 17:39:49.140500 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="289efdb4-70d6-47fe-929d-c6c659cf5117" containerName="mariadb-database-create" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.140509 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="289efdb4-70d6-47fe-929d-c6c659cf5117" containerName="mariadb-database-create" Oct 13 17:39:49 crc kubenswrapper[4720]: E1013 17:39:49.140519 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b035e5a0-e44b-4897-bcfa-c7112b8eee2d" containerName="glance-db-sync" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.140528 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="b035e5a0-e44b-4897-bcfa-c7112b8eee2d" containerName="glance-db-sync" Oct 13 17:39:49 crc kubenswrapper[4720]: E1013 17:39:49.140547 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a20d69a3-6d9e-4066-b3d5-62eb5897451c" containerName="keystone-db-sync" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.140556 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="a20d69a3-6d9e-4066-b3d5-62eb5897451c" containerName="keystone-db-sync" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.140757 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="392ec994-30b1-4ab2-b27d-fd2c59bd2eef" containerName="mariadb-database-create" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.140771 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="b035e5a0-e44b-4897-bcfa-c7112b8eee2d" containerName="glance-db-sync" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.140785 4720 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="a20d69a3-6d9e-4066-b3d5-62eb5897451c" containerName="keystone-db-sync" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.140806 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="289efdb4-70d6-47fe-929d-c6c659cf5117" containerName="mariadb-database-create" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.140815 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="2442614f-edaa-4e64-9ed1-fc0520a37cfd" containerName="dnsmasq-dns" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.140834 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="939e2da6-73bd-4929-8194-fa3d63d023bf" containerName="mariadb-database-create" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.144529 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-ndkhx" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.148582 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.148810 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.149050 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-gnxbs" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.149223 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.164542 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-ndkhx"] Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.182953 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55fff446b9-8dwkl"] Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.184138 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55fff446b9-8dwkl" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.233332 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55fff446b9-8dwkl"] Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.288916 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-565dc5fd65-jxjns"] Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.290384 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-565dc5fd65-jxjns" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.294405 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.294574 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.294593 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.294784 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-tzs28" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.295404 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54hmp\" (UniqueName: \"kubernetes.io/projected/7f3d3ba5-1521-4ac1-a005-871d7bb7eea0-kube-api-access-54hmp\") pod \"keystone-bootstrap-ndkhx\" (UID: \"7f3d3ba5-1521-4ac1-a005-871d7bb7eea0\") " pod="openstack/keystone-bootstrap-ndkhx" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.295433 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5c76a9e6-6bfd-4e6d-84ae-cffcb18cc502-ovsdbserver-sb\") pod \"dnsmasq-dns-55fff446b9-8dwkl\" (UID: \"5c76a9e6-6bfd-4e6d-84ae-cffcb18cc502\") " pod="openstack/dnsmasq-dns-55fff446b9-8dwkl" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.295467 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f3d3ba5-1521-4ac1-a005-871d7bb7eea0-scripts\") pod \"keystone-bootstrap-ndkhx\" (UID: \"7f3d3ba5-1521-4ac1-a005-871d7bb7eea0\") " pod="openstack/keystone-bootstrap-ndkhx" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.295502 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c76a9e6-6bfd-4e6d-84ae-cffcb18cc502-dns-svc\") pod \"dnsmasq-dns-55fff446b9-8dwkl\" (UID: \"5c76a9e6-6bfd-4e6d-84ae-cffcb18cc502\") " pod="openstack/dnsmasq-dns-55fff446b9-8dwkl" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.295524 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7f3d3ba5-1521-4ac1-a005-871d7bb7eea0-credential-keys\") pod \"keystone-bootstrap-ndkhx\" (UID: \"7f3d3ba5-1521-4ac1-a005-871d7bb7eea0\") " pod="openstack/keystone-bootstrap-ndkhx" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.295539 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ps5h7\" (UniqueName: \"kubernetes.io/projected/5c76a9e6-6bfd-4e6d-84ae-cffcb18cc502-kube-api-access-ps5h7\") pod \"dnsmasq-dns-55fff446b9-8dwkl\" (UID: \"5c76a9e6-6bfd-4e6d-84ae-cffcb18cc502\") " pod="openstack/dnsmasq-dns-55fff446b9-8dwkl" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.295556 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7f3d3ba5-1521-4ac1-a005-871d7bb7eea0-fernet-keys\") pod \"keystone-bootstrap-ndkhx\" (UID: \"7f3d3ba5-1521-4ac1-a005-871d7bb7eea0\") " pod="openstack/keystone-bootstrap-ndkhx" Oct 13 17:39:49 crc 
kubenswrapper[4720]: I1013 17:39:49.295619 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c76a9e6-6bfd-4e6d-84ae-cffcb18cc502-config\") pod \"dnsmasq-dns-55fff446b9-8dwkl\" (UID: \"5c76a9e6-6bfd-4e6d-84ae-cffcb18cc502\") " pod="openstack/dnsmasq-dns-55fff446b9-8dwkl" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.295648 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5c76a9e6-6bfd-4e6d-84ae-cffcb18cc502-dns-swift-storage-0\") pod \"dnsmasq-dns-55fff446b9-8dwkl\" (UID: \"5c76a9e6-6bfd-4e6d-84ae-cffcb18cc502\") " pod="openstack/dnsmasq-dns-55fff446b9-8dwkl" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.295675 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f3d3ba5-1521-4ac1-a005-871d7bb7eea0-combined-ca-bundle\") pod \"keystone-bootstrap-ndkhx\" (UID: \"7f3d3ba5-1521-4ac1-a005-871d7bb7eea0\") " pod="openstack/keystone-bootstrap-ndkhx" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.295706 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5c76a9e6-6bfd-4e6d-84ae-cffcb18cc502-ovsdbserver-nb\") pod \"dnsmasq-dns-55fff446b9-8dwkl\" (UID: \"5c76a9e6-6bfd-4e6d-84ae-cffcb18cc502\") " pod="openstack/dnsmasq-dns-55fff446b9-8dwkl" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.295723 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f3d3ba5-1521-4ac1-a005-871d7bb7eea0-config-data\") pod \"keystone-bootstrap-ndkhx\" (UID: \"7f3d3ba5-1521-4ac1-a005-871d7bb7eea0\") " pod="openstack/keystone-bootstrap-ndkhx" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.305701 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-565dc5fd65-jxjns"] Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.396791 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-bf8c4749f-wv7s9"] Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.398063 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/182cb87f-6f29-43c2-932e-f9de187d4fa0-config-data\") pod \"horizon-565dc5fd65-jxjns\" (UID: \"182cb87f-6f29-43c2-932e-f9de187d4fa0\") " pod="openstack/horizon-565dc5fd65-jxjns" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.398106 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-bf8c4749f-wv7s9" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.398118 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/182cb87f-6f29-43c2-932e-f9de187d4fa0-scripts\") pod \"horizon-565dc5fd65-jxjns\" (UID: \"182cb87f-6f29-43c2-932e-f9de187d4fa0\") " pod="openstack/horizon-565dc5fd65-jxjns" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.398161 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c76a9e6-6bfd-4e6d-84ae-cffcb18cc502-config\") pod \"dnsmasq-dns-55fff446b9-8dwkl\" (UID: \"5c76a9e6-6bfd-4e6d-84ae-cffcb18cc502\") " pod="openstack/dnsmasq-dns-55fff446b9-8dwkl" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.398217 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5c76a9e6-6bfd-4e6d-84ae-cffcb18cc502-dns-swift-storage-0\") pod \"dnsmasq-dns-55fff446b9-8dwkl\" (UID: \"5c76a9e6-6bfd-4e6d-84ae-cffcb18cc502\") " pod="openstack/dnsmasq-dns-55fff446b9-8dwkl" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.398243 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f3d3ba5-1521-4ac1-a005-871d7bb7eea0-combined-ca-bundle\") pod \"keystone-bootstrap-ndkhx\" (UID: \"7f3d3ba5-1521-4ac1-a005-871d7bb7eea0\") " pod="openstack/keystone-bootstrap-ndkhx" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.398266 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/182cb87f-6f29-43c2-932e-f9de187d4fa0-logs\") pod \"horizon-565dc5fd65-jxjns\" (UID: \"182cb87f-6f29-43c2-932e-f9de187d4fa0\") " pod="openstack/horizon-565dc5fd65-jxjns" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.398282 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5c76a9e6-6bfd-4e6d-84ae-cffcb18cc502-ovsdbserver-nb\") pod \"dnsmasq-dns-55fff446b9-8dwkl\" (UID: \"5c76a9e6-6bfd-4e6d-84ae-cffcb18cc502\") " pod="openstack/dnsmasq-dns-55fff446b9-8dwkl" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.398302 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f3d3ba5-1521-4ac1-a005-871d7bb7eea0-config-data\") pod \"keystone-bootstrap-ndkhx\" (UID: \"7f3d3ba5-1521-4ac1-a005-871d7bb7eea0\") " pod="openstack/keystone-bootstrap-ndkhx" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.398326 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54hmp\" (UniqueName: \"kubernetes.io/projected/7f3d3ba5-1521-4ac1-a005-871d7bb7eea0-kube-api-access-54hmp\") pod \"keystone-bootstrap-ndkhx\" (UID: \"7f3d3ba5-1521-4ac1-a005-871d7bb7eea0\") " pod="openstack/keystone-bootstrap-ndkhx" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.398344 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5c76a9e6-6bfd-4e6d-84ae-cffcb18cc502-ovsdbserver-sb\") pod \"dnsmasq-dns-55fff446b9-8dwkl\" (UID: \"5c76a9e6-6bfd-4e6d-84ae-cffcb18cc502\") " pod="openstack/dnsmasq-dns-55fff446b9-8dwkl" Oct 13 17:39:49 crc 
kubenswrapper[4720]: I1013 17:39:49.398370 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f3d3ba5-1521-4ac1-a005-871d7bb7eea0-scripts\") pod \"keystone-bootstrap-ndkhx\" (UID: \"7f3d3ba5-1521-4ac1-a005-871d7bb7eea0\") " pod="openstack/keystone-bootstrap-ndkhx" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.398393 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/182cb87f-6f29-43c2-932e-f9de187d4fa0-horizon-secret-key\") pod \"horizon-565dc5fd65-jxjns\" (UID: \"182cb87f-6f29-43c2-932e-f9de187d4fa0\") " pod="openstack/horizon-565dc5fd65-jxjns" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.398409 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxn7j\" (UniqueName: \"kubernetes.io/projected/182cb87f-6f29-43c2-932e-f9de187d4fa0-kube-api-access-gxn7j\") pod \"horizon-565dc5fd65-jxjns\" (UID: \"182cb87f-6f29-43c2-932e-f9de187d4fa0\") " pod="openstack/horizon-565dc5fd65-jxjns" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.398437 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c76a9e6-6bfd-4e6d-84ae-cffcb18cc502-dns-svc\") pod \"dnsmasq-dns-55fff446b9-8dwkl\" (UID: \"5c76a9e6-6bfd-4e6d-84ae-cffcb18cc502\") " pod="openstack/dnsmasq-dns-55fff446b9-8dwkl" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.398459 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7f3d3ba5-1521-4ac1-a005-871d7bb7eea0-credential-keys\") pod \"keystone-bootstrap-ndkhx\" (UID: \"7f3d3ba5-1521-4ac1-a005-871d7bb7eea0\") " pod="openstack/keystone-bootstrap-ndkhx" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.398475 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ps5h7\" (UniqueName: \"kubernetes.io/projected/5c76a9e6-6bfd-4e6d-84ae-cffcb18cc502-kube-api-access-ps5h7\") pod \"dnsmasq-dns-55fff446b9-8dwkl\" (UID: \"5c76a9e6-6bfd-4e6d-84ae-cffcb18cc502\") " pod="openstack/dnsmasq-dns-55fff446b9-8dwkl" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.398492 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7f3d3ba5-1521-4ac1-a005-871d7bb7eea0-fernet-keys\") pod \"keystone-bootstrap-ndkhx\" (UID: \"7f3d3ba5-1521-4ac1-a005-871d7bb7eea0\") " pod="openstack/keystone-bootstrap-ndkhx" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.401647 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5c76a9e6-6bfd-4e6d-84ae-cffcb18cc502-dns-swift-storage-0\") pod \"dnsmasq-dns-55fff446b9-8dwkl\" (UID: \"5c76a9e6-6bfd-4e6d-84ae-cffcb18cc502\") " pod="openstack/dnsmasq-dns-55fff446b9-8dwkl" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.402264 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c76a9e6-6bfd-4e6d-84ae-cffcb18cc502-config\") pod \"dnsmasq-dns-55fff446b9-8dwkl\" (UID: \"5c76a9e6-6bfd-4e6d-84ae-cffcb18cc502\") " pod="openstack/dnsmasq-dns-55fff446b9-8dwkl" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.402795 4720 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f3d3ba5-1521-4ac1-a005-871d7bb7eea0-config-data\") pod \"keystone-bootstrap-ndkhx\" (UID: \"7f3d3ba5-1521-4ac1-a005-871d7bb7eea0\") " pod="openstack/keystone-bootstrap-ndkhx" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.406728 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5c76a9e6-6bfd-4e6d-84ae-cffcb18cc502-ovsdbserver-sb\") pod \"dnsmasq-dns-55fff446b9-8dwkl\" (UID: \"5c76a9e6-6bfd-4e6d-84ae-cffcb18cc502\") " pod="openstack/dnsmasq-dns-55fff446b9-8dwkl" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.407816 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5c76a9e6-6bfd-4e6d-84ae-cffcb18cc502-ovsdbserver-nb\") pod \"dnsmasq-dns-55fff446b9-8dwkl\" (UID: \"5c76a9e6-6bfd-4e6d-84ae-cffcb18cc502\") " pod="openstack/dnsmasq-dns-55fff446b9-8dwkl" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.408327 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c76a9e6-6bfd-4e6d-84ae-cffcb18cc502-dns-svc\") pod \"dnsmasq-dns-55fff446b9-8dwkl\" (UID: \"5c76a9e6-6bfd-4e6d-84ae-cffcb18cc502\") " pod="openstack/dnsmasq-dns-55fff446b9-8dwkl" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.409573 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f3d3ba5-1521-4ac1-a005-871d7bb7eea0-combined-ca-bundle\") pod \"keystone-bootstrap-ndkhx\" (UID: \"7f3d3ba5-1521-4ac1-a005-871d7bb7eea0\") " pod="openstack/keystone-bootstrap-ndkhx" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.409687 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-bf8c4749f-wv7s9"] Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.413989 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f3d3ba5-1521-4ac1-a005-871d7bb7eea0-scripts\") pod \"keystone-bootstrap-ndkhx\" (UID: \"7f3d3ba5-1521-4ac1-a005-871d7bb7eea0\") " pod="openstack/keystone-bootstrap-ndkhx" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.414659 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7f3d3ba5-1521-4ac1-a005-871d7bb7eea0-fernet-keys\") pod \"keystone-bootstrap-ndkhx\" (UID: \"7f3d3ba5-1521-4ac1-a005-871d7bb7eea0\") " pod="openstack/keystone-bootstrap-ndkhx" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.425889 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7f3d3ba5-1521-4ac1-a005-871d7bb7eea0-credential-keys\") pod \"keystone-bootstrap-ndkhx\" (UID: \"7f3d3ba5-1521-4ac1-a005-871d7bb7eea0\") " pod="openstack/keystone-bootstrap-ndkhx" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.441014 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ps5h7\" (UniqueName: \"kubernetes.io/projected/5c76a9e6-6bfd-4e6d-84ae-cffcb18cc502-kube-api-access-ps5h7\") pod \"dnsmasq-dns-55fff446b9-8dwkl\" (UID: \"5c76a9e6-6bfd-4e6d-84ae-cffcb18cc502\") " pod="openstack/dnsmasq-dns-55fff446b9-8dwkl" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.456949 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-54hmp\" (UniqueName: \"kubernetes.io/projected/7f3d3ba5-1521-4ac1-a005-871d7bb7eea0-kube-api-access-54hmp\") pod \"keystone-bootstrap-ndkhx\" (UID: \"7f3d3ba5-1521-4ac1-a005-871d7bb7eea0\") " pod="openstack/keystone-bootstrap-ndkhx" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.468938 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-ndkhx" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.491435 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.493540 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.500972 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.501248 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.502596 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rz4dv\" (UniqueName: \"kubernetes.io/projected/0711763b-3ada-4180-b697-ad4911f48641-kube-api-access-rz4dv\") pod \"horizon-bf8c4749f-wv7s9\" (UID: \"0711763b-3ada-4180-b697-ad4911f48641\") " pod="openstack/horizon-bf8c4749f-wv7s9" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.502665 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/182cb87f-6f29-43c2-932e-f9de187d4fa0-logs\") pod \"horizon-565dc5fd65-jxjns\" (UID: \"182cb87f-6f29-43c2-932e-f9de187d4fa0\") " pod="openstack/horizon-565dc5fd65-jxjns" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.502688 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0711763b-3ada-4180-b697-ad4911f48641-logs\") pod \"horizon-bf8c4749f-wv7s9\" (UID: \"0711763b-3ada-4180-b697-ad4911f48641\") " pod="openstack/horizon-bf8c4749f-wv7s9" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.502740 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0711763b-3ada-4180-b697-ad4911f48641-scripts\") pod \"horizon-bf8c4749f-wv7s9\" (UID: \"0711763b-3ada-4180-b697-ad4911f48641\") " pod="openstack/horizon-bf8c4749f-wv7s9" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.502763 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0711763b-3ada-4180-b697-ad4911f48641-config-data\") pod \"horizon-bf8c4749f-wv7s9\" (UID: \"0711763b-3ada-4180-b697-ad4911f48641\") " pod="openstack/horizon-bf8c4749f-wv7s9" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.502784 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/182cb87f-6f29-43c2-932e-f9de187d4fa0-horizon-secret-key\") pod \"horizon-565dc5fd65-jxjns\" (UID: \"182cb87f-6f29-43c2-932e-f9de187d4fa0\") " pod="openstack/horizon-565dc5fd65-jxjns" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.502801 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-gxn7j\" (UniqueName: \"kubernetes.io/projected/182cb87f-6f29-43c2-932e-f9de187d4fa0-kube-api-access-gxn7j\") pod \"horizon-565dc5fd65-jxjns\" (UID: \"182cb87f-6f29-43c2-932e-f9de187d4fa0\") " pod="openstack/horizon-565dc5fd65-jxjns" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.502834 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0711763b-3ada-4180-b697-ad4911f48641-horizon-secret-key\") pod \"horizon-bf8c4749f-wv7s9\" (UID: \"0711763b-3ada-4180-b697-ad4911f48641\") " pod="openstack/horizon-bf8c4749f-wv7s9" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.502875 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/182cb87f-6f29-43c2-932e-f9de187d4fa0-config-data\") pod \"horizon-565dc5fd65-jxjns\" (UID: \"182cb87f-6f29-43c2-932e-f9de187d4fa0\") " pod="openstack/horizon-565dc5fd65-jxjns" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.502892 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/182cb87f-6f29-43c2-932e-f9de187d4fa0-scripts\") pod \"horizon-565dc5fd65-jxjns\" (UID: \"182cb87f-6f29-43c2-932e-f9de187d4fa0\") " pod="openstack/horizon-565dc5fd65-jxjns" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.503062 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/182cb87f-6f29-43c2-932e-f9de187d4fa0-logs\") pod \"horizon-565dc5fd65-jxjns\" (UID: \"182cb87f-6f29-43c2-932e-f9de187d4fa0\") " pod="openstack/horizon-565dc5fd65-jxjns" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.504492 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/182cb87f-6f29-43c2-932e-f9de187d4fa0-config-data\") pod \"horizon-565dc5fd65-jxjns\" (UID: \"182cb87f-6f29-43c2-932e-f9de187d4fa0\") " pod="openstack/horizon-565dc5fd65-jxjns" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.505345 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.510322 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/182cb87f-6f29-43c2-932e-f9de187d4fa0-scripts\") pod \"horizon-565dc5fd65-jxjns\" (UID: \"182cb87f-6f29-43c2-932e-f9de187d4fa0\") " pod="openstack/horizon-565dc5fd65-jxjns" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.512922 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/182cb87f-6f29-43c2-932e-f9de187d4fa0-horizon-secret-key\") pod \"horizon-565dc5fd65-jxjns\" (UID: \"182cb87f-6f29-43c2-932e-f9de187d4fa0\") " pod="openstack/horizon-565dc5fd65-jxjns" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.534764 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxn7j\" (UniqueName: \"kubernetes.io/projected/182cb87f-6f29-43c2-932e-f9de187d4fa0-kube-api-access-gxn7j\") pod \"horizon-565dc5fd65-jxjns\" (UID: \"182cb87f-6f29-43c2-932e-f9de187d4fa0\") " pod="openstack/horizon-565dc5fd65-jxjns" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.544252 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55fff446b9-8dwkl"] 
Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.544793 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55fff446b9-8dwkl" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.594159 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-2htwh"] Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.595423 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-2htwh" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.597968 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.602071 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.602089 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-jdjtc" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.603762 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rz4dv\" (UniqueName: \"kubernetes.io/projected/0711763b-3ada-4180-b697-ad4911f48641-kube-api-access-rz4dv\") pod \"horizon-bf8c4749f-wv7s9\" (UID: \"0711763b-3ada-4180-b697-ad4911f48641\") " pod="openstack/horizon-bf8c4749f-wv7s9" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.603802 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d3623f2-ec27-4ad6-8cb4-553cb0527e15-scripts\") pod \"ceilometer-0\" (UID: \"6d3623f2-ec27-4ad6-8cb4-553cb0527e15\") " pod="openstack/ceilometer-0" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.603822 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d3623f2-ec27-4ad6-8cb4-553cb0527e15-config-data\") pod \"ceilometer-0\" (UID: \"6d3623f2-ec27-4ad6-8cb4-553cb0527e15\") " pod="openstack/ceilometer-0" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.603846 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0711763b-3ada-4180-b697-ad4911f48641-logs\") pod \"horizon-bf8c4749f-wv7s9\" (UID: \"0711763b-3ada-4180-b697-ad4911f48641\") " pod="openstack/horizon-bf8c4749f-wv7s9" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.603878 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6d3623f2-ec27-4ad6-8cb4-553cb0527e15-run-httpd\") pod \"ceilometer-0\" (UID: \"6d3623f2-ec27-4ad6-8cb4-553cb0527e15\") " pod="openstack/ceilometer-0" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.603899 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bth4\" (UniqueName: \"kubernetes.io/projected/6d3623f2-ec27-4ad6-8cb4-553cb0527e15-kube-api-access-4bth4\") pod \"ceilometer-0\" (UID: \"6d3623f2-ec27-4ad6-8cb4-553cb0527e15\") " pod="openstack/ceilometer-0" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.603925 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0711763b-3ada-4180-b697-ad4911f48641-scripts\") pod \"horizon-bf8c4749f-wv7s9\" (UID: 
\"0711763b-3ada-4180-b697-ad4911f48641\") " pod="openstack/horizon-bf8c4749f-wv7s9" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.603949 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6d3623f2-ec27-4ad6-8cb4-553cb0527e15-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6d3623f2-ec27-4ad6-8cb4-553cb0527e15\") " pod="openstack/ceilometer-0" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.603965 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0711763b-3ada-4180-b697-ad4911f48641-config-data\") pod \"horizon-bf8c4749f-wv7s9\" (UID: \"0711763b-3ada-4180-b697-ad4911f48641\") " pod="openstack/horizon-bf8c4749f-wv7s9" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.603990 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d3623f2-ec27-4ad6-8cb4-553cb0527e15-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6d3623f2-ec27-4ad6-8cb4-553cb0527e15\") " pod="openstack/ceilometer-0" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.604010 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6d3623f2-ec27-4ad6-8cb4-553cb0527e15-log-httpd\") pod \"ceilometer-0\" (UID: \"6d3623f2-ec27-4ad6-8cb4-553cb0527e15\") " pod="openstack/ceilometer-0" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.604031 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0711763b-3ada-4180-b697-ad4911f48641-horizon-secret-key\") pod \"horizon-bf8c4749f-wv7s9\" (UID: \"0711763b-3ada-4180-b697-ad4911f48641\") " pod="openstack/horizon-bf8c4749f-wv7s9" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.604650 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0711763b-3ada-4180-b697-ad4911f48641-logs\") pod \"horizon-bf8c4749f-wv7s9\" (UID: \"0711763b-3ada-4180-b697-ad4911f48641\") " pod="openstack/horizon-bf8c4749f-wv7s9" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.605079 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0711763b-3ada-4180-b697-ad4911f48641-scripts\") pod \"horizon-bf8c4749f-wv7s9\" (UID: \"0711763b-3ada-4180-b697-ad4911f48641\") " pod="openstack/horizon-bf8c4749f-wv7s9" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.609034 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0711763b-3ada-4180-b697-ad4911f48641-config-data\") pod \"horizon-bf8c4749f-wv7s9\" (UID: \"0711763b-3ada-4180-b697-ad4911f48641\") " pod="openstack/horizon-bf8c4749f-wv7s9" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.613069 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-jdqmt"] Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.613540 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0711763b-3ada-4180-b697-ad4911f48641-horizon-secret-key\") pod \"horizon-bf8c4749f-wv7s9\" (UID: \"0711763b-3ada-4180-b697-ad4911f48641\") " 
pod="openstack/horizon-bf8c4749f-wv7s9" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.614501 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-jdqmt" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.624781 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-2htwh"] Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.628636 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-565dc5fd65-jxjns" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.634675 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rz4dv\" (UniqueName: \"kubernetes.io/projected/0711763b-3ada-4180-b697-ad4911f48641-kube-api-access-rz4dv\") pod \"horizon-bf8c4749f-wv7s9\" (UID: \"0711763b-3ada-4180-b697-ad4911f48641\") " pod="openstack/horizon-bf8c4749f-wv7s9" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.636383 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-jdqmt"] Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.706474 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/444e35b8-1d2a-4d83-be6c-2184ae0e3110-config-data\") pod \"placement-db-sync-2htwh\" (UID: \"444e35b8-1d2a-4d83-be6c-2184ae0e3110\") " pod="openstack/placement-db-sync-2htwh" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.706522 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d3623f2-ec27-4ad6-8cb4-553cb0527e15-scripts\") pod \"ceilometer-0\" (UID: \"6d3623f2-ec27-4ad6-8cb4-553cb0527e15\") " pod="openstack/ceilometer-0" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.706541 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d3623f2-ec27-4ad6-8cb4-553cb0527e15-config-data\") pod \"ceilometer-0\" (UID: \"6d3623f2-ec27-4ad6-8cb4-553cb0527e15\") " pod="openstack/ceilometer-0" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.708414 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/444e35b8-1d2a-4d83-be6c-2184ae0e3110-scripts\") pod \"placement-db-sync-2htwh\" (UID: \"444e35b8-1d2a-4d83-be6c-2184ae0e3110\") " pod="openstack/placement-db-sync-2htwh" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.708443 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6d3623f2-ec27-4ad6-8cb4-553cb0527e15-run-httpd\") pod \"ceilometer-0\" (UID: \"6d3623f2-ec27-4ad6-8cb4-553cb0527e15\") " pod="openstack/ceilometer-0" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.708474 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bth4\" (UniqueName: \"kubernetes.io/projected/6d3623f2-ec27-4ad6-8cb4-553cb0527e15-kube-api-access-4bth4\") pod \"ceilometer-0\" (UID: \"6d3623f2-ec27-4ad6-8cb4-553cb0527e15\") " pod="openstack/ceilometer-0" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.708525 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/444e35b8-1d2a-4d83-be6c-2184ae0e3110-combined-ca-bundle\") pod \"placement-db-sync-2htwh\" (UID: \"444e35b8-1d2a-4d83-be6c-2184ae0e3110\") " pod="openstack/placement-db-sync-2htwh" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.708561 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6d3623f2-ec27-4ad6-8cb4-553cb0527e15-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6d3623f2-ec27-4ad6-8cb4-553cb0527e15\") " pod="openstack/ceilometer-0" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.708577 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/35fb0ae5-1450-4f85-90f5-16a18667a582-ovsdbserver-nb\") pod \"dnsmasq-dns-8b5c85b87-jdqmt\" (UID: \"35fb0ae5-1450-4f85-90f5-16a18667a582\") " pod="openstack/dnsmasq-dns-8b5c85b87-jdqmt" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.708605 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/35fb0ae5-1450-4f85-90f5-16a18667a582-dns-swift-storage-0\") pod \"dnsmasq-dns-8b5c85b87-jdqmt\" (UID: \"35fb0ae5-1450-4f85-90f5-16a18667a582\") " pod="openstack/dnsmasq-dns-8b5c85b87-jdqmt" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.708635 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d3623f2-ec27-4ad6-8cb4-553cb0527e15-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6d3623f2-ec27-4ad6-8cb4-553cb0527e15\") " pod="openstack/ceilometer-0" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.708657 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnndp\" (UniqueName: \"kubernetes.io/projected/444e35b8-1d2a-4d83-be6c-2184ae0e3110-kube-api-access-vnndp\") pod \"placement-db-sync-2htwh\" (UID: \"444e35b8-1d2a-4d83-be6c-2184ae0e3110\") " pod="openstack/placement-db-sync-2htwh" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.708673 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/35fb0ae5-1450-4f85-90f5-16a18667a582-ovsdbserver-sb\") pod \"dnsmasq-dns-8b5c85b87-jdqmt\" (UID: \"35fb0ae5-1450-4f85-90f5-16a18667a582\") " pod="openstack/dnsmasq-dns-8b5c85b87-jdqmt" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.708692 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6d3623f2-ec27-4ad6-8cb4-553cb0527e15-log-httpd\") pod \"ceilometer-0\" (UID: \"6d3623f2-ec27-4ad6-8cb4-553cb0527e15\") " pod="openstack/ceilometer-0" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.708734 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/35fb0ae5-1450-4f85-90f5-16a18667a582-dns-svc\") pod \"dnsmasq-dns-8b5c85b87-jdqmt\" (UID: \"35fb0ae5-1450-4f85-90f5-16a18667a582\") " pod="openstack/dnsmasq-dns-8b5c85b87-jdqmt" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.708781 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/35fb0ae5-1450-4f85-90f5-16a18667a582-config\") pod \"dnsmasq-dns-8b5c85b87-jdqmt\" (UID: \"35fb0ae5-1450-4f85-90f5-16a18667a582\") " pod="openstack/dnsmasq-dns-8b5c85b87-jdqmt" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.708798 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/444e35b8-1d2a-4d83-be6c-2184ae0e3110-logs\") pod \"placement-db-sync-2htwh\" (UID: \"444e35b8-1d2a-4d83-be6c-2184ae0e3110\") " pod="openstack/placement-db-sync-2htwh" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.708823 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lnqx\" (UniqueName: \"kubernetes.io/projected/35fb0ae5-1450-4f85-90f5-16a18667a582-kube-api-access-8lnqx\") pod \"dnsmasq-dns-8b5c85b87-jdqmt\" (UID: \"35fb0ae5-1450-4f85-90f5-16a18667a582\") " pod="openstack/dnsmasq-dns-8b5c85b87-jdqmt" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.709678 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6d3623f2-ec27-4ad6-8cb4-553cb0527e15-run-httpd\") pod \"ceilometer-0\" (UID: \"6d3623f2-ec27-4ad6-8cb4-553cb0527e15\") " pod="openstack/ceilometer-0" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.713256 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6d3623f2-ec27-4ad6-8cb4-553cb0527e15-log-httpd\") pod \"ceilometer-0\" (UID: \"6d3623f2-ec27-4ad6-8cb4-553cb0527e15\") " pod="openstack/ceilometer-0" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.717948 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d3623f2-ec27-4ad6-8cb4-553cb0527e15-config-data\") pod \"ceilometer-0\" (UID: \"6d3623f2-ec27-4ad6-8cb4-553cb0527e15\") " pod="openstack/ceilometer-0" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.725724 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6d3623f2-ec27-4ad6-8cb4-553cb0527e15-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6d3623f2-ec27-4ad6-8cb4-553cb0527e15\") " pod="openstack/ceilometer-0" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.729870 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d3623f2-ec27-4ad6-8cb4-553cb0527e15-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6d3623f2-ec27-4ad6-8cb4-553cb0527e15\") " pod="openstack/ceilometer-0" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.737098 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d3623f2-ec27-4ad6-8cb4-553cb0527e15-scripts\") pod \"ceilometer-0\" (UID: \"6d3623f2-ec27-4ad6-8cb4-553cb0527e15\") " pod="openstack/ceilometer-0" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.742495 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bth4\" (UniqueName: \"kubernetes.io/projected/6d3623f2-ec27-4ad6-8cb4-553cb0527e15-kube-api-access-4bth4\") pod \"ceilometer-0\" (UID: \"6d3623f2-ec27-4ad6-8cb4-553cb0527e15\") " pod="openstack/ceilometer-0" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.809843 4720 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/444e35b8-1d2a-4d83-be6c-2184ae0e3110-config-data\") pod \"placement-db-sync-2htwh\" (UID: \"444e35b8-1d2a-4d83-be6c-2184ae0e3110\") " pod="openstack/placement-db-sync-2htwh" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.810051 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/444e35b8-1d2a-4d83-be6c-2184ae0e3110-scripts\") pod \"placement-db-sync-2htwh\" (UID: \"444e35b8-1d2a-4d83-be6c-2184ae0e3110\") " pod="openstack/placement-db-sync-2htwh" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.810086 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/444e35b8-1d2a-4d83-be6c-2184ae0e3110-combined-ca-bundle\") pod \"placement-db-sync-2htwh\" (UID: \"444e35b8-1d2a-4d83-be6c-2184ae0e3110\") " pod="openstack/placement-db-sync-2htwh" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.810111 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/35fb0ae5-1450-4f85-90f5-16a18667a582-ovsdbserver-nb\") pod \"dnsmasq-dns-8b5c85b87-jdqmt\" (UID: \"35fb0ae5-1450-4f85-90f5-16a18667a582\") " pod="openstack/dnsmasq-dns-8b5c85b87-jdqmt" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.810131 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/35fb0ae5-1450-4f85-90f5-16a18667a582-dns-swift-storage-0\") pod \"dnsmasq-dns-8b5c85b87-jdqmt\" (UID: \"35fb0ae5-1450-4f85-90f5-16a18667a582\") " pod="openstack/dnsmasq-dns-8b5c85b87-jdqmt" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.810156 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnndp\" (UniqueName: \"kubernetes.io/projected/444e35b8-1d2a-4d83-be6c-2184ae0e3110-kube-api-access-vnndp\") pod \"placement-db-sync-2htwh\" (UID: \"444e35b8-1d2a-4d83-be6c-2184ae0e3110\") " pod="openstack/placement-db-sync-2htwh" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.810173 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/35fb0ae5-1450-4f85-90f5-16a18667a582-ovsdbserver-sb\") pod \"dnsmasq-dns-8b5c85b87-jdqmt\" (UID: \"35fb0ae5-1450-4f85-90f5-16a18667a582\") " pod="openstack/dnsmasq-dns-8b5c85b87-jdqmt" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.810209 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/35fb0ae5-1450-4f85-90f5-16a18667a582-dns-svc\") pod \"dnsmasq-dns-8b5c85b87-jdqmt\" (UID: \"35fb0ae5-1450-4f85-90f5-16a18667a582\") " pod="openstack/dnsmasq-dns-8b5c85b87-jdqmt" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.810237 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35fb0ae5-1450-4f85-90f5-16a18667a582-config\") pod \"dnsmasq-dns-8b5c85b87-jdqmt\" (UID: \"35fb0ae5-1450-4f85-90f5-16a18667a582\") " pod="openstack/dnsmasq-dns-8b5c85b87-jdqmt" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.810257 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/444e35b8-1d2a-4d83-be6c-2184ae0e3110-logs\") pod 
\"placement-db-sync-2htwh\" (UID: \"444e35b8-1d2a-4d83-be6c-2184ae0e3110\") " pod="openstack/placement-db-sync-2htwh" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.810275 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lnqx\" (UniqueName: \"kubernetes.io/projected/35fb0ae5-1450-4f85-90f5-16a18667a582-kube-api-access-8lnqx\") pod \"dnsmasq-dns-8b5c85b87-jdqmt\" (UID: \"35fb0ae5-1450-4f85-90f5-16a18667a582\") " pod="openstack/dnsmasq-dns-8b5c85b87-jdqmt" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.818602 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/35fb0ae5-1450-4f85-90f5-16a18667a582-ovsdbserver-nb\") pod \"dnsmasq-dns-8b5c85b87-jdqmt\" (UID: \"35fb0ae5-1450-4f85-90f5-16a18667a582\") " pod="openstack/dnsmasq-dns-8b5c85b87-jdqmt" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.819398 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/444e35b8-1d2a-4d83-be6c-2184ae0e3110-config-data\") pod \"placement-db-sync-2htwh\" (UID: \"444e35b8-1d2a-4d83-be6c-2184ae0e3110\") " pod="openstack/placement-db-sync-2htwh" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.819873 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/35fb0ae5-1450-4f85-90f5-16a18667a582-ovsdbserver-sb\") pod \"dnsmasq-dns-8b5c85b87-jdqmt\" (UID: \"35fb0ae5-1450-4f85-90f5-16a18667a582\") " pod="openstack/dnsmasq-dns-8b5c85b87-jdqmt" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.820370 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/35fb0ae5-1450-4f85-90f5-16a18667a582-dns-svc\") pod \"dnsmasq-dns-8b5c85b87-jdqmt\" (UID: \"35fb0ae5-1450-4f85-90f5-16a18667a582\") " pod="openstack/dnsmasq-dns-8b5c85b87-jdqmt" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.820584 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/444e35b8-1d2a-4d83-be6c-2184ae0e3110-logs\") pod \"placement-db-sync-2htwh\" (UID: \"444e35b8-1d2a-4d83-be6c-2184ae0e3110\") " pod="openstack/placement-db-sync-2htwh" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.824616 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35fb0ae5-1450-4f85-90f5-16a18667a582-config\") pod \"dnsmasq-dns-8b5c85b87-jdqmt\" (UID: \"35fb0ae5-1450-4f85-90f5-16a18667a582\") " pod="openstack/dnsmasq-dns-8b5c85b87-jdqmt" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.824989 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/35fb0ae5-1450-4f85-90f5-16a18667a582-dns-swift-storage-0\") pod \"dnsmasq-dns-8b5c85b87-jdqmt\" (UID: \"35fb0ae5-1450-4f85-90f5-16a18667a582\") " pod="openstack/dnsmasq-dns-8b5c85b87-jdqmt" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.836714 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/444e35b8-1d2a-4d83-be6c-2184ae0e3110-scripts\") pod \"placement-db-sync-2htwh\" (UID: \"444e35b8-1d2a-4d83-be6c-2184ae0e3110\") " pod="openstack/placement-db-sync-2htwh" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.837233 4720 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/444e35b8-1d2a-4d83-be6c-2184ae0e3110-combined-ca-bundle\") pod \"placement-db-sync-2htwh\" (UID: \"444e35b8-1d2a-4d83-be6c-2184ae0e3110\") " pod="openstack/placement-db-sync-2htwh" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.841075 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnndp\" (UniqueName: \"kubernetes.io/projected/444e35b8-1d2a-4d83-be6c-2184ae0e3110-kube-api-access-vnndp\") pod \"placement-db-sync-2htwh\" (UID: \"444e35b8-1d2a-4d83-be6c-2184ae0e3110\") " pod="openstack/placement-db-sync-2htwh" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.844105 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lnqx\" (UniqueName: \"kubernetes.io/projected/35fb0ae5-1450-4f85-90f5-16a18667a582-kube-api-access-8lnqx\") pod \"dnsmasq-dns-8b5c85b87-jdqmt\" (UID: \"35fb0ae5-1450-4f85-90f5-16a18667a582\") " pod="openstack/dnsmasq-dns-8b5c85b87-jdqmt" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.868597 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-bf8c4749f-wv7s9" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.897703 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.934357 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-2htwh" Oct 13 17:39:49 crc kubenswrapper[4720]: I1013 17:39:49.951272 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-jdqmt" Oct 13 17:39:50 crc kubenswrapper[4720]: I1013 17:39:50.173751 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-ndkhx"] Oct 13 17:39:50 crc kubenswrapper[4720]: W1013 17:39:50.180282 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7f3d3ba5_1521_4ac1_a005_871d7bb7eea0.slice/crio-b50487d5fbb8630b04d11a53a4560ac0d8712a495de23af44bd411b5cbb9988f WatchSource:0}: Error finding container b50487d5fbb8630b04d11a53a4560ac0d8712a495de23af44bd411b5cbb9988f: Status 404 returned error can't find the container with id b50487d5fbb8630b04d11a53a4560ac0d8712a495de23af44bd411b5cbb9988f Oct 13 17:39:50 crc kubenswrapper[4720]: I1013 17:39:50.293218 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55fff446b9-8dwkl"] Oct 13 17:39:50 crc kubenswrapper[4720]: I1013 17:39:50.334237 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Oct 13 17:39:50 crc kubenswrapper[4720]: I1013 17:39:50.346264 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 13 17:39:50 crc kubenswrapper[4720]: I1013 17:39:50.350917 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-fwlrr" Oct 13 17:39:50 crc kubenswrapper[4720]: I1013 17:39:50.351045 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Oct 13 17:39:50 crc kubenswrapper[4720]: I1013 17:39:50.351166 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Oct 13 17:39:50 crc kubenswrapper[4720]: I1013 17:39:50.356685 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 13 17:39:50 crc kubenswrapper[4720]: I1013 17:39:50.420895 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32985099-946d-4e5f-923c-66944d4bfdcb-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"32985099-946d-4e5f-923c-66944d4bfdcb\") " pod="openstack/glance-default-external-api-0" Oct 13 17:39:50 crc kubenswrapper[4720]: I1013 17:39:50.420964 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32985099-946d-4e5f-923c-66944d4bfdcb-logs\") pod \"glance-default-external-api-0\" (UID: \"32985099-946d-4e5f-923c-66944d4bfdcb\") " pod="openstack/glance-default-external-api-0" Oct 13 17:39:50 crc kubenswrapper[4720]: I1013 17:39:50.421018 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/32985099-946d-4e5f-923c-66944d4bfdcb-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"32985099-946d-4e5f-923c-66944d4bfdcb\") " pod="openstack/glance-default-external-api-0" Oct 13 17:39:50 crc kubenswrapper[4720]: I1013 17:39:50.421074 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsk5b\" (UniqueName: \"kubernetes.io/projected/32985099-946d-4e5f-923c-66944d4bfdcb-kube-api-access-wsk5b\") pod \"glance-default-external-api-0\" (UID: \"32985099-946d-4e5f-923c-66944d4bfdcb\") " pod="openstack/glance-default-external-api-0" Oct 13 17:39:50 crc kubenswrapper[4720]: I1013 17:39:50.421099 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32985099-946d-4e5f-923c-66944d4bfdcb-config-data\") pod \"glance-default-external-api-0\" (UID: \"32985099-946d-4e5f-923c-66944d4bfdcb\") " pod="openstack/glance-default-external-api-0" Oct 13 17:39:50 crc kubenswrapper[4720]: I1013 17:39:50.421163 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"32985099-946d-4e5f-923c-66944d4bfdcb\") " pod="openstack/glance-default-external-api-0" Oct 13 17:39:50 crc kubenswrapper[4720]: I1013 17:39:50.421243 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32985099-946d-4e5f-923c-66944d4bfdcb-scripts\") pod \"glance-default-external-api-0\" (UID: \"32985099-946d-4e5f-923c-66944d4bfdcb\") " 
pod="openstack/glance-default-external-api-0" Oct 13 17:39:50 crc kubenswrapper[4720]: I1013 17:39:50.426953 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-565dc5fd65-jxjns"] Oct 13 17:39:50 crc kubenswrapper[4720]: I1013 17:39:50.530360 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"32985099-946d-4e5f-923c-66944d4bfdcb\") " pod="openstack/glance-default-external-api-0" Oct 13 17:39:50 crc kubenswrapper[4720]: I1013 17:39:50.530413 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32985099-946d-4e5f-923c-66944d4bfdcb-scripts\") pod \"glance-default-external-api-0\" (UID: \"32985099-946d-4e5f-923c-66944d4bfdcb\") " pod="openstack/glance-default-external-api-0" Oct 13 17:39:50 crc kubenswrapper[4720]: I1013 17:39:50.530435 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32985099-946d-4e5f-923c-66944d4bfdcb-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"32985099-946d-4e5f-923c-66944d4bfdcb\") " pod="openstack/glance-default-external-api-0" Oct 13 17:39:50 crc kubenswrapper[4720]: I1013 17:39:50.530469 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32985099-946d-4e5f-923c-66944d4bfdcb-logs\") pod \"glance-default-external-api-0\" (UID: \"32985099-946d-4e5f-923c-66944d4bfdcb\") " pod="openstack/glance-default-external-api-0" Oct 13 17:39:50 crc kubenswrapper[4720]: I1013 17:39:50.530508 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/32985099-946d-4e5f-923c-66944d4bfdcb-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"32985099-946d-4e5f-923c-66944d4bfdcb\") " pod="openstack/glance-default-external-api-0" Oct 13 17:39:50 crc kubenswrapper[4720]: I1013 17:39:50.530548 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsk5b\" (UniqueName: \"kubernetes.io/projected/32985099-946d-4e5f-923c-66944d4bfdcb-kube-api-access-wsk5b\") pod \"glance-default-external-api-0\" (UID: \"32985099-946d-4e5f-923c-66944d4bfdcb\") " pod="openstack/glance-default-external-api-0" Oct 13 17:39:50 crc kubenswrapper[4720]: I1013 17:39:50.530568 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32985099-946d-4e5f-923c-66944d4bfdcb-config-data\") pod \"glance-default-external-api-0\" (UID: \"32985099-946d-4e5f-923c-66944d4bfdcb\") " pod="openstack/glance-default-external-api-0" Oct 13 17:39:50 crc kubenswrapper[4720]: I1013 17:39:50.531650 4720 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"32985099-946d-4e5f-923c-66944d4bfdcb\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-external-api-0" Oct 13 17:39:50 crc kubenswrapper[4720]: I1013 17:39:50.538704 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32985099-946d-4e5f-923c-66944d4bfdcb-scripts\") pod 
\"glance-default-external-api-0\" (UID: \"32985099-946d-4e5f-923c-66944d4bfdcb\") " pod="openstack/glance-default-external-api-0" Oct 13 17:39:50 crc kubenswrapper[4720]: I1013 17:39:50.539015 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/32985099-946d-4e5f-923c-66944d4bfdcb-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"32985099-946d-4e5f-923c-66944d4bfdcb\") " pod="openstack/glance-default-external-api-0" Oct 13 17:39:50 crc kubenswrapper[4720]: I1013 17:39:50.539557 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32985099-946d-4e5f-923c-66944d4bfdcb-logs\") pod \"glance-default-external-api-0\" (UID: \"32985099-946d-4e5f-923c-66944d4bfdcb\") " pod="openstack/glance-default-external-api-0" Oct 13 17:39:50 crc kubenswrapper[4720]: I1013 17:39:50.550543 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-bf8c4749f-wv7s9"] Oct 13 17:39:50 crc kubenswrapper[4720]: I1013 17:39:50.560003 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 13 17:39:50 crc kubenswrapper[4720]: I1013 17:39:50.561419 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32985099-946d-4e5f-923c-66944d4bfdcb-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"32985099-946d-4e5f-923c-66944d4bfdcb\") " pod="openstack/glance-default-external-api-0" Oct 13 17:39:50 crc kubenswrapper[4720]: I1013 17:39:50.562998 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32985099-946d-4e5f-923c-66944d4bfdcb-config-data\") pod \"glance-default-external-api-0\" (UID: \"32985099-946d-4e5f-923c-66944d4bfdcb\") " pod="openstack/glance-default-external-api-0" Oct 13 17:39:50 crc kubenswrapper[4720]: I1013 17:39:50.569634 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-2htwh"] Oct 13 17:39:50 crc kubenswrapper[4720]: I1013 17:39:50.569826 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsk5b\" (UniqueName: \"kubernetes.io/projected/32985099-946d-4e5f-923c-66944d4bfdcb-kube-api-access-wsk5b\") pod \"glance-default-external-api-0\" (UID: \"32985099-946d-4e5f-923c-66944d4bfdcb\") " pod="openstack/glance-default-external-api-0" Oct 13 17:39:50 crc kubenswrapper[4720]: I1013 17:39:50.571129 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"32985099-946d-4e5f-923c-66944d4bfdcb\") " pod="openstack/glance-default-external-api-0" Oct 13 17:39:50 crc kubenswrapper[4720]: I1013 17:39:50.677414 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 13 17:39:50 crc kubenswrapper[4720]: I1013 17:39:50.682113 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 13 17:39:50 crc kubenswrapper[4720]: I1013 17:39:50.685161 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Oct 13 17:39:50 crc kubenswrapper[4720]: I1013 17:39:50.716319 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 13 17:39:50 crc kubenswrapper[4720]: W1013 17:39:50.718805 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod35fb0ae5_1450_4f85_90f5_16a18667a582.slice/crio-948cbf1221f3de5f661fa23b20847678cfcf01771e6811bda8bf16f75e252d85 WatchSource:0}: Error finding container 948cbf1221f3de5f661fa23b20847678cfcf01771e6811bda8bf16f75e252d85: Status 404 returned error can't find the container with id 948cbf1221f3de5f661fa23b20847678cfcf01771e6811bda8bf16f75e252d85 Oct 13 17:39:50 crc kubenswrapper[4720]: I1013 17:39:50.745785 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-jdqmt"] Oct 13 17:39:50 crc kubenswrapper[4720]: I1013 17:39:50.755761 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 13 17:39:50 crc kubenswrapper[4720]: I1013 17:39:50.836432 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9db54922-09ac-46bd-85ad-aad0b3e8738b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"9db54922-09ac-46bd-85ad-aad0b3e8738b\") " pod="openstack/glance-default-internal-api-0" Oct 13 17:39:50 crc kubenswrapper[4720]: I1013 17:39:50.836497 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9db54922-09ac-46bd-85ad-aad0b3e8738b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"9db54922-09ac-46bd-85ad-aad0b3e8738b\") " pod="openstack/glance-default-internal-api-0" Oct 13 17:39:50 crc kubenswrapper[4720]: I1013 17:39:50.836537 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9db54922-09ac-46bd-85ad-aad0b3e8738b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"9db54922-09ac-46bd-85ad-aad0b3e8738b\") " pod="openstack/glance-default-internal-api-0" Oct 13 17:39:50 crc kubenswrapper[4720]: I1013 17:39:50.836571 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7246n\" (UniqueName: \"kubernetes.io/projected/9db54922-09ac-46bd-85ad-aad0b3e8738b-kube-api-access-7246n\") pod \"glance-default-internal-api-0\" (UID: \"9db54922-09ac-46bd-85ad-aad0b3e8738b\") " pod="openstack/glance-default-internal-api-0" Oct 13 17:39:50 crc kubenswrapper[4720]: I1013 17:39:50.836617 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9db54922-09ac-46bd-85ad-aad0b3e8738b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"9db54922-09ac-46bd-85ad-aad0b3e8738b\") " pod="openstack/glance-default-internal-api-0" Oct 13 17:39:50 crc kubenswrapper[4720]: I1013 17:39:50.836665 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/9db54922-09ac-46bd-85ad-aad0b3e8738b-logs\") pod \"glance-default-internal-api-0\" (UID: \"9db54922-09ac-46bd-85ad-aad0b3e8738b\") " pod="openstack/glance-default-internal-api-0" Oct 13 17:39:50 crc kubenswrapper[4720]: I1013 17:39:50.836711 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"9db54922-09ac-46bd-85ad-aad0b3e8738b\") " pod="openstack/glance-default-internal-api-0" Oct 13 17:39:50 crc kubenswrapper[4720]: I1013 17:39:50.938757 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9db54922-09ac-46bd-85ad-aad0b3e8738b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"9db54922-09ac-46bd-85ad-aad0b3e8738b\") " pod="openstack/glance-default-internal-api-0" Oct 13 17:39:50 crc kubenswrapper[4720]: I1013 17:39:50.938833 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9db54922-09ac-46bd-85ad-aad0b3e8738b-logs\") pod \"glance-default-internal-api-0\" (UID: \"9db54922-09ac-46bd-85ad-aad0b3e8738b\") " pod="openstack/glance-default-internal-api-0" Oct 13 17:39:50 crc kubenswrapper[4720]: I1013 17:39:50.938871 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"9db54922-09ac-46bd-85ad-aad0b3e8738b\") " pod="openstack/glance-default-internal-api-0" Oct 13 17:39:50 crc kubenswrapper[4720]: I1013 17:39:50.938925 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9db54922-09ac-46bd-85ad-aad0b3e8738b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"9db54922-09ac-46bd-85ad-aad0b3e8738b\") " pod="openstack/glance-default-internal-api-0" Oct 13 17:39:50 crc kubenswrapper[4720]: I1013 17:39:50.938948 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9db54922-09ac-46bd-85ad-aad0b3e8738b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"9db54922-09ac-46bd-85ad-aad0b3e8738b\") " pod="openstack/glance-default-internal-api-0" Oct 13 17:39:50 crc kubenswrapper[4720]: I1013 17:39:50.938973 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9db54922-09ac-46bd-85ad-aad0b3e8738b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"9db54922-09ac-46bd-85ad-aad0b3e8738b\") " pod="openstack/glance-default-internal-api-0" Oct 13 17:39:50 crc kubenswrapper[4720]: I1013 17:39:50.938998 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7246n\" (UniqueName: \"kubernetes.io/projected/9db54922-09ac-46bd-85ad-aad0b3e8738b-kube-api-access-7246n\") pod \"glance-default-internal-api-0\" (UID: \"9db54922-09ac-46bd-85ad-aad0b3e8738b\") " pod="openstack/glance-default-internal-api-0" Oct 13 17:39:50 crc kubenswrapper[4720]: I1013 17:39:50.940273 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9db54922-09ac-46bd-85ad-aad0b3e8738b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: 
\"9db54922-09ac-46bd-85ad-aad0b3e8738b\") " pod="openstack/glance-default-internal-api-0" Oct 13 17:39:50 crc kubenswrapper[4720]: I1013 17:39:50.940372 4720 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"9db54922-09ac-46bd-85ad-aad0b3e8738b\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-internal-api-0" Oct 13 17:39:50 crc kubenswrapper[4720]: I1013 17:39:50.940395 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9db54922-09ac-46bd-85ad-aad0b3e8738b-logs\") pod \"glance-default-internal-api-0\" (UID: \"9db54922-09ac-46bd-85ad-aad0b3e8738b\") " pod="openstack/glance-default-internal-api-0" Oct 13 17:39:50 crc kubenswrapper[4720]: I1013 17:39:50.944898 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9db54922-09ac-46bd-85ad-aad0b3e8738b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"9db54922-09ac-46bd-85ad-aad0b3e8738b\") " pod="openstack/glance-default-internal-api-0" Oct 13 17:39:50 crc kubenswrapper[4720]: I1013 17:39:50.946707 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9db54922-09ac-46bd-85ad-aad0b3e8738b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"9db54922-09ac-46bd-85ad-aad0b3e8738b\") " pod="openstack/glance-default-internal-api-0" Oct 13 17:39:50 crc kubenswrapper[4720]: I1013 17:39:50.952357 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-bf8c4749f-wv7s9" event={"ID":"0711763b-3ada-4180-b697-ad4911f48641","Type":"ContainerStarted","Data":"367dccf13badc2c4820f82eb098afb455f16c3e05532f7afe1e7f7d32fdf0f22"} Oct 13 17:39:50 crc kubenswrapper[4720]: I1013 17:39:50.953506 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-ndkhx" event={"ID":"7f3d3ba5-1521-4ac1-a005-871d7bb7eea0","Type":"ContainerStarted","Data":"c8e5ab5295419fad6f74268ce31c7466cb0d45be3ec0dfaf7ece64cae1beb14c"} Oct 13 17:39:50 crc kubenswrapper[4720]: I1013 17:39:50.953536 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-ndkhx" event={"ID":"7f3d3ba5-1521-4ac1-a005-871d7bb7eea0","Type":"ContainerStarted","Data":"b50487d5fbb8630b04d11a53a4560ac0d8712a495de23af44bd411b5cbb9988f"} Oct 13 17:39:50 crc kubenswrapper[4720]: I1013 17:39:50.954431 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9db54922-09ac-46bd-85ad-aad0b3e8738b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"9db54922-09ac-46bd-85ad-aad0b3e8738b\") " pod="openstack/glance-default-internal-api-0" Oct 13 17:39:50 crc kubenswrapper[4720]: I1013 17:39:50.956820 4720 generic.go:334] "Generic (PLEG): container finished" podID="5c76a9e6-6bfd-4e6d-84ae-cffcb18cc502" containerID="e4cb5984a581bdac98e99b27bca2d53972a299b54514e1d562d0cd91116d5057" exitCode=0 Oct 13 17:39:50 crc kubenswrapper[4720]: I1013 17:39:50.956927 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55fff446b9-8dwkl" event={"ID":"5c76a9e6-6bfd-4e6d-84ae-cffcb18cc502","Type":"ContainerDied","Data":"e4cb5984a581bdac98e99b27bca2d53972a299b54514e1d562d0cd91116d5057"} Oct 13 17:39:50 crc kubenswrapper[4720]: I1013 17:39:50.956955 4720 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55fff446b9-8dwkl" event={"ID":"5c76a9e6-6bfd-4e6d-84ae-cffcb18cc502","Type":"ContainerStarted","Data":"0dccb36b9c83047123a45354ddc28459a75e391f31dcf81b332d96326d89b42d"} Oct 13 17:39:50 crc kubenswrapper[4720]: I1013 17:39:50.959138 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-565dc5fd65-jxjns" event={"ID":"182cb87f-6f29-43c2-932e-f9de187d4fa0","Type":"ContainerStarted","Data":"e7f2334caaa4976bd25aa50d0df920e1cddf04438a8131f3dc3f7614080208f7"} Oct 13 17:39:50 crc kubenswrapper[4720]: I1013 17:39:50.959234 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7246n\" (UniqueName: \"kubernetes.io/projected/9db54922-09ac-46bd-85ad-aad0b3e8738b-kube-api-access-7246n\") pod \"glance-default-internal-api-0\" (UID: \"9db54922-09ac-46bd-85ad-aad0b3e8738b\") " pod="openstack/glance-default-internal-api-0" Oct 13 17:39:50 crc kubenswrapper[4720]: I1013 17:39:50.960073 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6d3623f2-ec27-4ad6-8cb4-553cb0527e15","Type":"ContainerStarted","Data":"483488aa8542aa83f80a1ea3c1fb9658b9062604a295cb5d64c257d4231ab072"} Oct 13 17:39:50 crc kubenswrapper[4720]: I1013 17:39:50.965611 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-jdqmt" event={"ID":"35fb0ae5-1450-4f85-90f5-16a18667a582","Type":"ContainerStarted","Data":"948cbf1221f3de5f661fa23b20847678cfcf01771e6811bda8bf16f75e252d85"} Oct 13 17:39:50 crc kubenswrapper[4720]: I1013 17:39:50.968563 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"9db54922-09ac-46bd-85ad-aad0b3e8738b\") " pod="openstack/glance-default-internal-api-0" Oct 13 17:39:50 crc kubenswrapper[4720]: I1013 17:39:50.970586 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-2htwh" event={"ID":"444e35b8-1d2a-4d83-be6c-2184ae0e3110","Type":"ContainerStarted","Data":"e39116e555360f96fc4444f8b547f1e09c12a0ff17254ce1332d6ffcad2e942c"} Oct 13 17:39:50 crc kubenswrapper[4720]: I1013 17:39:50.972692 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-ndkhx" podStartSLOduration=1.972676081 podStartE2EDuration="1.972676081s" podCreationTimestamp="2025-10-13 17:39:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:39:50.967131188 +0000 UTC m=+936.424381320" watchObservedRunningTime="2025-10-13 17:39:50.972676081 +0000 UTC m=+936.429926213" Oct 13 17:39:51 crc kubenswrapper[4720]: I1013 17:39:51.009907 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 13 17:39:51 crc kubenswrapper[4720]: I1013 17:39:51.215902 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 13 17:39:51 crc kubenswrapper[4720]: I1013 17:39:51.296958 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 13 17:39:51 crc kubenswrapper[4720]: I1013 17:39:51.330783 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-565dc5fd65-jxjns"] Oct 13 17:39:51 crc kubenswrapper[4720]: I1013 17:39:51.363103 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 13 17:39:51 crc kubenswrapper[4720]: I1013 17:39:51.380148 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7bf7d9f5bf-6khmt"] Oct 13 17:39:51 crc kubenswrapper[4720]: I1013 17:39:51.381826 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7bf7d9f5bf-6khmt" Oct 13 17:39:51 crc kubenswrapper[4720]: I1013 17:39:51.390107 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7bf7d9f5bf-6khmt"] Oct 13 17:39:51 crc kubenswrapper[4720]: I1013 17:39:51.452947 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/02414a49-0001-4f13-97cb-641937473fb6-scripts\") pod \"horizon-7bf7d9f5bf-6khmt\" (UID: \"02414a49-0001-4f13-97cb-641937473fb6\") " pod="openstack/horizon-7bf7d9f5bf-6khmt" Oct 13 17:39:51 crc kubenswrapper[4720]: I1013 17:39:51.453046 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/02414a49-0001-4f13-97cb-641937473fb6-horizon-secret-key\") pod \"horizon-7bf7d9f5bf-6khmt\" (UID: \"02414a49-0001-4f13-97cb-641937473fb6\") " pod="openstack/horizon-7bf7d9f5bf-6khmt" Oct 13 17:39:51 crc kubenswrapper[4720]: I1013 17:39:51.453071 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/02414a49-0001-4f13-97cb-641937473fb6-config-data\") pod \"horizon-7bf7d9f5bf-6khmt\" (UID: \"02414a49-0001-4f13-97cb-641937473fb6\") " pod="openstack/horizon-7bf7d9f5bf-6khmt" Oct 13 17:39:51 crc kubenswrapper[4720]: I1013 17:39:51.453092 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbks6\" (UniqueName: \"kubernetes.io/projected/02414a49-0001-4f13-97cb-641937473fb6-kube-api-access-nbks6\") pod \"horizon-7bf7d9f5bf-6khmt\" (UID: \"02414a49-0001-4f13-97cb-641937473fb6\") " pod="openstack/horizon-7bf7d9f5bf-6khmt" Oct 13 17:39:51 crc kubenswrapper[4720]: I1013 17:39:51.453151 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02414a49-0001-4f13-97cb-641937473fb6-logs\") pod \"horizon-7bf7d9f5bf-6khmt\" (UID: \"02414a49-0001-4f13-97cb-641937473fb6\") " pod="openstack/horizon-7bf7d9f5bf-6khmt" Oct 13 17:39:51 crc kubenswrapper[4720]: I1013 17:39:51.467104 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55fff446b9-8dwkl" Oct 13 17:39:51 crc kubenswrapper[4720]: I1013 17:39:51.469107 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 13 17:39:51 crc kubenswrapper[4720]: I1013 17:39:51.554373 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5c76a9e6-6bfd-4e6d-84ae-cffcb18cc502-ovsdbserver-nb\") pod \"5c76a9e6-6bfd-4e6d-84ae-cffcb18cc502\" (UID: \"5c76a9e6-6bfd-4e6d-84ae-cffcb18cc502\") " Oct 13 17:39:51 crc kubenswrapper[4720]: I1013 17:39:51.554431 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c76a9e6-6bfd-4e6d-84ae-cffcb18cc502-config\") pod \"5c76a9e6-6bfd-4e6d-84ae-cffcb18cc502\" (UID: \"5c76a9e6-6bfd-4e6d-84ae-cffcb18cc502\") " Oct 13 17:39:51 crc kubenswrapper[4720]: I1013 17:39:51.554466 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5c76a9e6-6bfd-4e6d-84ae-cffcb18cc502-ovsdbserver-sb\") pod \"5c76a9e6-6bfd-4e6d-84ae-cffcb18cc502\" (UID: \"5c76a9e6-6bfd-4e6d-84ae-cffcb18cc502\") " Oct 13 17:39:51 crc kubenswrapper[4720]: I1013 17:39:51.554502 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ps5h7\" (UniqueName: \"kubernetes.io/projected/5c76a9e6-6bfd-4e6d-84ae-cffcb18cc502-kube-api-access-ps5h7\") pod \"5c76a9e6-6bfd-4e6d-84ae-cffcb18cc502\" (UID: \"5c76a9e6-6bfd-4e6d-84ae-cffcb18cc502\") " Oct 13 17:39:51 crc kubenswrapper[4720]: I1013 17:39:51.555312 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c76a9e6-6bfd-4e6d-84ae-cffcb18cc502-dns-svc\") pod \"5c76a9e6-6bfd-4e6d-84ae-cffcb18cc502\" (UID: \"5c76a9e6-6bfd-4e6d-84ae-cffcb18cc502\") " Oct 13 17:39:51 crc kubenswrapper[4720]: I1013 17:39:51.555350 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5c76a9e6-6bfd-4e6d-84ae-cffcb18cc502-dns-swift-storage-0\") pod \"5c76a9e6-6bfd-4e6d-84ae-cffcb18cc502\" (UID: \"5c76a9e6-6bfd-4e6d-84ae-cffcb18cc502\") " Oct 13 17:39:51 crc kubenswrapper[4720]: I1013 17:39:51.555612 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/02414a49-0001-4f13-97cb-641937473fb6-horizon-secret-key\") pod \"horizon-7bf7d9f5bf-6khmt\" (UID: \"02414a49-0001-4f13-97cb-641937473fb6\") " pod="openstack/horizon-7bf7d9f5bf-6khmt" Oct 13 17:39:51 crc kubenswrapper[4720]: I1013 17:39:51.555648 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/02414a49-0001-4f13-97cb-641937473fb6-config-data\") pod \"horizon-7bf7d9f5bf-6khmt\" (UID: \"02414a49-0001-4f13-97cb-641937473fb6\") " pod="openstack/horizon-7bf7d9f5bf-6khmt" Oct 13 17:39:51 crc kubenswrapper[4720]: I1013 17:39:51.555675 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbks6\" (UniqueName: \"kubernetes.io/projected/02414a49-0001-4f13-97cb-641937473fb6-kube-api-access-nbks6\") pod \"horizon-7bf7d9f5bf-6khmt\" (UID: \"02414a49-0001-4f13-97cb-641937473fb6\") " pod="openstack/horizon-7bf7d9f5bf-6khmt" Oct 13 17:39:51 crc kubenswrapper[4720]: I1013 17:39:51.555720 
4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02414a49-0001-4f13-97cb-641937473fb6-logs\") pod \"horizon-7bf7d9f5bf-6khmt\" (UID: \"02414a49-0001-4f13-97cb-641937473fb6\") " pod="openstack/horizon-7bf7d9f5bf-6khmt" Oct 13 17:39:51 crc kubenswrapper[4720]: I1013 17:39:51.555752 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/02414a49-0001-4f13-97cb-641937473fb6-scripts\") pod \"horizon-7bf7d9f5bf-6khmt\" (UID: \"02414a49-0001-4f13-97cb-641937473fb6\") " pod="openstack/horizon-7bf7d9f5bf-6khmt" Oct 13 17:39:51 crc kubenswrapper[4720]: I1013 17:39:51.557248 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/02414a49-0001-4f13-97cb-641937473fb6-scripts\") pod \"horizon-7bf7d9f5bf-6khmt\" (UID: \"02414a49-0001-4f13-97cb-641937473fb6\") " pod="openstack/horizon-7bf7d9f5bf-6khmt" Oct 13 17:39:51 crc kubenswrapper[4720]: I1013 17:39:51.558102 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/02414a49-0001-4f13-97cb-641937473fb6-config-data\") pod \"horizon-7bf7d9f5bf-6khmt\" (UID: \"02414a49-0001-4f13-97cb-641937473fb6\") " pod="openstack/horizon-7bf7d9f5bf-6khmt" Oct 13 17:39:51 crc kubenswrapper[4720]: I1013 17:39:51.559996 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02414a49-0001-4f13-97cb-641937473fb6-logs\") pod \"horizon-7bf7d9f5bf-6khmt\" (UID: \"02414a49-0001-4f13-97cb-641937473fb6\") " pod="openstack/horizon-7bf7d9f5bf-6khmt" Oct 13 17:39:51 crc kubenswrapper[4720]: I1013 17:39:51.560920 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/02414a49-0001-4f13-97cb-641937473fb6-horizon-secret-key\") pod \"horizon-7bf7d9f5bf-6khmt\" (UID: \"02414a49-0001-4f13-97cb-641937473fb6\") " pod="openstack/horizon-7bf7d9f5bf-6khmt" Oct 13 17:39:51 crc kubenswrapper[4720]: I1013 17:39:51.571386 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c76a9e6-6bfd-4e6d-84ae-cffcb18cc502-kube-api-access-ps5h7" (OuterVolumeSpecName: "kube-api-access-ps5h7") pod "5c76a9e6-6bfd-4e6d-84ae-cffcb18cc502" (UID: "5c76a9e6-6bfd-4e6d-84ae-cffcb18cc502"). InnerVolumeSpecName "kube-api-access-ps5h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:39:51 crc kubenswrapper[4720]: I1013 17:39:51.574332 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbks6\" (UniqueName: \"kubernetes.io/projected/02414a49-0001-4f13-97cb-641937473fb6-kube-api-access-nbks6\") pod \"horizon-7bf7d9f5bf-6khmt\" (UID: \"02414a49-0001-4f13-97cb-641937473fb6\") " pod="openstack/horizon-7bf7d9f5bf-6khmt" Oct 13 17:39:51 crc kubenswrapper[4720]: I1013 17:39:51.597213 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c76a9e6-6bfd-4e6d-84ae-cffcb18cc502-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "5c76a9e6-6bfd-4e6d-84ae-cffcb18cc502" (UID: "5c76a9e6-6bfd-4e6d-84ae-cffcb18cc502"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:39:51 crc kubenswrapper[4720]: I1013 17:39:51.608072 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c76a9e6-6bfd-4e6d-84ae-cffcb18cc502-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5c76a9e6-6bfd-4e6d-84ae-cffcb18cc502" (UID: "5c76a9e6-6bfd-4e6d-84ae-cffcb18cc502"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:39:51 crc kubenswrapper[4720]: I1013 17:39:51.615607 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c76a9e6-6bfd-4e6d-84ae-cffcb18cc502-config" (OuterVolumeSpecName: "config") pod "5c76a9e6-6bfd-4e6d-84ae-cffcb18cc502" (UID: "5c76a9e6-6bfd-4e6d-84ae-cffcb18cc502"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:39:51 crc kubenswrapper[4720]: I1013 17:39:51.621877 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c76a9e6-6bfd-4e6d-84ae-cffcb18cc502-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5c76a9e6-6bfd-4e6d-84ae-cffcb18cc502" (UID: "5c76a9e6-6bfd-4e6d-84ae-cffcb18cc502"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:39:51 crc kubenswrapper[4720]: I1013 17:39:51.623788 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c76a9e6-6bfd-4e6d-84ae-cffcb18cc502-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5c76a9e6-6bfd-4e6d-84ae-cffcb18cc502" (UID: "5c76a9e6-6bfd-4e6d-84ae-cffcb18cc502"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:39:51 crc kubenswrapper[4720]: I1013 17:39:51.632651 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 13 17:39:51 crc kubenswrapper[4720]: I1013 17:39:51.657206 4720 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5c76a9e6-6bfd-4e6d-84ae-cffcb18cc502-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 13 17:39:51 crc kubenswrapper[4720]: I1013 17:39:51.657226 4720 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c76a9e6-6bfd-4e6d-84ae-cffcb18cc502-config\") on node \"crc\" DevicePath \"\"" Oct 13 17:39:51 crc kubenswrapper[4720]: I1013 17:39:51.657235 4720 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5c76a9e6-6bfd-4e6d-84ae-cffcb18cc502-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 13 17:39:51 crc kubenswrapper[4720]: I1013 17:39:51.657245 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ps5h7\" (UniqueName: \"kubernetes.io/projected/5c76a9e6-6bfd-4e6d-84ae-cffcb18cc502-kube-api-access-ps5h7\") on node \"crc\" DevicePath \"\"" Oct 13 17:39:51 crc kubenswrapper[4720]: I1013 17:39:51.657255 4720 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c76a9e6-6bfd-4e6d-84ae-cffcb18cc502-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 13 17:39:51 crc kubenswrapper[4720]: I1013 17:39:51.657263 4720 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5c76a9e6-6bfd-4e6d-84ae-cffcb18cc502-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 13 
17:39:51 crc kubenswrapper[4720]: I1013 17:39:51.771583 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7bf7d9f5bf-6khmt" Oct 13 17:39:51 crc kubenswrapper[4720]: I1013 17:39:51.992122 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9db54922-09ac-46bd-85ad-aad0b3e8738b","Type":"ContainerStarted","Data":"9b0fc072061b170b8f7d6b5fd92963b76bd0d6187d0bc435ad11e259d25cd885"} Oct 13 17:39:51 crc kubenswrapper[4720]: I1013 17:39:51.994143 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55fff446b9-8dwkl" event={"ID":"5c76a9e6-6bfd-4e6d-84ae-cffcb18cc502","Type":"ContainerDied","Data":"0dccb36b9c83047123a45354ddc28459a75e391f31dcf81b332d96326d89b42d"} Oct 13 17:39:51 crc kubenswrapper[4720]: I1013 17:39:51.994159 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55fff446b9-8dwkl" Oct 13 17:39:51 crc kubenswrapper[4720]: I1013 17:39:51.994169 4720 scope.go:117] "RemoveContainer" containerID="e4cb5984a581bdac98e99b27bca2d53972a299b54514e1d562d0cd91116d5057" Oct 13 17:39:52 crc kubenswrapper[4720]: I1013 17:39:52.005358 4720 generic.go:334] "Generic (PLEG): container finished" podID="35fb0ae5-1450-4f85-90f5-16a18667a582" containerID="30f6d59962ba89ea59abf1c7b6702b5153c7c63349c90d93aaad1fbc1bdf46a6" exitCode=0 Oct 13 17:39:52 crc kubenswrapper[4720]: I1013 17:39:52.005569 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-jdqmt" event={"ID":"35fb0ae5-1450-4f85-90f5-16a18667a582","Type":"ContainerDied","Data":"30f6d59962ba89ea59abf1c7b6702b5153c7c63349c90d93aaad1fbc1bdf46a6"} Oct 13 17:39:52 crc kubenswrapper[4720]: I1013 17:39:52.008064 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"32985099-946d-4e5f-923c-66944d4bfdcb","Type":"ContainerStarted","Data":"35235969d9432493d6b5f4e33110996abb5dbf80fe345ed833af4bce110dcccb"} Oct 13 17:39:52 crc kubenswrapper[4720]: I1013 17:39:52.073262 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55fff446b9-8dwkl"] Oct 13 17:39:52 crc kubenswrapper[4720]: I1013 17:39:52.085855 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55fff446b9-8dwkl"] Oct 13 17:39:52 crc kubenswrapper[4720]: I1013 17:39:52.241522 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7bf7d9f5bf-6khmt"] Oct 13 17:39:52 crc kubenswrapper[4720]: W1013 17:39:52.251880 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod02414a49_0001_4f13_97cb_641937473fb6.slice/crio-019e17a4beba1065a350b97a47c64a8dc080c72b10f4c11fb15f29267226d39f WatchSource:0}: Error finding container 019e17a4beba1065a350b97a47c64a8dc080c72b10f4c11fb15f29267226d39f: Status 404 returned error can't find the container with id 019e17a4beba1065a350b97a47c64a8dc080c72b10f4c11fb15f29267226d39f Oct 13 17:39:53 crc kubenswrapper[4720]: I1013 17:39:53.034951 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-jdqmt" event={"ID":"35fb0ae5-1450-4f85-90f5-16a18667a582","Type":"ContainerStarted","Data":"a050cdb202e55c8bf384a2da4137a41a8dc8220e8658429b0ec88fb3768b3fe1"} Oct 13 17:39:53 crc kubenswrapper[4720]: I1013 17:39:53.035289 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/dnsmasq-dns-8b5c85b87-jdqmt" Oct 13 17:39:53 crc kubenswrapper[4720]: I1013 17:39:53.043720 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"32985099-946d-4e5f-923c-66944d4bfdcb","Type":"ContainerStarted","Data":"fdc324ddd8be1aa561b65f8ecf36582cb89d639f575e2bbd85d709bac6f274b8"} Oct 13 17:39:53 crc kubenswrapper[4720]: I1013 17:39:53.043762 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"32985099-946d-4e5f-923c-66944d4bfdcb","Type":"ContainerStarted","Data":"3165279ec9273ca2d005d41ed3529d0b2e0edcdbd3b7d718119379ab5f0776b2"} Oct 13 17:39:53 crc kubenswrapper[4720]: I1013 17:39:53.043869 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="32985099-946d-4e5f-923c-66944d4bfdcb" containerName="glance-log" containerID="cri-o://3165279ec9273ca2d005d41ed3529d0b2e0edcdbd3b7d718119379ab5f0776b2" gracePeriod=30 Oct 13 17:39:53 crc kubenswrapper[4720]: I1013 17:39:53.044165 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="32985099-946d-4e5f-923c-66944d4bfdcb" containerName="glance-httpd" containerID="cri-o://fdc324ddd8be1aa561b65f8ecf36582cb89d639f575e2bbd85d709bac6f274b8" gracePeriod=30 Oct 13 17:39:53 crc kubenswrapper[4720]: I1013 17:39:53.048530 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9db54922-09ac-46bd-85ad-aad0b3e8738b","Type":"ContainerStarted","Data":"98129028d4b908a7ef367e4b9c80a052538c6c6c1ce35796676bdb423cdaa835"} Oct 13 17:39:53 crc kubenswrapper[4720]: I1013 17:39:53.054251 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8b5c85b87-jdqmt" podStartSLOduration=4.054234661 podStartE2EDuration="4.054234661s" podCreationTimestamp="2025-10-13 17:39:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:39:53.050443183 +0000 UTC m=+938.507693315" watchObservedRunningTime="2025-10-13 17:39:53.054234661 +0000 UTC m=+938.511484793" Oct 13 17:39:53 crc kubenswrapper[4720]: I1013 17:39:53.058534 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7bf7d9f5bf-6khmt" event={"ID":"02414a49-0001-4f13-97cb-641937473fb6","Type":"ContainerStarted","Data":"019e17a4beba1065a350b97a47c64a8dc080c72b10f4c11fb15f29267226d39f"} Oct 13 17:39:53 crc kubenswrapper[4720]: I1013 17:39:53.076364 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.07635081 podStartE2EDuration="4.07635081s" podCreationTimestamp="2025-10-13 17:39:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:39:53.075918389 +0000 UTC m=+938.533168521" watchObservedRunningTime="2025-10-13 17:39:53.07635081 +0000 UTC m=+938.533600952" Oct 13 17:39:53 crc kubenswrapper[4720]: I1013 17:39:53.186295 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c76a9e6-6bfd-4e6d-84ae-cffcb18cc502" path="/var/lib/kubelet/pods/5c76a9e6-6bfd-4e6d-84ae-cffcb18cc502/volumes" Oct 13 17:39:54 crc kubenswrapper[4720]: I1013 17:39:54.077277 4720 generic.go:334] "Generic (PLEG): container finished" 
podID="7f3d3ba5-1521-4ac1-a005-871d7bb7eea0" containerID="c8e5ab5295419fad6f74268ce31c7466cb0d45be3ec0dfaf7ece64cae1beb14c" exitCode=0 Oct 13 17:39:54 crc kubenswrapper[4720]: I1013 17:39:54.077352 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-ndkhx" event={"ID":"7f3d3ba5-1521-4ac1-a005-871d7bb7eea0","Type":"ContainerDied","Data":"c8e5ab5295419fad6f74268ce31c7466cb0d45be3ec0dfaf7ece64cae1beb14c"} Oct 13 17:39:54 crc kubenswrapper[4720]: I1013 17:39:54.087749 4720 generic.go:334] "Generic (PLEG): container finished" podID="32985099-946d-4e5f-923c-66944d4bfdcb" containerID="fdc324ddd8be1aa561b65f8ecf36582cb89d639f575e2bbd85d709bac6f274b8" exitCode=0 Oct 13 17:39:54 crc kubenswrapper[4720]: I1013 17:39:54.087773 4720 generic.go:334] "Generic (PLEG): container finished" podID="32985099-946d-4e5f-923c-66944d4bfdcb" containerID="3165279ec9273ca2d005d41ed3529d0b2e0edcdbd3b7d718119379ab5f0776b2" exitCode=143 Oct 13 17:39:54 crc kubenswrapper[4720]: I1013 17:39:54.087831 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"32985099-946d-4e5f-923c-66944d4bfdcb","Type":"ContainerDied","Data":"fdc324ddd8be1aa561b65f8ecf36582cb89d639f575e2bbd85d709bac6f274b8"} Oct 13 17:39:54 crc kubenswrapper[4720]: I1013 17:39:54.087903 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"32985099-946d-4e5f-923c-66944d4bfdcb","Type":"ContainerDied","Data":"3165279ec9273ca2d005d41ed3529d0b2e0edcdbd3b7d718119379ab5f0776b2"} Oct 13 17:39:54 crc kubenswrapper[4720]: I1013 17:39:54.092208 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9db54922-09ac-46bd-85ad-aad0b3e8738b","Type":"ContainerStarted","Data":"4dc194e29078e4f1faded9436f3abc1e52b23a8774480dfb1c4cf02b144282a7"} Oct 13 17:39:54 crc kubenswrapper[4720]: I1013 17:39:54.092660 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="9db54922-09ac-46bd-85ad-aad0b3e8738b" containerName="glance-log" containerID="cri-o://98129028d4b908a7ef367e4b9c80a052538c6c6c1ce35796676bdb423cdaa835" gracePeriod=30 Oct 13 17:39:54 crc kubenswrapper[4720]: I1013 17:39:54.092718 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="9db54922-09ac-46bd-85ad-aad0b3e8738b" containerName="glance-httpd" containerID="cri-o://4dc194e29078e4f1faded9436f3abc1e52b23a8774480dfb1c4cf02b144282a7" gracePeriod=30 Oct 13 17:39:54 crc kubenswrapper[4720]: I1013 17:39:54.134449 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-8c7e-account-create-ftbcl"] Oct 13 17:39:54 crc kubenswrapper[4720]: E1013 17:39:54.135385 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c76a9e6-6bfd-4e6d-84ae-cffcb18cc502" containerName="init" Oct 13 17:39:54 crc kubenswrapper[4720]: I1013 17:39:54.135408 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c76a9e6-6bfd-4e6d-84ae-cffcb18cc502" containerName="init" Oct 13 17:39:54 crc kubenswrapper[4720]: I1013 17:39:54.135978 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c76a9e6-6bfd-4e6d-84ae-cffcb18cc502" containerName="init" Oct 13 17:39:54 crc kubenswrapper[4720]: I1013 17:39:54.136718 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-8c7e-account-create-ftbcl" Oct 13 17:39:54 crc kubenswrapper[4720]: I1013 17:39:54.138877 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Oct 13 17:39:54 crc kubenswrapper[4720]: I1013 17:39:54.146613 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-8c7e-account-create-ftbcl"] Oct 13 17:39:54 crc kubenswrapper[4720]: I1013 17:39:54.152040 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.152023898 podStartE2EDuration="5.152023898s" podCreationTimestamp="2025-10-13 17:39:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:39:54.134627381 +0000 UTC m=+939.591877523" watchObservedRunningTime="2025-10-13 17:39:54.152023898 +0000 UTC m=+939.609274040" Oct 13 17:39:54 crc kubenswrapper[4720]: I1013 17:39:54.215155 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-1089-account-create-rz4lq"] Oct 13 17:39:54 crc kubenswrapper[4720]: I1013 17:39:54.215554 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4h94\" (UniqueName: \"kubernetes.io/projected/77e2a869-ae9a-47cd-973e-bc3597a2365d-kube-api-access-d4h94\") pod \"cinder-8c7e-account-create-ftbcl\" (UID: \"77e2a869-ae9a-47cd-973e-bc3597a2365d\") " pod="openstack/cinder-8c7e-account-create-ftbcl" Oct 13 17:39:54 crc kubenswrapper[4720]: I1013 17:39:54.216160 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-1089-account-create-rz4lq" Oct 13 17:39:54 crc kubenswrapper[4720]: I1013 17:39:54.220474 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Oct 13 17:39:54 crc kubenswrapper[4720]: I1013 17:39:54.235575 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-1089-account-create-rz4lq"] Oct 13 17:39:54 crc kubenswrapper[4720]: I1013 17:39:54.317824 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x74zh\" (UniqueName: \"kubernetes.io/projected/5cab6f53-adba-4474-b8b6-195faff8e193-kube-api-access-x74zh\") pod \"barbican-1089-account-create-rz4lq\" (UID: \"5cab6f53-adba-4474-b8b6-195faff8e193\") " pod="openstack/barbican-1089-account-create-rz4lq" Oct 13 17:39:54 crc kubenswrapper[4720]: I1013 17:39:54.317957 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4h94\" (UniqueName: \"kubernetes.io/projected/77e2a869-ae9a-47cd-973e-bc3597a2365d-kube-api-access-d4h94\") pod \"cinder-8c7e-account-create-ftbcl\" (UID: \"77e2a869-ae9a-47cd-973e-bc3597a2365d\") " pod="openstack/cinder-8c7e-account-create-ftbcl" Oct 13 17:39:54 crc kubenswrapper[4720]: I1013 17:39:54.334621 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4h94\" (UniqueName: \"kubernetes.io/projected/77e2a869-ae9a-47cd-973e-bc3597a2365d-kube-api-access-d4h94\") pod \"cinder-8c7e-account-create-ftbcl\" (UID: \"77e2a869-ae9a-47cd-973e-bc3597a2365d\") " pod="openstack/cinder-8c7e-account-create-ftbcl" Oct 13 17:39:54 crc kubenswrapper[4720]: I1013 17:39:54.419109 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x74zh\" (UniqueName: 
\"kubernetes.io/projected/5cab6f53-adba-4474-b8b6-195faff8e193-kube-api-access-x74zh\") pod \"barbican-1089-account-create-rz4lq\" (UID: \"5cab6f53-adba-4474-b8b6-195faff8e193\") " pod="openstack/barbican-1089-account-create-rz4lq" Oct 13 17:39:54 crc kubenswrapper[4720]: I1013 17:39:54.434678 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x74zh\" (UniqueName: \"kubernetes.io/projected/5cab6f53-adba-4474-b8b6-195faff8e193-kube-api-access-x74zh\") pod \"barbican-1089-account-create-rz4lq\" (UID: \"5cab6f53-adba-4474-b8b6-195faff8e193\") " pod="openstack/barbican-1089-account-create-rz4lq" Oct 13 17:39:54 crc kubenswrapper[4720]: I1013 17:39:54.458222 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-8c7e-account-create-ftbcl" Oct 13 17:39:54 crc kubenswrapper[4720]: I1013 17:39:54.525440 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-fd8d-account-create-v5dkv"] Oct 13 17:39:54 crc kubenswrapper[4720]: I1013 17:39:54.527123 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-fd8d-account-create-v5dkv" Oct 13 17:39:54 crc kubenswrapper[4720]: I1013 17:39:54.531611 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Oct 13 17:39:54 crc kubenswrapper[4720]: I1013 17:39:54.535910 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-1089-account-create-rz4lq" Oct 13 17:39:54 crc kubenswrapper[4720]: I1013 17:39:54.546726 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-fd8d-account-create-v5dkv"] Oct 13 17:39:54 crc kubenswrapper[4720]: I1013 17:39:54.622884 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6t8cg\" (UniqueName: \"kubernetes.io/projected/b6edabb2-1018-4d9a-b43f-2414235bbfdc-kube-api-access-6t8cg\") pod \"neutron-fd8d-account-create-v5dkv\" (UID: \"b6edabb2-1018-4d9a-b43f-2414235bbfdc\") " pod="openstack/neutron-fd8d-account-create-v5dkv" Oct 13 17:39:54 crc kubenswrapper[4720]: I1013 17:39:54.725130 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6t8cg\" (UniqueName: \"kubernetes.io/projected/b6edabb2-1018-4d9a-b43f-2414235bbfdc-kube-api-access-6t8cg\") pod \"neutron-fd8d-account-create-v5dkv\" (UID: \"b6edabb2-1018-4d9a-b43f-2414235bbfdc\") " pod="openstack/neutron-fd8d-account-create-v5dkv" Oct 13 17:39:54 crc kubenswrapper[4720]: I1013 17:39:54.760838 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6t8cg\" (UniqueName: \"kubernetes.io/projected/b6edabb2-1018-4d9a-b43f-2414235bbfdc-kube-api-access-6t8cg\") pod \"neutron-fd8d-account-create-v5dkv\" (UID: \"b6edabb2-1018-4d9a-b43f-2414235bbfdc\") " pod="openstack/neutron-fd8d-account-create-v5dkv" Oct 13 17:39:54 crc kubenswrapper[4720]: I1013 17:39:54.874282 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-fd8d-account-create-v5dkv" Oct 13 17:39:55 crc kubenswrapper[4720]: I1013 17:39:55.108858 4720 generic.go:334] "Generic (PLEG): container finished" podID="9db54922-09ac-46bd-85ad-aad0b3e8738b" containerID="4dc194e29078e4f1faded9436f3abc1e52b23a8774480dfb1c4cf02b144282a7" exitCode=0 Oct 13 17:39:55 crc kubenswrapper[4720]: I1013 17:39:55.109117 4720 generic.go:334] "Generic (PLEG): container finished" podID="9db54922-09ac-46bd-85ad-aad0b3e8738b" containerID="98129028d4b908a7ef367e4b9c80a052538c6c6c1ce35796676bdb423cdaa835" exitCode=143 Oct 13 17:39:55 crc kubenswrapper[4720]: I1013 17:39:55.108990 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9db54922-09ac-46bd-85ad-aad0b3e8738b","Type":"ContainerDied","Data":"4dc194e29078e4f1faded9436f3abc1e52b23a8774480dfb1c4cf02b144282a7"} Oct 13 17:39:55 crc kubenswrapper[4720]: I1013 17:39:55.109311 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9db54922-09ac-46bd-85ad-aad0b3e8738b","Type":"ContainerDied","Data":"98129028d4b908a7ef367e4b9c80a052538c6c6c1ce35796676bdb423cdaa835"} Oct 13 17:39:56 crc kubenswrapper[4720]: I1013 17:39:56.099040 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-ndkhx" Oct 13 17:39:56 crc kubenswrapper[4720]: I1013 17:39:56.124578 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-ndkhx" event={"ID":"7f3d3ba5-1521-4ac1-a005-871d7bb7eea0","Type":"ContainerDied","Data":"b50487d5fbb8630b04d11a53a4560ac0d8712a495de23af44bd411b5cbb9988f"} Oct 13 17:39:56 crc kubenswrapper[4720]: I1013 17:39:56.124616 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b50487d5fbb8630b04d11a53a4560ac0d8712a495de23af44bd411b5cbb9988f" Oct 13 17:39:56 crc kubenswrapper[4720]: I1013 17:39:56.124688 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-ndkhx" Oct 13 17:39:56 crc kubenswrapper[4720]: I1013 17:39:56.151963 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7f3d3ba5-1521-4ac1-a005-871d7bb7eea0-fernet-keys\") pod \"7f3d3ba5-1521-4ac1-a005-871d7bb7eea0\" (UID: \"7f3d3ba5-1521-4ac1-a005-871d7bb7eea0\") " Oct 13 17:39:56 crc kubenswrapper[4720]: I1013 17:39:56.152061 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7f3d3ba5-1521-4ac1-a005-871d7bb7eea0-credential-keys\") pod \"7f3d3ba5-1521-4ac1-a005-871d7bb7eea0\" (UID: \"7f3d3ba5-1521-4ac1-a005-871d7bb7eea0\") " Oct 13 17:39:56 crc kubenswrapper[4720]: I1013 17:39:56.152217 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f3d3ba5-1521-4ac1-a005-871d7bb7eea0-combined-ca-bundle\") pod \"7f3d3ba5-1521-4ac1-a005-871d7bb7eea0\" (UID: \"7f3d3ba5-1521-4ac1-a005-871d7bb7eea0\") " Oct 13 17:39:56 crc kubenswrapper[4720]: I1013 17:39:56.152276 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-54hmp\" (UniqueName: \"kubernetes.io/projected/7f3d3ba5-1521-4ac1-a005-871d7bb7eea0-kube-api-access-54hmp\") pod \"7f3d3ba5-1521-4ac1-a005-871d7bb7eea0\" (UID: \"7f3d3ba5-1521-4ac1-a005-871d7bb7eea0\") " Oct 13 17:39:56 crc kubenswrapper[4720]: I1013 17:39:56.152343 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f3d3ba5-1521-4ac1-a005-871d7bb7eea0-scripts\") pod \"7f3d3ba5-1521-4ac1-a005-871d7bb7eea0\" (UID: \"7f3d3ba5-1521-4ac1-a005-871d7bb7eea0\") " Oct 13 17:39:56 crc kubenswrapper[4720]: I1013 17:39:56.152418 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f3d3ba5-1521-4ac1-a005-871d7bb7eea0-config-data\") pod \"7f3d3ba5-1521-4ac1-a005-871d7bb7eea0\" (UID: \"7f3d3ba5-1521-4ac1-a005-871d7bb7eea0\") " Oct 13 17:39:56 crc kubenswrapper[4720]: I1013 17:39:56.159964 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f3d3ba5-1521-4ac1-a005-871d7bb7eea0-kube-api-access-54hmp" (OuterVolumeSpecName: "kube-api-access-54hmp") pod "7f3d3ba5-1521-4ac1-a005-871d7bb7eea0" (UID: "7f3d3ba5-1521-4ac1-a005-871d7bb7eea0"). InnerVolumeSpecName "kube-api-access-54hmp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:39:56 crc kubenswrapper[4720]: I1013 17:39:56.160018 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f3d3ba5-1521-4ac1-a005-871d7bb7eea0-scripts" (OuterVolumeSpecName: "scripts") pod "7f3d3ba5-1521-4ac1-a005-871d7bb7eea0" (UID: "7f3d3ba5-1521-4ac1-a005-871d7bb7eea0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:39:56 crc kubenswrapper[4720]: I1013 17:39:56.160055 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f3d3ba5-1521-4ac1-a005-871d7bb7eea0-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "7f3d3ba5-1521-4ac1-a005-871d7bb7eea0" (UID: "7f3d3ba5-1521-4ac1-a005-871d7bb7eea0"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:39:56 crc kubenswrapper[4720]: I1013 17:39:56.163668 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f3d3ba5-1521-4ac1-a005-871d7bb7eea0-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "7f3d3ba5-1521-4ac1-a005-871d7bb7eea0" (UID: "7f3d3ba5-1521-4ac1-a005-871d7bb7eea0"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:39:56 crc kubenswrapper[4720]: I1013 17:39:56.191261 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f3d3ba5-1521-4ac1-a005-871d7bb7eea0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7f3d3ba5-1521-4ac1-a005-871d7bb7eea0" (UID: "7f3d3ba5-1521-4ac1-a005-871d7bb7eea0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:39:56 crc kubenswrapper[4720]: I1013 17:39:56.217378 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f3d3ba5-1521-4ac1-a005-871d7bb7eea0-config-data" (OuterVolumeSpecName: "config-data") pod "7f3d3ba5-1521-4ac1-a005-871d7bb7eea0" (UID: "7f3d3ba5-1521-4ac1-a005-871d7bb7eea0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:39:56 crc kubenswrapper[4720]: I1013 17:39:56.303901 4720 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7f3d3ba5-1521-4ac1-a005-871d7bb7eea0-credential-keys\") on node \"crc\" DevicePath \"\"" Oct 13 17:39:56 crc kubenswrapper[4720]: I1013 17:39:56.303932 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f3d3ba5-1521-4ac1-a005-871d7bb7eea0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 17:39:56 crc kubenswrapper[4720]: I1013 17:39:56.303941 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-54hmp\" (UniqueName: \"kubernetes.io/projected/7f3d3ba5-1521-4ac1-a005-871d7bb7eea0-kube-api-access-54hmp\") on node \"crc\" DevicePath \"\"" Oct 13 17:39:56 crc kubenswrapper[4720]: I1013 17:39:56.303953 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f3d3ba5-1521-4ac1-a005-871d7bb7eea0-scripts\") on node \"crc\" DevicePath \"\"" Oct 13 17:39:56 crc kubenswrapper[4720]: I1013 17:39:56.303962 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f3d3ba5-1521-4ac1-a005-871d7bb7eea0-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 17:39:56 crc kubenswrapper[4720]: I1013 17:39:56.303972 4720 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7f3d3ba5-1521-4ac1-a005-871d7bb7eea0-fernet-keys\") on node \"crc\" DevicePath \"\"" Oct 13 17:39:56 crc kubenswrapper[4720]: I1013 17:39:56.638138 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 13 17:39:56 crc kubenswrapper[4720]: I1013 17:39:56.709916 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32985099-946d-4e5f-923c-66944d4bfdcb-logs\") pod \"32985099-946d-4e5f-923c-66944d4bfdcb\" (UID: \"32985099-946d-4e5f-923c-66944d4bfdcb\") " Oct 13 17:39:56 crc kubenswrapper[4720]: I1013 17:39:56.710002 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32985099-946d-4e5f-923c-66944d4bfdcb-config-data\") pod \"32985099-946d-4e5f-923c-66944d4bfdcb\" (UID: \"32985099-946d-4e5f-923c-66944d4bfdcb\") " Oct 13 17:39:56 crc kubenswrapper[4720]: I1013 17:39:56.710051 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"32985099-946d-4e5f-923c-66944d4bfdcb\" (UID: \"32985099-946d-4e5f-923c-66944d4bfdcb\") " Oct 13 17:39:56 crc kubenswrapper[4720]: I1013 17:39:56.710101 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32985099-946d-4e5f-923c-66944d4bfdcb-scripts\") pod \"32985099-946d-4e5f-923c-66944d4bfdcb\" (UID: \"32985099-946d-4e5f-923c-66944d4bfdcb\") " Oct 13 17:39:56 crc kubenswrapper[4720]: I1013 17:39:56.710157 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32985099-946d-4e5f-923c-66944d4bfdcb-combined-ca-bundle\") pod \"32985099-946d-4e5f-923c-66944d4bfdcb\" (UID: \"32985099-946d-4e5f-923c-66944d4bfdcb\") " Oct 13 17:39:56 crc kubenswrapper[4720]: I1013 17:39:56.710225 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/32985099-946d-4e5f-923c-66944d4bfdcb-httpd-run\") pod \"32985099-946d-4e5f-923c-66944d4bfdcb\" (UID: \"32985099-946d-4e5f-923c-66944d4bfdcb\") " Oct 13 17:39:56 crc kubenswrapper[4720]: I1013 17:39:56.710334 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wsk5b\" (UniqueName: \"kubernetes.io/projected/32985099-946d-4e5f-923c-66944d4bfdcb-kube-api-access-wsk5b\") pod \"32985099-946d-4e5f-923c-66944d4bfdcb\" (UID: \"32985099-946d-4e5f-923c-66944d4bfdcb\") " Oct 13 17:39:56 crc kubenswrapper[4720]: I1013 17:39:56.710520 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32985099-946d-4e5f-923c-66944d4bfdcb-logs" (OuterVolumeSpecName: "logs") pod "32985099-946d-4e5f-923c-66944d4bfdcb" (UID: "32985099-946d-4e5f-923c-66944d4bfdcb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 17:39:56 crc kubenswrapper[4720]: I1013 17:39:56.710824 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32985099-946d-4e5f-923c-66944d4bfdcb-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "32985099-946d-4e5f-923c-66944d4bfdcb" (UID: "32985099-946d-4e5f-923c-66944d4bfdcb"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 17:39:56 crc kubenswrapper[4720]: I1013 17:39:56.711067 4720 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32985099-946d-4e5f-923c-66944d4bfdcb-logs\") on node \"crc\" DevicePath \"\"" Oct 13 17:39:56 crc kubenswrapper[4720]: I1013 17:39:56.711087 4720 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/32985099-946d-4e5f-923c-66944d4bfdcb-httpd-run\") on node \"crc\" DevicePath \"\"" Oct 13 17:39:56 crc kubenswrapper[4720]: I1013 17:39:56.713824 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32985099-946d-4e5f-923c-66944d4bfdcb-scripts" (OuterVolumeSpecName: "scripts") pod "32985099-946d-4e5f-923c-66944d4bfdcb" (UID: "32985099-946d-4e5f-923c-66944d4bfdcb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:39:56 crc kubenswrapper[4720]: I1013 17:39:56.713856 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "32985099-946d-4e5f-923c-66944d4bfdcb" (UID: "32985099-946d-4e5f-923c-66944d4bfdcb"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 13 17:39:56 crc kubenswrapper[4720]: I1013 17:39:56.717163 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32985099-946d-4e5f-923c-66944d4bfdcb-kube-api-access-wsk5b" (OuterVolumeSpecName: "kube-api-access-wsk5b") pod "32985099-946d-4e5f-923c-66944d4bfdcb" (UID: "32985099-946d-4e5f-923c-66944d4bfdcb"). InnerVolumeSpecName "kube-api-access-wsk5b". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:39:56 crc kubenswrapper[4720]: I1013 17:39:56.733892 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32985099-946d-4e5f-923c-66944d4bfdcb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "32985099-946d-4e5f-923c-66944d4bfdcb" (UID: "32985099-946d-4e5f-923c-66944d4bfdcb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:39:56 crc kubenswrapper[4720]: I1013 17:39:56.776243 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32985099-946d-4e5f-923c-66944d4bfdcb-config-data" (OuterVolumeSpecName: "config-data") pod "32985099-946d-4e5f-923c-66944d4bfdcb" (UID: "32985099-946d-4e5f-923c-66944d4bfdcb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:39:56 crc kubenswrapper[4720]: I1013 17:39:56.812312 4720 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Oct 13 17:39:56 crc kubenswrapper[4720]: I1013 17:39:56.812398 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32985099-946d-4e5f-923c-66944d4bfdcb-scripts\") on node \"crc\" DevicePath \"\"" Oct 13 17:39:56 crc kubenswrapper[4720]: I1013 17:39:56.812408 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32985099-946d-4e5f-923c-66944d4bfdcb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 17:39:56 crc kubenswrapper[4720]: I1013 17:39:56.812419 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wsk5b\" (UniqueName: \"kubernetes.io/projected/32985099-946d-4e5f-923c-66944d4bfdcb-kube-api-access-wsk5b\") on node \"crc\" DevicePath \"\"" Oct 13 17:39:56 crc kubenswrapper[4720]: I1013 17:39:56.812426 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32985099-946d-4e5f-923c-66944d4bfdcb-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 17:39:56 crc kubenswrapper[4720]: I1013 17:39:56.828832 4720 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Oct 13 17:39:56 crc kubenswrapper[4720]: I1013 17:39:56.914039 4720 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Oct 13 17:39:57 crc kubenswrapper[4720]: I1013 17:39:57.147504 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"32985099-946d-4e5f-923c-66944d4bfdcb","Type":"ContainerDied","Data":"35235969d9432493d6b5f4e33110996abb5dbf80fe345ed833af4bce110dcccb"} Oct 13 17:39:57 crc kubenswrapper[4720]: I1013 17:39:57.147561 4720 scope.go:117] "RemoveContainer" containerID="fdc324ddd8be1aa561b65f8ecf36582cb89d639f575e2bbd85d709bac6f274b8" Oct 13 17:39:57 crc kubenswrapper[4720]: I1013 17:39:57.147726 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 13 17:39:57 crc kubenswrapper[4720]: I1013 17:39:57.197033 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-ndkhx"] Oct 13 17:39:57 crc kubenswrapper[4720]: I1013 17:39:57.204691 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-ndkhx"] Oct 13 17:39:57 crc kubenswrapper[4720]: I1013 17:39:57.212139 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 13 17:39:57 crc kubenswrapper[4720]: I1013 17:39:57.222999 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 13 17:39:57 crc kubenswrapper[4720]: I1013 17:39:57.237464 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Oct 13 17:39:57 crc kubenswrapper[4720]: E1013 17:39:57.237870 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f3d3ba5-1521-4ac1-a005-871d7bb7eea0" containerName="keystone-bootstrap" Oct 13 17:39:57 crc kubenswrapper[4720]: I1013 17:39:57.237886 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f3d3ba5-1521-4ac1-a005-871d7bb7eea0" containerName="keystone-bootstrap" Oct 13 17:39:57 crc kubenswrapper[4720]: E1013 17:39:57.237913 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32985099-946d-4e5f-923c-66944d4bfdcb" containerName="glance-log" Oct 13 17:39:57 crc kubenswrapper[4720]: I1013 17:39:57.237919 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="32985099-946d-4e5f-923c-66944d4bfdcb" containerName="glance-log" Oct 13 17:39:57 crc kubenswrapper[4720]: E1013 17:39:57.237929 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32985099-946d-4e5f-923c-66944d4bfdcb" containerName="glance-httpd" Oct 13 17:39:57 crc kubenswrapper[4720]: I1013 17:39:57.237936 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="32985099-946d-4e5f-923c-66944d4bfdcb" containerName="glance-httpd" Oct 13 17:39:57 crc kubenswrapper[4720]: I1013 17:39:57.238098 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f3d3ba5-1521-4ac1-a005-871d7bb7eea0" containerName="keystone-bootstrap" Oct 13 17:39:57 crc kubenswrapper[4720]: I1013 17:39:57.238123 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="32985099-946d-4e5f-923c-66944d4bfdcb" containerName="glance-httpd" Oct 13 17:39:57 crc kubenswrapper[4720]: I1013 17:39:57.238139 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="32985099-946d-4e5f-923c-66944d4bfdcb" containerName="glance-log" Oct 13 17:39:57 crc kubenswrapper[4720]: I1013 17:39:57.238998 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 13 17:39:57 crc kubenswrapper[4720]: I1013 17:39:57.241711 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Oct 13 17:39:57 crc kubenswrapper[4720]: I1013 17:39:57.245045 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 13 17:39:57 crc kubenswrapper[4720]: I1013 17:39:57.292916 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-2z4fc"] Oct 13 17:39:57 crc kubenswrapper[4720]: I1013 17:39:57.295149 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-2z4fc" Oct 13 17:39:57 crc kubenswrapper[4720]: I1013 17:39:57.296981 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Oct 13 17:39:57 crc kubenswrapper[4720]: I1013 17:39:57.297076 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Oct 13 17:39:57 crc kubenswrapper[4720]: I1013 17:39:57.297140 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Oct 13 17:39:57 crc kubenswrapper[4720]: I1013 17:39:57.296982 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-gnxbs" Oct 13 17:39:57 crc kubenswrapper[4720]: I1013 17:39:57.301291 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-2z4fc"] Oct 13 17:39:57 crc kubenswrapper[4720]: I1013 17:39:57.320124 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79085d79-21f8-43c4-af7c-e51e0c9f9610-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"79085d79-21f8-43c4-af7c-e51e0c9f9610\") " pod="openstack/glance-default-external-api-0" Oct 13 17:39:57 crc kubenswrapper[4720]: I1013 17:39:57.320244 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldgwl\" (UniqueName: \"kubernetes.io/projected/79085d79-21f8-43c4-af7c-e51e0c9f9610-kube-api-access-ldgwl\") pod \"glance-default-external-api-0\" (UID: \"79085d79-21f8-43c4-af7c-e51e0c9f9610\") " pod="openstack/glance-default-external-api-0" Oct 13 17:39:57 crc kubenswrapper[4720]: I1013 17:39:57.320674 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/79085d79-21f8-43c4-af7c-e51e0c9f9610-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"79085d79-21f8-43c4-af7c-e51e0c9f9610\") " pod="openstack/glance-default-external-api-0" Oct 13 17:39:57 crc kubenswrapper[4720]: I1013 17:39:57.320733 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79085d79-21f8-43c4-af7c-e51e0c9f9610-config-data\") pod \"glance-default-external-api-0\" (UID: \"79085d79-21f8-43c4-af7c-e51e0c9f9610\") " pod="openstack/glance-default-external-api-0" Oct 13 17:39:57 crc kubenswrapper[4720]: I1013 17:39:57.320755 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"79085d79-21f8-43c4-af7c-e51e0c9f9610\") " pod="openstack/glance-default-external-api-0" Oct 13 17:39:57 crc kubenswrapper[4720]: I1013 17:39:57.321054 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79085d79-21f8-43c4-af7c-e51e0c9f9610-logs\") pod \"glance-default-external-api-0\" (UID: \"79085d79-21f8-43c4-af7c-e51e0c9f9610\") " pod="openstack/glance-default-external-api-0" Oct 13 17:39:57 crc kubenswrapper[4720]: I1013 17:39:57.321092 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/79085d79-21f8-43c4-af7c-e51e0c9f9610-scripts\") pod \"glance-default-external-api-0\" (UID: \"79085d79-21f8-43c4-af7c-e51e0c9f9610\") " pod="openstack/glance-default-external-api-0" Oct 13 17:39:57 crc kubenswrapper[4720]: I1013 17:39:57.424731 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/15d273d6-ce41-4aeb-88e1-42a1f9423737-credential-keys\") pod \"keystone-bootstrap-2z4fc\" (UID: \"15d273d6-ce41-4aeb-88e1-42a1f9423737\") " pod="openstack/keystone-bootstrap-2z4fc" Oct 13 17:39:57 crc kubenswrapper[4720]: I1013 17:39:57.424795 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79085d79-21f8-43c4-af7c-e51e0c9f9610-logs\") pod \"glance-default-external-api-0\" (UID: \"79085d79-21f8-43c4-af7c-e51e0c9f9610\") " pod="openstack/glance-default-external-api-0" Oct 13 17:39:57 crc kubenswrapper[4720]: I1013 17:39:57.424823 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79085d79-21f8-43c4-af7c-e51e0c9f9610-scripts\") pod \"glance-default-external-api-0\" (UID: \"79085d79-21f8-43c4-af7c-e51e0c9f9610\") " pod="openstack/glance-default-external-api-0" Oct 13 17:39:57 crc kubenswrapper[4720]: I1013 17:39:57.424862 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15d273d6-ce41-4aeb-88e1-42a1f9423737-scripts\") pod \"keystone-bootstrap-2z4fc\" (UID: \"15d273d6-ce41-4aeb-88e1-42a1f9423737\") " pod="openstack/keystone-bootstrap-2z4fc" Oct 13 17:39:57 crc kubenswrapper[4720]: I1013 17:39:57.424879 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15d273d6-ce41-4aeb-88e1-42a1f9423737-combined-ca-bundle\") pod \"keystone-bootstrap-2z4fc\" (UID: \"15d273d6-ce41-4aeb-88e1-42a1f9423737\") " pod="openstack/keystone-bootstrap-2z4fc" Oct 13 17:39:57 crc kubenswrapper[4720]: I1013 17:39:57.424914 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/15d273d6-ce41-4aeb-88e1-42a1f9423737-fernet-keys\") pod \"keystone-bootstrap-2z4fc\" (UID: \"15d273d6-ce41-4aeb-88e1-42a1f9423737\") " pod="openstack/keystone-bootstrap-2z4fc" Oct 13 17:39:57 crc kubenswrapper[4720]: I1013 17:39:57.424931 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79085d79-21f8-43c4-af7c-e51e0c9f9610-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"79085d79-21f8-43c4-af7c-e51e0c9f9610\") " pod="openstack/glance-default-external-api-0" Oct 13 17:39:57 crc kubenswrapper[4720]: I1013 17:39:57.424963 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15d273d6-ce41-4aeb-88e1-42a1f9423737-config-data\") pod \"keystone-bootstrap-2z4fc\" (UID: \"15d273d6-ce41-4aeb-88e1-42a1f9423737\") " pod="openstack/keystone-bootstrap-2z4fc" Oct 13 17:39:57 crc kubenswrapper[4720]: I1013 17:39:57.425045 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldgwl\" (UniqueName: 
\"kubernetes.io/projected/79085d79-21f8-43c4-af7c-e51e0c9f9610-kube-api-access-ldgwl\") pod \"glance-default-external-api-0\" (UID: \"79085d79-21f8-43c4-af7c-e51e0c9f9610\") " pod="openstack/glance-default-external-api-0" Oct 13 17:39:57 crc kubenswrapper[4720]: I1013 17:39:57.425444 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/79085d79-21f8-43c4-af7c-e51e0c9f9610-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"79085d79-21f8-43c4-af7c-e51e0c9f9610\") " pod="openstack/glance-default-external-api-0" Oct 13 17:39:57 crc kubenswrapper[4720]: I1013 17:39:57.425471 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79085d79-21f8-43c4-af7c-e51e0c9f9610-config-data\") pod \"glance-default-external-api-0\" (UID: \"79085d79-21f8-43c4-af7c-e51e0c9f9610\") " pod="openstack/glance-default-external-api-0" Oct 13 17:39:57 crc kubenswrapper[4720]: I1013 17:39:57.425494 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"79085d79-21f8-43c4-af7c-e51e0c9f9610\") " pod="openstack/glance-default-external-api-0" Oct 13 17:39:57 crc kubenswrapper[4720]: I1013 17:39:57.425551 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rvh4\" (UniqueName: \"kubernetes.io/projected/15d273d6-ce41-4aeb-88e1-42a1f9423737-kube-api-access-2rvh4\") pod \"keystone-bootstrap-2z4fc\" (UID: \"15d273d6-ce41-4aeb-88e1-42a1f9423737\") " pod="openstack/keystone-bootstrap-2z4fc" Oct 13 17:39:57 crc kubenswrapper[4720]: I1013 17:39:57.425930 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/79085d79-21f8-43c4-af7c-e51e0c9f9610-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"79085d79-21f8-43c4-af7c-e51e0c9f9610\") " pod="openstack/glance-default-external-api-0" Oct 13 17:39:57 crc kubenswrapper[4720]: I1013 17:39:57.426175 4720 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"79085d79-21f8-43c4-af7c-e51e0c9f9610\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-external-api-0" Oct 13 17:39:57 crc kubenswrapper[4720]: I1013 17:39:57.426617 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79085d79-21f8-43c4-af7c-e51e0c9f9610-logs\") pod \"glance-default-external-api-0\" (UID: \"79085d79-21f8-43c4-af7c-e51e0c9f9610\") " pod="openstack/glance-default-external-api-0" Oct 13 17:39:57 crc kubenswrapper[4720]: I1013 17:39:57.430338 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79085d79-21f8-43c4-af7c-e51e0c9f9610-scripts\") pod \"glance-default-external-api-0\" (UID: \"79085d79-21f8-43c4-af7c-e51e0c9f9610\") " pod="openstack/glance-default-external-api-0" Oct 13 17:39:57 crc kubenswrapper[4720]: I1013 17:39:57.430871 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79085d79-21f8-43c4-af7c-e51e0c9f9610-combined-ca-bundle\") pod 
\"glance-default-external-api-0\" (UID: \"79085d79-21f8-43c4-af7c-e51e0c9f9610\") " pod="openstack/glance-default-external-api-0" Oct 13 17:39:57 crc kubenswrapper[4720]: I1013 17:39:57.432011 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79085d79-21f8-43c4-af7c-e51e0c9f9610-config-data\") pod \"glance-default-external-api-0\" (UID: \"79085d79-21f8-43c4-af7c-e51e0c9f9610\") " pod="openstack/glance-default-external-api-0" Oct 13 17:39:57 crc kubenswrapper[4720]: I1013 17:39:57.457663 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldgwl\" (UniqueName: \"kubernetes.io/projected/79085d79-21f8-43c4-af7c-e51e0c9f9610-kube-api-access-ldgwl\") pod \"glance-default-external-api-0\" (UID: \"79085d79-21f8-43c4-af7c-e51e0c9f9610\") " pod="openstack/glance-default-external-api-0" Oct 13 17:39:57 crc kubenswrapper[4720]: I1013 17:39:57.467903 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"79085d79-21f8-43c4-af7c-e51e0c9f9610\") " pod="openstack/glance-default-external-api-0" Oct 13 17:39:57 crc kubenswrapper[4720]: I1013 17:39:57.526606 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rvh4\" (UniqueName: \"kubernetes.io/projected/15d273d6-ce41-4aeb-88e1-42a1f9423737-kube-api-access-2rvh4\") pod \"keystone-bootstrap-2z4fc\" (UID: \"15d273d6-ce41-4aeb-88e1-42a1f9423737\") " pod="openstack/keystone-bootstrap-2z4fc" Oct 13 17:39:57 crc kubenswrapper[4720]: I1013 17:39:57.526679 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/15d273d6-ce41-4aeb-88e1-42a1f9423737-credential-keys\") pod \"keystone-bootstrap-2z4fc\" (UID: \"15d273d6-ce41-4aeb-88e1-42a1f9423737\") " pod="openstack/keystone-bootstrap-2z4fc" Oct 13 17:39:57 crc kubenswrapper[4720]: I1013 17:39:57.526728 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15d273d6-ce41-4aeb-88e1-42a1f9423737-scripts\") pod \"keystone-bootstrap-2z4fc\" (UID: \"15d273d6-ce41-4aeb-88e1-42a1f9423737\") " pod="openstack/keystone-bootstrap-2z4fc" Oct 13 17:39:57 crc kubenswrapper[4720]: I1013 17:39:57.526748 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15d273d6-ce41-4aeb-88e1-42a1f9423737-combined-ca-bundle\") pod \"keystone-bootstrap-2z4fc\" (UID: \"15d273d6-ce41-4aeb-88e1-42a1f9423737\") " pod="openstack/keystone-bootstrap-2z4fc" Oct 13 17:39:57 crc kubenswrapper[4720]: I1013 17:39:57.526781 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/15d273d6-ce41-4aeb-88e1-42a1f9423737-fernet-keys\") pod \"keystone-bootstrap-2z4fc\" (UID: \"15d273d6-ce41-4aeb-88e1-42a1f9423737\") " pod="openstack/keystone-bootstrap-2z4fc" Oct 13 17:39:57 crc kubenswrapper[4720]: I1013 17:39:57.526809 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15d273d6-ce41-4aeb-88e1-42a1f9423737-config-data\") pod \"keystone-bootstrap-2z4fc\" (UID: \"15d273d6-ce41-4aeb-88e1-42a1f9423737\") " pod="openstack/keystone-bootstrap-2z4fc" Oct 13 17:39:57 crc 
kubenswrapper[4720]: I1013 17:39:57.531010 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/15d273d6-ce41-4aeb-88e1-42a1f9423737-credential-keys\") pod \"keystone-bootstrap-2z4fc\" (UID: \"15d273d6-ce41-4aeb-88e1-42a1f9423737\") " pod="openstack/keystone-bootstrap-2z4fc" Oct 13 17:39:57 crc kubenswrapper[4720]: I1013 17:39:57.531794 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15d273d6-ce41-4aeb-88e1-42a1f9423737-scripts\") pod \"keystone-bootstrap-2z4fc\" (UID: \"15d273d6-ce41-4aeb-88e1-42a1f9423737\") " pod="openstack/keystone-bootstrap-2z4fc" Oct 13 17:39:57 crc kubenswrapper[4720]: I1013 17:39:57.532495 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15d273d6-ce41-4aeb-88e1-42a1f9423737-config-data\") pod \"keystone-bootstrap-2z4fc\" (UID: \"15d273d6-ce41-4aeb-88e1-42a1f9423737\") " pod="openstack/keystone-bootstrap-2z4fc" Oct 13 17:39:57 crc kubenswrapper[4720]: I1013 17:39:57.538225 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15d273d6-ce41-4aeb-88e1-42a1f9423737-combined-ca-bundle\") pod \"keystone-bootstrap-2z4fc\" (UID: \"15d273d6-ce41-4aeb-88e1-42a1f9423737\") " pod="openstack/keystone-bootstrap-2z4fc" Oct 13 17:39:57 crc kubenswrapper[4720]: I1013 17:39:57.542718 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/15d273d6-ce41-4aeb-88e1-42a1f9423737-fernet-keys\") pod \"keystone-bootstrap-2z4fc\" (UID: \"15d273d6-ce41-4aeb-88e1-42a1f9423737\") " pod="openstack/keystone-bootstrap-2z4fc" Oct 13 17:39:57 crc kubenswrapper[4720]: I1013 17:39:57.543722 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rvh4\" (UniqueName: \"kubernetes.io/projected/15d273d6-ce41-4aeb-88e1-42a1f9423737-kube-api-access-2rvh4\") pod \"keystone-bootstrap-2z4fc\" (UID: \"15d273d6-ce41-4aeb-88e1-42a1f9423737\") " pod="openstack/keystone-bootstrap-2z4fc" Oct 13 17:39:57 crc kubenswrapper[4720]: I1013 17:39:57.558223 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 13 17:39:57 crc kubenswrapper[4720]: I1013 17:39:57.618984 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-2z4fc" Oct 13 17:39:59 crc kubenswrapper[4720]: I1013 17:39:59.189426 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32985099-946d-4e5f-923c-66944d4bfdcb" path="/var/lib/kubelet/pods/32985099-946d-4e5f-923c-66944d4bfdcb/volumes" Oct 13 17:39:59 crc kubenswrapper[4720]: I1013 17:39:59.190758 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f3d3ba5-1521-4ac1-a005-871d7bb7eea0" path="/var/lib/kubelet/pods/7f3d3ba5-1521-4ac1-a005-871d7bb7eea0/volumes" Oct 13 17:39:59 crc kubenswrapper[4720]: I1013 17:39:59.953579 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8b5c85b87-jdqmt" Oct 13 17:40:00 crc kubenswrapper[4720]: I1013 17:40:00.022797 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-hqcjj"] Oct 13 17:40:00 crc kubenswrapper[4720]: I1013 17:40:00.023366 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-77585f5f8c-hqcjj" podUID="6720cae9-e687-497d-955d-53e36250c8a4" containerName="dnsmasq-dns" containerID="cri-o://b0002647c37f7785f53d93639f39f1283c44c9eb5f43c35de676c9b8cb38b83b" gracePeriod=10 Oct 13 17:40:01 crc kubenswrapper[4720]: I1013 17:40:01.197080 4720 generic.go:334] "Generic (PLEG): container finished" podID="6720cae9-e687-497d-955d-53e36250c8a4" containerID="b0002647c37f7785f53d93639f39f1283c44c9eb5f43c35de676c9b8cb38b83b" exitCode=0 Oct 13 17:40:01 crc kubenswrapper[4720]: I1013 17:40:01.197126 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-hqcjj" event={"ID":"6720cae9-e687-497d-955d-53e36250c8a4","Type":"ContainerDied","Data":"b0002647c37f7785f53d93639f39f1283c44c9eb5f43c35de676c9b8cb38b83b"} Oct 13 17:40:01 crc kubenswrapper[4720]: I1013 17:40:01.216210 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-77585f5f8c-hqcjj" podUID="6720cae9-e687-497d-955d-53e36250c8a4" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.127:5353: connect: connection refused" Oct 13 17:40:01 crc kubenswrapper[4720]: I1013 17:40:01.284097 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 13 17:40:01 crc kubenswrapper[4720]: I1013 17:40:01.705596 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-bf8c4749f-wv7s9"] Oct 13 17:40:01 crc kubenswrapper[4720]: I1013 17:40:01.739083 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7df8489788-ntn24"] Oct 13 17:40:01 crc kubenswrapper[4720]: I1013 17:40:01.740503 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7df8489788-ntn24" Oct 13 17:40:01 crc kubenswrapper[4720]: I1013 17:40:01.744890 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Oct 13 17:40:01 crc kubenswrapper[4720]: I1013 17:40:01.765613 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7df8489788-ntn24"] Oct 13 17:40:01 crc kubenswrapper[4720]: I1013 17:40:01.806494 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/139c2e02-2c20-4a21-a5c0-753c6003473b-config-data\") pod \"horizon-7df8489788-ntn24\" (UID: \"139c2e02-2c20-4a21-a5c0-753c6003473b\") " pod="openstack/horizon-7df8489788-ntn24" Oct 13 17:40:01 crc kubenswrapper[4720]: I1013 17:40:01.806557 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/139c2e02-2c20-4a21-a5c0-753c6003473b-scripts\") pod \"horizon-7df8489788-ntn24\" (UID: \"139c2e02-2c20-4a21-a5c0-753c6003473b\") " pod="openstack/horizon-7df8489788-ntn24" Oct 13 17:40:01 crc kubenswrapper[4720]: I1013 17:40:01.806602 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrd2q\" (UniqueName: \"kubernetes.io/projected/139c2e02-2c20-4a21-a5c0-753c6003473b-kube-api-access-lrd2q\") pod \"horizon-7df8489788-ntn24\" (UID: \"139c2e02-2c20-4a21-a5c0-753c6003473b\") " pod="openstack/horizon-7df8489788-ntn24" Oct 13 17:40:01 crc kubenswrapper[4720]: I1013 17:40:01.806644 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/139c2e02-2c20-4a21-a5c0-753c6003473b-horizon-secret-key\") pod \"horizon-7df8489788-ntn24\" (UID: \"139c2e02-2c20-4a21-a5c0-753c6003473b\") " pod="openstack/horizon-7df8489788-ntn24" Oct 13 17:40:01 crc kubenswrapper[4720]: I1013 17:40:01.806688 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/139c2e02-2c20-4a21-a5c0-753c6003473b-combined-ca-bundle\") pod \"horizon-7df8489788-ntn24\" (UID: \"139c2e02-2c20-4a21-a5c0-753c6003473b\") " pod="openstack/horizon-7df8489788-ntn24" Oct 13 17:40:01 crc kubenswrapper[4720]: I1013 17:40:01.806718 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/139c2e02-2c20-4a21-a5c0-753c6003473b-logs\") pod \"horizon-7df8489788-ntn24\" (UID: \"139c2e02-2c20-4a21-a5c0-753c6003473b\") " pod="openstack/horizon-7df8489788-ntn24" Oct 13 17:40:01 crc kubenswrapper[4720]: I1013 17:40:01.806733 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/139c2e02-2c20-4a21-a5c0-753c6003473b-horizon-tls-certs\") pod \"horizon-7df8489788-ntn24\" (UID: \"139c2e02-2c20-4a21-a5c0-753c6003473b\") " pod="openstack/horizon-7df8489788-ntn24" Oct 13 17:40:01 crc kubenswrapper[4720]: I1013 17:40:01.836863 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7bf7d9f5bf-6khmt"] Oct 13 17:40:01 crc kubenswrapper[4720]: I1013 17:40:01.877070 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7984dcc5d8-8c2ss"] Oct 13 17:40:01 crc kubenswrapper[4720]: I1013 17:40:01.879162 
4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7984dcc5d8-8c2ss" Oct 13 17:40:01 crc kubenswrapper[4720]: I1013 17:40:01.888806 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7984dcc5d8-8c2ss"] Oct 13 17:40:01 crc kubenswrapper[4720]: I1013 17:40:01.911329 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/139c2e02-2c20-4a21-a5c0-753c6003473b-combined-ca-bundle\") pod \"horizon-7df8489788-ntn24\" (UID: \"139c2e02-2c20-4a21-a5c0-753c6003473b\") " pod="openstack/horizon-7df8489788-ntn24" Oct 13 17:40:01 crc kubenswrapper[4720]: I1013 17:40:01.911401 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/139c2e02-2c20-4a21-a5c0-753c6003473b-logs\") pod \"horizon-7df8489788-ntn24\" (UID: \"139c2e02-2c20-4a21-a5c0-753c6003473b\") " pod="openstack/horizon-7df8489788-ntn24" Oct 13 17:40:01 crc kubenswrapper[4720]: I1013 17:40:01.911432 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27768d75-429c-45c3-bf03-98527e94fe63-logs\") pod \"horizon-7984dcc5d8-8c2ss\" (UID: \"27768d75-429c-45c3-bf03-98527e94fe63\") " pod="openstack/horizon-7984dcc5d8-8c2ss" Oct 13 17:40:01 crc kubenswrapper[4720]: I1013 17:40:01.911455 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/139c2e02-2c20-4a21-a5c0-753c6003473b-horizon-tls-certs\") pod \"horizon-7df8489788-ntn24\" (UID: \"139c2e02-2c20-4a21-a5c0-753c6003473b\") " pod="openstack/horizon-7df8489788-ntn24" Oct 13 17:40:01 crc kubenswrapper[4720]: I1013 17:40:01.911497 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/139c2e02-2c20-4a21-a5c0-753c6003473b-config-data\") pod \"horizon-7df8489788-ntn24\" (UID: \"139c2e02-2c20-4a21-a5c0-753c6003473b\") " pod="openstack/horizon-7df8489788-ntn24" Oct 13 17:40:01 crc kubenswrapper[4720]: I1013 17:40:01.911557 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/27768d75-429c-45c3-bf03-98527e94fe63-config-data\") pod \"horizon-7984dcc5d8-8c2ss\" (UID: \"27768d75-429c-45c3-bf03-98527e94fe63\") " pod="openstack/horizon-7984dcc5d8-8c2ss" Oct 13 17:40:01 crc kubenswrapper[4720]: I1013 17:40:01.911606 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/139c2e02-2c20-4a21-a5c0-753c6003473b-scripts\") pod \"horizon-7df8489788-ntn24\" (UID: \"139c2e02-2c20-4a21-a5c0-753c6003473b\") " pod="openstack/horizon-7df8489788-ntn24" Oct 13 17:40:01 crc kubenswrapper[4720]: I1013 17:40:01.911642 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27768d75-429c-45c3-bf03-98527e94fe63-combined-ca-bundle\") pod \"horizon-7984dcc5d8-8c2ss\" (UID: \"27768d75-429c-45c3-bf03-98527e94fe63\") " pod="openstack/horizon-7984dcc5d8-8c2ss" Oct 13 17:40:01 crc kubenswrapper[4720]: I1013 17:40:01.911687 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrd2q\" (UniqueName: 
\"kubernetes.io/projected/139c2e02-2c20-4a21-a5c0-753c6003473b-kube-api-access-lrd2q\") pod \"horizon-7df8489788-ntn24\" (UID: \"139c2e02-2c20-4a21-a5c0-753c6003473b\") " pod="openstack/horizon-7df8489788-ntn24" Oct 13 17:40:01 crc kubenswrapper[4720]: I1013 17:40:01.911737 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/139c2e02-2c20-4a21-a5c0-753c6003473b-horizon-secret-key\") pod \"horizon-7df8489788-ntn24\" (UID: \"139c2e02-2c20-4a21-a5c0-753c6003473b\") " pod="openstack/horizon-7df8489788-ntn24" Oct 13 17:40:01 crc kubenswrapper[4720]: I1013 17:40:01.911759 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d99pz\" (UniqueName: \"kubernetes.io/projected/27768d75-429c-45c3-bf03-98527e94fe63-kube-api-access-d99pz\") pod \"horizon-7984dcc5d8-8c2ss\" (UID: \"27768d75-429c-45c3-bf03-98527e94fe63\") " pod="openstack/horizon-7984dcc5d8-8c2ss" Oct 13 17:40:01 crc kubenswrapper[4720]: I1013 17:40:01.911787 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/27768d75-429c-45c3-bf03-98527e94fe63-horizon-tls-certs\") pod \"horizon-7984dcc5d8-8c2ss\" (UID: \"27768d75-429c-45c3-bf03-98527e94fe63\") " pod="openstack/horizon-7984dcc5d8-8c2ss" Oct 13 17:40:01 crc kubenswrapper[4720]: I1013 17:40:01.911817 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/27768d75-429c-45c3-bf03-98527e94fe63-horizon-secret-key\") pod \"horizon-7984dcc5d8-8c2ss\" (UID: \"27768d75-429c-45c3-bf03-98527e94fe63\") " pod="openstack/horizon-7984dcc5d8-8c2ss" Oct 13 17:40:01 crc kubenswrapper[4720]: I1013 17:40:01.911842 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/27768d75-429c-45c3-bf03-98527e94fe63-scripts\") pod \"horizon-7984dcc5d8-8c2ss\" (UID: \"27768d75-429c-45c3-bf03-98527e94fe63\") " pod="openstack/horizon-7984dcc5d8-8c2ss" Oct 13 17:40:01 crc kubenswrapper[4720]: I1013 17:40:01.913696 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/139c2e02-2c20-4a21-a5c0-753c6003473b-config-data\") pod \"horizon-7df8489788-ntn24\" (UID: \"139c2e02-2c20-4a21-a5c0-753c6003473b\") " pod="openstack/horizon-7df8489788-ntn24" Oct 13 17:40:01 crc kubenswrapper[4720]: I1013 17:40:01.914021 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/139c2e02-2c20-4a21-a5c0-753c6003473b-scripts\") pod \"horizon-7df8489788-ntn24\" (UID: \"139c2e02-2c20-4a21-a5c0-753c6003473b\") " pod="openstack/horizon-7df8489788-ntn24" Oct 13 17:40:01 crc kubenswrapper[4720]: I1013 17:40:01.914029 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/139c2e02-2c20-4a21-a5c0-753c6003473b-logs\") pod \"horizon-7df8489788-ntn24\" (UID: \"139c2e02-2c20-4a21-a5c0-753c6003473b\") " pod="openstack/horizon-7df8489788-ntn24" Oct 13 17:40:01 crc kubenswrapper[4720]: I1013 17:40:01.941951 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/139c2e02-2c20-4a21-a5c0-753c6003473b-horizon-secret-key\") pod 
\"horizon-7df8489788-ntn24\" (UID: \"139c2e02-2c20-4a21-a5c0-753c6003473b\") " pod="openstack/horizon-7df8489788-ntn24" Oct 13 17:40:01 crc kubenswrapper[4720]: I1013 17:40:01.942063 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/139c2e02-2c20-4a21-a5c0-753c6003473b-horizon-tls-certs\") pod \"horizon-7df8489788-ntn24\" (UID: \"139c2e02-2c20-4a21-a5c0-753c6003473b\") " pod="openstack/horizon-7df8489788-ntn24" Oct 13 17:40:01 crc kubenswrapper[4720]: I1013 17:40:01.942313 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/139c2e02-2c20-4a21-a5c0-753c6003473b-combined-ca-bundle\") pod \"horizon-7df8489788-ntn24\" (UID: \"139c2e02-2c20-4a21-a5c0-753c6003473b\") " pod="openstack/horizon-7df8489788-ntn24" Oct 13 17:40:01 crc kubenswrapper[4720]: I1013 17:40:01.943686 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrd2q\" (UniqueName: \"kubernetes.io/projected/139c2e02-2c20-4a21-a5c0-753c6003473b-kube-api-access-lrd2q\") pod \"horizon-7df8489788-ntn24\" (UID: \"139c2e02-2c20-4a21-a5c0-753c6003473b\") " pod="openstack/horizon-7df8489788-ntn24" Oct 13 17:40:02 crc kubenswrapper[4720]: I1013 17:40:02.013946 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/27768d75-429c-45c3-bf03-98527e94fe63-config-data\") pod \"horizon-7984dcc5d8-8c2ss\" (UID: \"27768d75-429c-45c3-bf03-98527e94fe63\") " pod="openstack/horizon-7984dcc5d8-8c2ss" Oct 13 17:40:02 crc kubenswrapper[4720]: I1013 17:40:02.014030 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27768d75-429c-45c3-bf03-98527e94fe63-combined-ca-bundle\") pod \"horizon-7984dcc5d8-8c2ss\" (UID: \"27768d75-429c-45c3-bf03-98527e94fe63\") " pod="openstack/horizon-7984dcc5d8-8c2ss" Oct 13 17:40:02 crc kubenswrapper[4720]: I1013 17:40:02.014093 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d99pz\" (UniqueName: \"kubernetes.io/projected/27768d75-429c-45c3-bf03-98527e94fe63-kube-api-access-d99pz\") pod \"horizon-7984dcc5d8-8c2ss\" (UID: \"27768d75-429c-45c3-bf03-98527e94fe63\") " pod="openstack/horizon-7984dcc5d8-8c2ss" Oct 13 17:40:02 crc kubenswrapper[4720]: I1013 17:40:02.014115 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/27768d75-429c-45c3-bf03-98527e94fe63-horizon-tls-certs\") pod \"horizon-7984dcc5d8-8c2ss\" (UID: \"27768d75-429c-45c3-bf03-98527e94fe63\") " pod="openstack/horizon-7984dcc5d8-8c2ss" Oct 13 17:40:02 crc kubenswrapper[4720]: I1013 17:40:02.014132 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/27768d75-429c-45c3-bf03-98527e94fe63-horizon-secret-key\") pod \"horizon-7984dcc5d8-8c2ss\" (UID: \"27768d75-429c-45c3-bf03-98527e94fe63\") " pod="openstack/horizon-7984dcc5d8-8c2ss" Oct 13 17:40:02 crc kubenswrapper[4720]: I1013 17:40:02.014151 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/27768d75-429c-45c3-bf03-98527e94fe63-scripts\") pod \"horizon-7984dcc5d8-8c2ss\" (UID: \"27768d75-429c-45c3-bf03-98527e94fe63\") " pod="openstack/horizon-7984dcc5d8-8c2ss" 
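Annotation: the surrounding entries trace the kubelet volume reconciler through its normal lifecycle for the new glance, keystone, and horizon pods. "operationExecutor.VerifyControllerAttachedVolume started" is logged first for each declared volume, "operationExecutor.MountVolume started" follows once the pod worker claims it, and "MountVolume.SetUp succeeded" (plus "MountVolume.MountDevice succeeded" for the local-volume PV) confirms the mount; teardown runs the mirror-image sequence of "operationExecutor.UnmountVolume started", "UnmountVolume.TearDown succeeded", and "Volume detached". Below is a minimal sketch for recovering that per-pod timeline from a saved copy of this journal; the file name kubelet.log is hypothetical, and the patterns assume only the quoted volume/pod fields and wrapped journalctl lines visible in this excerpt.

```python
import re
from collections import defaultdict

# Lifecycle phases exactly as they appear in the kubelet messages in this journal.
PHASES = (
    "operationExecutor.VerifyControllerAttachedVolume started",
    "operationExecutor.MountVolume started",
    "MountVolume.MountDevice succeeded",
    "MountVolume.SetUp succeeded",
    "operationExecutor.UnmountVolume started",
    "UnmountVolume.TearDown succeeded",
    "Volume detached",
)

# Volume names are quoted (sometimes with escaped quotes) right after the word
# 'volume'; most entries also carry a trailing pod="namespace/name" field.
VOL_RE = re.compile(r'volume \\?"([^"\\]+)\\?"')
POD_RE = re.compile(r'pod="([^"]+)"')
ENTRY_SPLIT = re.compile(r'(?=Oct 13 \d{2}:\d{2}:\d{2} crc kubenswrapper)')

def volume_timeline(path):
    """Group volume lifecycle phases by pod, in the order they were logged."""
    # journalctl wraps long entries across lines, so collapse all whitespace
    # first, then split the stream wherever a new timestamped entry begins.
    text = " ".join(open(path, encoding="utf-8").read().split())
    timeline = defaultdict(list)
    for entry in ENTRY_SPLIT.split(text):
        phase = next((p for p in PHASES if p in entry), None)
        if phase is None:
            continue
        vol = VOL_RE.search(entry)
        pod = POD_RE.search(entry)
        key = pod.group(1) if pod else "<node-scope>"
        if vol:
            timeline[key].append((phase, vol.group(1)))
    return timeline

if __name__ == "__main__":
    for pod, events in volume_timeline("kubelet.log").items():  # hypothetical file name
        print(pod)
        for phase, volume in events:
            print(f"  {phase}: {volume}")
```

Run against this excerpt, it would print, for example, openstack/horizon-7984dcc5d8-8c2ss followed by its seven volumes (logs, config-data, combined-ca-bundle, kube-api-access-d99pz, horizon-tls-certs, horizon-secret-key, scripts) stepping through VerifyControllerAttachedVolume, MountVolume started, and MountVolume.SetUp succeeded, which makes it easy to spot a volume that never reaches SetUp.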
Oct 13 17:40:02 crc kubenswrapper[4720]: I1013 17:40:02.014217 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27768d75-429c-45c3-bf03-98527e94fe63-logs\") pod \"horizon-7984dcc5d8-8c2ss\" (UID: \"27768d75-429c-45c3-bf03-98527e94fe63\") " pod="openstack/horizon-7984dcc5d8-8c2ss" Oct 13 17:40:02 crc kubenswrapper[4720]: I1013 17:40:02.014666 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27768d75-429c-45c3-bf03-98527e94fe63-logs\") pod \"horizon-7984dcc5d8-8c2ss\" (UID: \"27768d75-429c-45c3-bf03-98527e94fe63\") " pod="openstack/horizon-7984dcc5d8-8c2ss" Oct 13 17:40:02 crc kubenswrapper[4720]: I1013 17:40:02.015262 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/27768d75-429c-45c3-bf03-98527e94fe63-config-data\") pod \"horizon-7984dcc5d8-8c2ss\" (UID: \"27768d75-429c-45c3-bf03-98527e94fe63\") " pod="openstack/horizon-7984dcc5d8-8c2ss" Oct 13 17:40:02 crc kubenswrapper[4720]: I1013 17:40:02.015369 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/27768d75-429c-45c3-bf03-98527e94fe63-scripts\") pod \"horizon-7984dcc5d8-8c2ss\" (UID: \"27768d75-429c-45c3-bf03-98527e94fe63\") " pod="openstack/horizon-7984dcc5d8-8c2ss" Oct 13 17:40:02 crc kubenswrapper[4720]: I1013 17:40:02.017806 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/27768d75-429c-45c3-bf03-98527e94fe63-horizon-tls-certs\") pod \"horizon-7984dcc5d8-8c2ss\" (UID: \"27768d75-429c-45c3-bf03-98527e94fe63\") " pod="openstack/horizon-7984dcc5d8-8c2ss" Oct 13 17:40:02 crc kubenswrapper[4720]: I1013 17:40:02.018063 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/27768d75-429c-45c3-bf03-98527e94fe63-horizon-secret-key\") pod \"horizon-7984dcc5d8-8c2ss\" (UID: \"27768d75-429c-45c3-bf03-98527e94fe63\") " pod="openstack/horizon-7984dcc5d8-8c2ss" Oct 13 17:40:02 crc kubenswrapper[4720]: I1013 17:40:02.018645 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27768d75-429c-45c3-bf03-98527e94fe63-combined-ca-bundle\") pod \"horizon-7984dcc5d8-8c2ss\" (UID: \"27768d75-429c-45c3-bf03-98527e94fe63\") " pod="openstack/horizon-7984dcc5d8-8c2ss" Oct 13 17:40:02 crc kubenswrapper[4720]: I1013 17:40:02.033170 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d99pz\" (UniqueName: \"kubernetes.io/projected/27768d75-429c-45c3-bf03-98527e94fe63-kube-api-access-d99pz\") pod \"horizon-7984dcc5d8-8c2ss\" (UID: \"27768d75-429c-45c3-bf03-98527e94fe63\") " pod="openstack/horizon-7984dcc5d8-8c2ss" Oct 13 17:40:02 crc kubenswrapper[4720]: I1013 17:40:02.087793 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7df8489788-ntn24" Oct 13 17:40:02 crc kubenswrapper[4720]: I1013 17:40:02.197884 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7984dcc5d8-8c2ss" Oct 13 17:40:04 crc kubenswrapper[4720]: E1013 17:40:04.162054 4720 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Oct 13 17:40:04 crc kubenswrapper[4720]: E1013 17:40:04.162435 4720 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n7dh54dhc5h644hf9hcfh67ch667h5b9h84h5d7h55hcbh675h696h5ch68ch5dh574h6ch578h688h74h656h6bh588hc9h6ch55fhddh69h95q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gxn7j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-565dc5fd65-jxjns_openstack(182cb87f-6f29-43c2-932e-f9de187d4fa0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 13 17:40:04 crc kubenswrapper[4720]: E1013 17:40:04.176114 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-565dc5fd65-jxjns" podUID="182cb87f-6f29-43c2-932e-f9de187d4fa0" Oct 13 17:40:06 crc kubenswrapper[4720]: E1013 17:40:06.302839 4720 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" Oct 13 17:40:06 crc kubenswrapper[4720]: E1013 17:40:06.304649 4720 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vnndp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-2htwh_openstack(444e35b8-1d2a-4d83-be6c-2184ae0e3110): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 13 17:40:06 crc kubenswrapper[4720]: E1013 17:40:06.305952 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-2htwh" podUID="444e35b8-1d2a-4d83-be6c-2184ae0e3110" Oct 13 17:40:06 crc kubenswrapper[4720]: E1013 17:40:06.323367 4720 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Oct 13 17:40:06 crc kubenswrapper[4720]: E1013 17:40:06.323584 4720 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n89h68ch59chb7h57ch555h588h5fdh7bh599hcch5f7h5fdh64h56bh64dhd6h94h589hbdh68bh569h584h644h86h5d4h589h6bhcdh649h5bch5ddq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rz4dv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-bf8c4749f-wv7s9_openstack(0711763b-3ada-4180-b697-ad4911f48641): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 13 17:40:06 crc kubenswrapper[4720]: E1013 17:40:06.336260 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-bf8c4749f-wv7s9" podUID="0711763b-3ada-4180-b697-ad4911f48641" Oct 13 17:40:06 crc kubenswrapper[4720]: I1013 17:40:06.448940 4720 scope.go:117] "RemoveContainer" containerID="3165279ec9273ca2d005d41ed3529d0b2e0edcdbd3b7d718119379ab5f0776b2" Oct 13 17:40:06 crc kubenswrapper[4720]: I1013 17:40:06.814220 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-hqcjj" Oct 13 17:40:06 crc kubenswrapper[4720]: I1013 17:40:06.854242 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 13 17:40:06 crc kubenswrapper[4720]: I1013 17:40:06.903286 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-565dc5fd65-jxjns" Oct 13 17:40:06 crc kubenswrapper[4720]: I1013 17:40:06.928314 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9db54922-09ac-46bd-85ad-aad0b3e8738b-logs\") pod \"9db54922-09ac-46bd-85ad-aad0b3e8738b\" (UID: \"9db54922-09ac-46bd-85ad-aad0b3e8738b\") " Oct 13 17:40:06 crc kubenswrapper[4720]: I1013 17:40:06.928388 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pnt9k\" (UniqueName: \"kubernetes.io/projected/6720cae9-e687-497d-955d-53e36250c8a4-kube-api-access-pnt9k\") pod \"6720cae9-e687-497d-955d-53e36250c8a4\" (UID: \"6720cae9-e687-497d-955d-53e36250c8a4\") " Oct 13 17:40:06 crc kubenswrapper[4720]: I1013 17:40:06.928421 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6720cae9-e687-497d-955d-53e36250c8a4-ovsdbserver-sb\") pod \"6720cae9-e687-497d-955d-53e36250c8a4\" (UID: \"6720cae9-e687-497d-955d-53e36250c8a4\") " Oct 13 17:40:06 crc kubenswrapper[4720]: I1013 17:40:06.928439 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6720cae9-e687-497d-955d-53e36250c8a4-dns-swift-storage-0\") pod \"6720cae9-e687-497d-955d-53e36250c8a4\" (UID: \"6720cae9-e687-497d-955d-53e36250c8a4\") " Oct 13 17:40:06 crc kubenswrapper[4720]: I1013 17:40:06.928458 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9db54922-09ac-46bd-85ad-aad0b3e8738b-combined-ca-bundle\") pod \"9db54922-09ac-46bd-85ad-aad0b3e8738b\" (UID: \"9db54922-09ac-46bd-85ad-aad0b3e8738b\") " Oct 13 17:40:06 crc kubenswrapper[4720]: I1013 17:40:06.928533 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"9db54922-09ac-46bd-85ad-aad0b3e8738b\" (UID: \"9db54922-09ac-46bd-85ad-aad0b3e8738b\") " Oct 13 17:40:06 crc kubenswrapper[4720]: I1013 17:40:06.928569 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6720cae9-e687-497d-955d-53e36250c8a4-ovsdbserver-nb\") pod \"6720cae9-e687-497d-955d-53e36250c8a4\" (UID: \"6720cae9-e687-497d-955d-53e36250c8a4\") " Oct 13 17:40:06 crc kubenswrapper[4720]: I1013 17:40:06.928589 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9db54922-09ac-46bd-85ad-aad0b3e8738b-scripts\") pod \"9db54922-09ac-46bd-85ad-aad0b3e8738b\" (UID: \"9db54922-09ac-46bd-85ad-aad0b3e8738b\") " Oct 13 17:40:06 crc kubenswrapper[4720]: I1013 17:40:06.928605 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6720cae9-e687-497d-955d-53e36250c8a4-dns-svc\") pod \"6720cae9-e687-497d-955d-53e36250c8a4\" (UID: \"6720cae9-e687-497d-955d-53e36250c8a4\") " Oct 13 17:40:06 crc kubenswrapper[4720]: I1013 17:40:06.928643 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9db54922-09ac-46bd-85ad-aad0b3e8738b-config-data\") pod \"9db54922-09ac-46bd-85ad-aad0b3e8738b\" (UID: \"9db54922-09ac-46bd-85ad-aad0b3e8738b\") " Oct 13 17:40:06 
crc kubenswrapper[4720]: I1013 17:40:06.928682 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6720cae9-e687-497d-955d-53e36250c8a4-config\") pod \"6720cae9-e687-497d-955d-53e36250c8a4\" (UID: \"6720cae9-e687-497d-955d-53e36250c8a4\") " Oct 13 17:40:06 crc kubenswrapper[4720]: I1013 17:40:06.928740 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9db54922-09ac-46bd-85ad-aad0b3e8738b-httpd-run\") pod \"9db54922-09ac-46bd-85ad-aad0b3e8738b\" (UID: \"9db54922-09ac-46bd-85ad-aad0b3e8738b\") " Oct 13 17:40:06 crc kubenswrapper[4720]: I1013 17:40:06.928755 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7246n\" (UniqueName: \"kubernetes.io/projected/9db54922-09ac-46bd-85ad-aad0b3e8738b-kube-api-access-7246n\") pod \"9db54922-09ac-46bd-85ad-aad0b3e8738b\" (UID: \"9db54922-09ac-46bd-85ad-aad0b3e8738b\") " Oct 13 17:40:06 crc kubenswrapper[4720]: I1013 17:40:06.931420 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9db54922-09ac-46bd-85ad-aad0b3e8738b-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "9db54922-09ac-46bd-85ad-aad0b3e8738b" (UID: "9db54922-09ac-46bd-85ad-aad0b3e8738b"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 17:40:06 crc kubenswrapper[4720]: I1013 17:40:06.931563 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9db54922-09ac-46bd-85ad-aad0b3e8738b-logs" (OuterVolumeSpecName: "logs") pod "9db54922-09ac-46bd-85ad-aad0b3e8738b" (UID: "9db54922-09ac-46bd-85ad-aad0b3e8738b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 17:40:06 crc kubenswrapper[4720]: I1013 17:40:06.936328 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9db54922-09ac-46bd-85ad-aad0b3e8738b-kube-api-access-7246n" (OuterVolumeSpecName: "kube-api-access-7246n") pod "9db54922-09ac-46bd-85ad-aad0b3e8738b" (UID: "9db54922-09ac-46bd-85ad-aad0b3e8738b"). InnerVolumeSpecName "kube-api-access-7246n". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:40:06 crc kubenswrapper[4720]: I1013 17:40:06.936795 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "9db54922-09ac-46bd-85ad-aad0b3e8738b" (UID: "9db54922-09ac-46bd-85ad-aad0b3e8738b"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 13 17:40:06 crc kubenswrapper[4720]: I1013 17:40:06.939850 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6720cae9-e687-497d-955d-53e36250c8a4-kube-api-access-pnt9k" (OuterVolumeSpecName: "kube-api-access-pnt9k") pod "6720cae9-e687-497d-955d-53e36250c8a4" (UID: "6720cae9-e687-497d-955d-53e36250c8a4"). InnerVolumeSpecName "kube-api-access-pnt9k". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:40:06 crc kubenswrapper[4720]: I1013 17:40:06.941820 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9db54922-09ac-46bd-85ad-aad0b3e8738b-scripts" (OuterVolumeSpecName: "scripts") pod "9db54922-09ac-46bd-85ad-aad0b3e8738b" (UID: "9db54922-09ac-46bd-85ad-aad0b3e8738b"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:40:06 crc kubenswrapper[4720]: I1013 17:40:06.977969 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9db54922-09ac-46bd-85ad-aad0b3e8738b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9db54922-09ac-46bd-85ad-aad0b3e8738b" (UID: "9db54922-09ac-46bd-85ad-aad0b3e8738b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.003306 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6720cae9-e687-497d-955d-53e36250c8a4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6720cae9-e687-497d-955d-53e36250c8a4" (UID: "6720cae9-e687-497d-955d-53e36250c8a4"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.006005 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6720cae9-e687-497d-955d-53e36250c8a4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6720cae9-e687-497d-955d-53e36250c8a4" (UID: "6720cae9-e687-497d-955d-53e36250c8a4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.006654 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6720cae9-e687-497d-955d-53e36250c8a4-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6720cae9-e687-497d-955d-53e36250c8a4" (UID: "6720cae9-e687-497d-955d-53e36250c8a4"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.011729 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-fd8d-account-create-v5dkv"] Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.013697 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6720cae9-e687-497d-955d-53e36250c8a4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6720cae9-e687-497d-955d-53e36250c8a4" (UID: "6720cae9-e687-497d-955d-53e36250c8a4"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.017715 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6720cae9-e687-497d-955d-53e36250c8a4-config" (OuterVolumeSpecName: "config") pod "6720cae9-e687-497d-955d-53e36250c8a4" (UID: "6720cae9-e687-497d-955d-53e36250c8a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.020718 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-8c7e-account-create-ftbcl"] Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.024977 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9db54922-09ac-46bd-85ad-aad0b3e8738b-config-data" (OuterVolumeSpecName: "config-data") pod "9db54922-09ac-46bd-85ad-aad0b3e8738b" (UID: "9db54922-09ac-46bd-85ad-aad0b3e8738b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.029740 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/182cb87f-6f29-43c2-932e-f9de187d4fa0-scripts\") pod \"182cb87f-6f29-43c2-932e-f9de187d4fa0\" (UID: \"182cb87f-6f29-43c2-932e-f9de187d4fa0\") " Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.029841 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/182cb87f-6f29-43c2-932e-f9de187d4fa0-horizon-secret-key\") pod \"182cb87f-6f29-43c2-932e-f9de187d4fa0\" (UID: \"182cb87f-6f29-43c2-932e-f9de187d4fa0\") " Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.029932 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxn7j\" (UniqueName: \"kubernetes.io/projected/182cb87f-6f29-43c2-932e-f9de187d4fa0-kube-api-access-gxn7j\") pod \"182cb87f-6f29-43c2-932e-f9de187d4fa0\" (UID: \"182cb87f-6f29-43c2-932e-f9de187d4fa0\") " Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.029970 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/182cb87f-6f29-43c2-932e-f9de187d4fa0-logs\") pod \"182cb87f-6f29-43c2-932e-f9de187d4fa0\" (UID: \"182cb87f-6f29-43c2-932e-f9de187d4fa0\") " Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.029991 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/182cb87f-6f29-43c2-932e-f9de187d4fa0-config-data\") pod \"182cb87f-6f29-43c2-932e-f9de187d4fa0\" (UID: \"182cb87f-6f29-43c2-932e-f9de187d4fa0\") " Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.030844 4720 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9db54922-09ac-46bd-85ad-aad0b3e8738b-httpd-run\") on node \"crc\" DevicePath \"\"" Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.030859 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7246n\" (UniqueName: \"kubernetes.io/projected/9db54922-09ac-46bd-85ad-aad0b3e8738b-kube-api-access-7246n\") on node \"crc\" DevicePath \"\"" Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.030868 4720 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9db54922-09ac-46bd-85ad-aad0b3e8738b-logs\") on node \"crc\" DevicePath \"\"" Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.030876 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pnt9k\" (UniqueName: \"kubernetes.io/projected/6720cae9-e687-497d-955d-53e36250c8a4-kube-api-access-pnt9k\") on node \"crc\" DevicePath \"\"" Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.030885 4720 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6720cae9-e687-497d-955d-53e36250c8a4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.030893 4720 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6720cae9-e687-497d-955d-53e36250c8a4-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.030902 4720 reconciler_common.go:293] "Volume detached for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9db54922-09ac-46bd-85ad-aad0b3e8738b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.030921 4720 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.030931 4720 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6720cae9-e687-497d-955d-53e36250c8a4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.030940 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9db54922-09ac-46bd-85ad-aad0b3e8738b-scripts\") on node \"crc\" DevicePath \"\"" Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.030948 4720 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6720cae9-e687-497d-955d-53e36250c8a4-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.030956 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9db54922-09ac-46bd-85ad-aad0b3e8738b-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.030964 4720 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6720cae9-e687-497d-955d-53e36250c8a4-config\") on node \"crc\" DevicePath \"\"" Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.032971 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/182cb87f-6f29-43c2-932e-f9de187d4fa0-logs" (OuterVolumeSpecName: "logs") pod "182cb87f-6f29-43c2-932e-f9de187d4fa0" (UID: "182cb87f-6f29-43c2-932e-f9de187d4fa0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.033099 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/182cb87f-6f29-43c2-932e-f9de187d4fa0-scripts" (OuterVolumeSpecName: "scripts") pod "182cb87f-6f29-43c2-932e-f9de187d4fa0" (UID: "182cb87f-6f29-43c2-932e-f9de187d4fa0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.033130 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/182cb87f-6f29-43c2-932e-f9de187d4fa0-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "182cb87f-6f29-43c2-932e-f9de187d4fa0" (UID: "182cb87f-6f29-43c2-932e-f9de187d4fa0"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.033872 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/182cb87f-6f29-43c2-932e-f9de187d4fa0-config-data" (OuterVolumeSpecName: "config-data") pod "182cb87f-6f29-43c2-932e-f9de187d4fa0" (UID: "182cb87f-6f29-43c2-932e-f9de187d4fa0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.043216 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/182cb87f-6f29-43c2-932e-f9de187d4fa0-kube-api-access-gxn7j" (OuterVolumeSpecName: "kube-api-access-gxn7j") pod "182cb87f-6f29-43c2-932e-f9de187d4fa0" (UID: "182cb87f-6f29-43c2-932e-f9de187d4fa0"). InnerVolumeSpecName "kube-api-access-gxn7j". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.049465 4720 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.133273 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gxn7j\" (UniqueName: \"kubernetes.io/projected/182cb87f-6f29-43c2-932e-f9de187d4fa0-kube-api-access-gxn7j\") on node \"crc\" DevicePath \"\"" Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.133302 4720 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/182cb87f-6f29-43c2-932e-f9de187d4fa0-logs\") on node \"crc\" DevicePath \"\"" Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.133313 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/182cb87f-6f29-43c2-932e-f9de187d4fa0-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.133322 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/182cb87f-6f29-43c2-932e-f9de187d4fa0-scripts\") on node \"crc\" DevicePath \"\"" Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.133331 4720 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.133339 4720 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/182cb87f-6f29-43c2-932e-f9de187d4fa0-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.156799 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-1089-account-create-rz4lq"] Oct 13 17:40:07 crc kubenswrapper[4720]: W1013 17:40:07.174552 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5cab6f53_adba_4474_b8b6_195faff8e193.slice/crio-e9f730278a70bd002f9784dc80cf7b92a12207e744e06f4496f271e6bebde485 WatchSource:0}: Error finding container e9f730278a70bd002f9784dc80cf7b92a12207e744e06f4496f271e6bebde485: Status 404 returned error can't find the container with id e9f730278a70bd002f9784dc80cf7b92a12207e744e06f4496f271e6bebde485 Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.188025 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-2z4fc"] Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.206023 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7984dcc5d8-8c2ss"] Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.231165 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7df8489788-ntn24"] Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 
17:40:07.252781 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 13 17:40:07 crc kubenswrapper[4720]: W1013 17:40:07.265544 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod79085d79_21f8_43c4_af7c_e51e0c9f9610.slice/crio-7384e7933bfabacefdb8c9bf8acb66d9d36e47f70504472b91f3575fe67547f0 WatchSource:0}: Error finding container 7384e7933bfabacefdb8c9bf8acb66d9d36e47f70504472b91f3575fe67547f0: Status 404 returned error can't find the container with id 7384e7933bfabacefdb8c9bf8acb66d9d36e47f70504472b91f3575fe67547f0 Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.266813 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2z4fc" event={"ID":"15d273d6-ce41-4aeb-88e1-42a1f9423737","Type":"ContainerStarted","Data":"df36b53fdae7087a001b1d299f510a8dc73dd7d5d2174fea030c0c7fbeeaf738"} Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.267844 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7984dcc5d8-8c2ss" event={"ID":"27768d75-429c-45c3-bf03-98527e94fe63","Type":"ContainerStarted","Data":"63dcb5338dcd42a866b85dd2882f06affa2ea9c38a5c32eb16e453cfeb39a756"} Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.269214 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6d3623f2-ec27-4ad6-8cb4-553cb0527e15","Type":"ContainerStarted","Data":"0edb2dae877587f39ad2f0b1c67183bdd28566eb1418aee78cff788789e0dd9c"} Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.271144 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-hqcjj" Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.272057 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-hqcjj" event={"ID":"6720cae9-e687-497d-955d-53e36250c8a4","Type":"ContainerDied","Data":"e05f52b43ed93d0335fc2a9761a4c3086de82864fddbb564d08d8f5e9f6d57d1"} Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.272123 4720 scope.go:117] "RemoveContainer" containerID="b0002647c37f7785f53d93639f39f1283c44c9eb5f43c35de676c9b8cb38b83b" Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.276579 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-fd8d-account-create-v5dkv" event={"ID":"b6edabb2-1018-4d9a-b43f-2414235bbfdc","Type":"ContainerStarted","Data":"8f508b9803991b29ca92bc309a82aa31aac4204d4be5a41f8589fe1b3a36d365"} Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.276605 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-fd8d-account-create-v5dkv" event={"ID":"b6edabb2-1018-4d9a-b43f-2414235bbfdc","Type":"ContainerStarted","Data":"b86aa2abfe0a63ff2e4e6acf9cbcbd6a787c8901e72b112f32b925765b5bd594"} Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.279376 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7df8489788-ntn24" event={"ID":"139c2e02-2c20-4a21-a5c0-753c6003473b","Type":"ContainerStarted","Data":"f706f305954c24412cc62e6c0afba1dd5060e9a161d6edfb94a7e2ba190d826c"} Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.284866 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-1089-account-create-rz4lq" event={"ID":"5cab6f53-adba-4474-b8b6-195faff8e193","Type":"ContainerStarted","Data":"e9f730278a70bd002f9784dc80cf7b92a12207e744e06f4496f271e6bebde485"} Oct 13 17:40:07 crc 
kubenswrapper[4720]: I1013 17:40:07.293941 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-fd8d-account-create-v5dkv" podStartSLOduration=13.293920492 podStartE2EDuration="13.293920492s" podCreationTimestamp="2025-10-13 17:39:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:40:07.288587465 +0000 UTC m=+952.745837597" watchObservedRunningTime="2025-10-13 17:40:07.293920492 +0000 UTC m=+952.751170624" Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.319069 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-hqcjj"] Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.319762 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9db54922-09ac-46bd-85ad-aad0b3e8738b","Type":"ContainerDied","Data":"9b0fc072061b170b8f7d6b5fd92963b76bd0d6187d0bc435ad11e259d25cd885"} Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.319898 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.325262 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-hqcjj"] Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.327493 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-8c7e-account-create-ftbcl" event={"ID":"77e2a869-ae9a-47cd-973e-bc3597a2365d","Type":"ContainerStarted","Data":"75dcef59c7735fa127edc067f5f3fc3a1cc6c701e8cb9453bb54444af59e1a16"} Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.327522 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-8c7e-account-create-ftbcl" event={"ID":"77e2a869-ae9a-47cd-973e-bc3597a2365d","Type":"ContainerStarted","Data":"050023d216272455ddda02dc159a36002a75c13054f969a0335e84dcc914eb5c"} Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.335440 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-565dc5fd65-jxjns" event={"ID":"182cb87f-6f29-43c2-932e-f9de187d4fa0","Type":"ContainerDied","Data":"e7f2334caaa4976bd25aa50d0df920e1cddf04438a8131f3dc3f7614080208f7"} Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.335528 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-565dc5fd65-jxjns" Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.337483 4720 scope.go:117] "RemoveContainer" containerID="a3d1cacbdf268225b85cc828e14e7021e9ec58cb45e446ceecdf9930b389d2f9" Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.341430 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7bf7d9f5bf-6khmt" podUID="02414a49-0001-4f13-97cb-641937473fb6" containerName="horizon-log" containerID="cri-o://376eab0bc0716b64ad3f2fd2a966e08f673cb6d62ce53f42923577d22092150f" gracePeriod=30 Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.342541 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7bf7d9f5bf-6khmt" event={"ID":"02414a49-0001-4f13-97cb-641937473fb6","Type":"ContainerStarted","Data":"c14fb6faa8b00dd9e510c736f924ec85392019e51fc549152333fb08188c5cf7"} Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.342589 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7bf7d9f5bf-6khmt" event={"ID":"02414a49-0001-4f13-97cb-641937473fb6","Type":"ContainerStarted","Data":"376eab0bc0716b64ad3f2fd2a966e08f673cb6d62ce53f42923577d22092150f"} Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.342718 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7bf7d9f5bf-6khmt" podUID="02414a49-0001-4f13-97cb-641937473fb6" containerName="horizon" containerID="cri-o://c14fb6faa8b00dd9e510c736f924ec85392019e51fc549152333fb08188c5cf7" gracePeriod=30 Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.350702 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-8c7e-account-create-ftbcl" podStartSLOduration=13.350662762 podStartE2EDuration="13.350662762s" podCreationTimestamp="2025-10-13 17:39:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:40:07.344120524 +0000 UTC m=+952.801370656" watchObservedRunningTime="2025-10-13 17:40:07.350662762 +0000 UTC m=+952.807912894" Oct 13 17:40:07 crc kubenswrapper[4720]: E1013 17:40:07.356467 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-2htwh" podUID="444e35b8-1d2a-4d83-be6c-2184ae0e3110" Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.401005 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7bf7d9f5bf-6khmt" podStartSLOduration=2.223851786 podStartE2EDuration="16.400970986s" podCreationTimestamp="2025-10-13 17:39:51 +0000 UTC" firstStartedPulling="2025-10-13 17:39:52.254138125 +0000 UTC m=+937.711388257" lastFinishedPulling="2025-10-13 17:40:06.431257285 +0000 UTC m=+951.888507457" observedRunningTime="2025-10-13 17:40:07.390102227 +0000 UTC m=+952.847352359" watchObservedRunningTime="2025-10-13 17:40:07.400970986 +0000 UTC m=+952.858221118" Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.611100 4720 scope.go:117] "RemoveContainer" containerID="4dc194e29078e4f1faded9436f3abc1e52b23a8774480dfb1c4cf02b144282a7" Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.722748 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-bf8c4749f-wv7s9" Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.725583 4720 scope.go:117] "RemoveContainer" containerID="98129028d4b908a7ef367e4b9c80a052538c6c6c1ce35796676bdb423cdaa835" Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.740409 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-565dc5fd65-jxjns"] Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.743244 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-565dc5fd65-jxjns"] Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.755245 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.764693 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.783051 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 13 17:40:07 crc kubenswrapper[4720]: E1013 17:40:07.783563 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9db54922-09ac-46bd-85ad-aad0b3e8738b" containerName="glance-httpd" Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.783578 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="9db54922-09ac-46bd-85ad-aad0b3e8738b" containerName="glance-httpd" Oct 13 17:40:07 crc kubenswrapper[4720]: E1013 17:40:07.783590 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6720cae9-e687-497d-955d-53e36250c8a4" containerName="dnsmasq-dns" Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.783595 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="6720cae9-e687-497d-955d-53e36250c8a4" containerName="dnsmasq-dns" Oct 13 17:40:07 crc kubenswrapper[4720]: E1013 17:40:07.783612 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9db54922-09ac-46bd-85ad-aad0b3e8738b" containerName="glance-log" Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.783618 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="9db54922-09ac-46bd-85ad-aad0b3e8738b" containerName="glance-log" Oct 13 17:40:07 crc kubenswrapper[4720]: E1013 17:40:07.783649 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6720cae9-e687-497d-955d-53e36250c8a4" containerName="init" Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.783654 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="6720cae9-e687-497d-955d-53e36250c8a4" containerName="init" Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.783826 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="9db54922-09ac-46bd-85ad-aad0b3e8738b" containerName="glance-httpd" Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.783841 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="6720cae9-e687-497d-955d-53e36250c8a4" containerName="dnsmasq-dns" Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.783854 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="9db54922-09ac-46bd-85ad-aad0b3e8738b" containerName="glance-log" Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.784936 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.792933 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.793152 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.796637 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.846798 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0711763b-3ada-4180-b697-ad4911f48641-scripts\") pod \"0711763b-3ada-4180-b697-ad4911f48641\" (UID: \"0711763b-3ada-4180-b697-ad4911f48641\") " Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.846842 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rz4dv\" (UniqueName: \"kubernetes.io/projected/0711763b-3ada-4180-b697-ad4911f48641-kube-api-access-rz4dv\") pod \"0711763b-3ada-4180-b697-ad4911f48641\" (UID: \"0711763b-3ada-4180-b697-ad4911f48641\") " Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.846912 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0711763b-3ada-4180-b697-ad4911f48641-horizon-secret-key\") pod \"0711763b-3ada-4180-b697-ad4911f48641\" (UID: \"0711763b-3ada-4180-b697-ad4911f48641\") " Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.847037 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0711763b-3ada-4180-b697-ad4911f48641-config-data\") pod \"0711763b-3ada-4180-b697-ad4911f48641\" (UID: \"0711763b-3ada-4180-b697-ad4911f48641\") " Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.847124 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0711763b-3ada-4180-b697-ad4911f48641-logs\") pod \"0711763b-3ada-4180-b697-ad4911f48641\" (UID: \"0711763b-3ada-4180-b697-ad4911f48641\") " Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.847370 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1742c95-4906-4545-8e62-38903c72b168-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f1742c95-4906-4545-8e62-38903c72b168\") " pod="openstack/glance-default-internal-api-0" Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.847395 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8nwk\" (UniqueName: \"kubernetes.io/projected/f1742c95-4906-4545-8e62-38903c72b168-kube-api-access-f8nwk\") pod \"glance-default-internal-api-0\" (UID: \"f1742c95-4906-4545-8e62-38903c72b168\") " pod="openstack/glance-default-internal-api-0" Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.847447 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1742c95-4906-4545-8e62-38903c72b168-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f1742c95-4906-4545-8e62-38903c72b168\") " 
pod="openstack/glance-default-internal-api-0" Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.847472 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"f1742c95-4906-4545-8e62-38903c72b168\") " pod="openstack/glance-default-internal-api-0" Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.847495 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f1742c95-4906-4545-8e62-38903c72b168-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f1742c95-4906-4545-8e62-38903c72b168\") " pod="openstack/glance-default-internal-api-0" Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.847523 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1742c95-4906-4545-8e62-38903c72b168-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f1742c95-4906-4545-8e62-38903c72b168\") " pod="openstack/glance-default-internal-api-0" Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.847582 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1742c95-4906-4545-8e62-38903c72b168-logs\") pod \"glance-default-internal-api-0\" (UID: \"f1742c95-4906-4545-8e62-38903c72b168\") " pod="openstack/glance-default-internal-api-0" Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.847628 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1742c95-4906-4545-8e62-38903c72b168-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f1742c95-4906-4545-8e62-38903c72b168\") " pod="openstack/glance-default-internal-api-0" Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.848328 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0711763b-3ada-4180-b697-ad4911f48641-scripts" (OuterVolumeSpecName: "scripts") pod "0711763b-3ada-4180-b697-ad4911f48641" (UID: "0711763b-3ada-4180-b697-ad4911f48641"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.849873 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0711763b-3ada-4180-b697-ad4911f48641-logs" (OuterVolumeSpecName: "logs") pod "0711763b-3ada-4180-b697-ad4911f48641" (UID: "0711763b-3ada-4180-b697-ad4911f48641"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.849940 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0711763b-3ada-4180-b697-ad4911f48641-config-data" (OuterVolumeSpecName: "config-data") pod "0711763b-3ada-4180-b697-ad4911f48641" (UID: "0711763b-3ada-4180-b697-ad4911f48641"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.856211 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0711763b-3ada-4180-b697-ad4911f48641-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "0711763b-3ada-4180-b697-ad4911f48641" (UID: "0711763b-3ada-4180-b697-ad4911f48641"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.857475 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0711763b-3ada-4180-b697-ad4911f48641-kube-api-access-rz4dv" (OuterVolumeSpecName: "kube-api-access-rz4dv") pod "0711763b-3ada-4180-b697-ad4911f48641" (UID: "0711763b-3ada-4180-b697-ad4911f48641"). InnerVolumeSpecName "kube-api-access-rz4dv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.949160 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1742c95-4906-4545-8e62-38903c72b168-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f1742c95-4906-4545-8e62-38903c72b168\") " pod="openstack/glance-default-internal-api-0" Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.949258 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1742c95-4906-4545-8e62-38903c72b168-logs\") pod \"glance-default-internal-api-0\" (UID: \"f1742c95-4906-4545-8e62-38903c72b168\") " pod="openstack/glance-default-internal-api-0" Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.949332 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1742c95-4906-4545-8e62-38903c72b168-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f1742c95-4906-4545-8e62-38903c72b168\") " pod="openstack/glance-default-internal-api-0" Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.949369 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1742c95-4906-4545-8e62-38903c72b168-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f1742c95-4906-4545-8e62-38903c72b168\") " pod="openstack/glance-default-internal-api-0" Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.949402 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8nwk\" (UniqueName: \"kubernetes.io/projected/f1742c95-4906-4545-8e62-38903c72b168-kube-api-access-f8nwk\") pod \"glance-default-internal-api-0\" (UID: \"f1742c95-4906-4545-8e62-38903c72b168\") " pod="openstack/glance-default-internal-api-0" Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.949466 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1742c95-4906-4545-8e62-38903c72b168-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f1742c95-4906-4545-8e62-38903c72b168\") " pod="openstack/glance-default-internal-api-0" Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.949501 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod 
\"glance-default-internal-api-0\" (UID: \"f1742c95-4906-4545-8e62-38903c72b168\") " pod="openstack/glance-default-internal-api-0" Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.949531 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f1742c95-4906-4545-8e62-38903c72b168-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f1742c95-4906-4545-8e62-38903c72b168\") " pod="openstack/glance-default-internal-api-0" Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.949585 4720 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0711763b-3ada-4180-b697-ad4911f48641-logs\") on node \"crc\" DevicePath \"\"" Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.949602 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0711763b-3ada-4180-b697-ad4911f48641-scripts\") on node \"crc\" DevicePath \"\"" Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.949613 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rz4dv\" (UniqueName: \"kubernetes.io/projected/0711763b-3ada-4180-b697-ad4911f48641-kube-api-access-rz4dv\") on node \"crc\" DevicePath \"\"" Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.949626 4720 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0711763b-3ada-4180-b697-ad4911f48641-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.949637 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0711763b-3ada-4180-b697-ad4911f48641-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.950109 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f1742c95-4906-4545-8e62-38903c72b168-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f1742c95-4906-4545-8e62-38903c72b168\") " pod="openstack/glance-default-internal-api-0" Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.952682 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1742c95-4906-4545-8e62-38903c72b168-logs\") pod \"glance-default-internal-api-0\" (UID: \"f1742c95-4906-4545-8e62-38903c72b168\") " pod="openstack/glance-default-internal-api-0" Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.957361 4720 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"f1742c95-4906-4545-8e62-38903c72b168\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-internal-api-0" Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.962893 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1742c95-4906-4545-8e62-38903c72b168-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f1742c95-4906-4545-8e62-38903c72b168\") " pod="openstack/glance-default-internal-api-0" Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.969732 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/f1742c95-4906-4545-8e62-38903c72b168-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f1742c95-4906-4545-8e62-38903c72b168\") " pod="openstack/glance-default-internal-api-0" Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.970297 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1742c95-4906-4545-8e62-38903c72b168-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f1742c95-4906-4545-8e62-38903c72b168\") " pod="openstack/glance-default-internal-api-0" Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.972948 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8nwk\" (UniqueName: \"kubernetes.io/projected/f1742c95-4906-4545-8e62-38903c72b168-kube-api-access-f8nwk\") pod \"glance-default-internal-api-0\" (UID: \"f1742c95-4906-4545-8e62-38903c72b168\") " pod="openstack/glance-default-internal-api-0" Oct 13 17:40:07 crc kubenswrapper[4720]: I1013 17:40:07.973713 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1742c95-4906-4545-8e62-38903c72b168-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f1742c95-4906-4545-8e62-38903c72b168\") " pod="openstack/glance-default-internal-api-0" Oct 13 17:40:08 crc kubenswrapper[4720]: I1013 17:40:08.003007 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"f1742c95-4906-4545-8e62-38903c72b168\") " pod="openstack/glance-default-internal-api-0" Oct 13 17:40:08 crc kubenswrapper[4720]: I1013 17:40:08.135807 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 13 17:40:08 crc kubenswrapper[4720]: I1013 17:40:08.360511 4720 generic.go:334] "Generic (PLEG): container finished" podID="77e2a869-ae9a-47cd-973e-bc3597a2365d" containerID="75dcef59c7735fa127edc067f5f3fc3a1cc6c701e8cb9453bb54444af59e1a16" exitCode=0 Oct 13 17:40:08 crc kubenswrapper[4720]: I1013 17:40:08.361135 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-8c7e-account-create-ftbcl" event={"ID":"77e2a869-ae9a-47cd-973e-bc3597a2365d","Type":"ContainerDied","Data":"75dcef59c7735fa127edc067f5f3fc3a1cc6c701e8cb9453bb54444af59e1a16"} Oct 13 17:40:08 crc kubenswrapper[4720]: I1013 17:40:08.364157 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-bf8c4749f-wv7s9" event={"ID":"0711763b-3ada-4180-b697-ad4911f48641","Type":"ContainerDied","Data":"367dccf13badc2c4820f82eb098afb455f16c3e05532f7afe1e7f7d32fdf0f22"} Oct 13 17:40:08 crc kubenswrapper[4720]: I1013 17:40:08.364219 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-bf8c4749f-wv7s9" Oct 13 17:40:08 crc kubenswrapper[4720]: I1013 17:40:08.371776 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7df8489788-ntn24" event={"ID":"139c2e02-2c20-4a21-a5c0-753c6003473b","Type":"ContainerStarted","Data":"b16a8f5b551805315d0632eef1b1948aa4930fa2db29ab6d2fe042db2c832eb1"} Oct 13 17:40:08 crc kubenswrapper[4720]: I1013 17:40:08.371823 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7df8489788-ntn24" event={"ID":"139c2e02-2c20-4a21-a5c0-753c6003473b","Type":"ContainerStarted","Data":"10c21702a32bc872627030d27e43d3321339dec03c70d87460aa37a2d88b5a48"} Oct 13 17:40:08 crc kubenswrapper[4720]: I1013 17:40:08.373515 4720 generic.go:334] "Generic (PLEG): container finished" podID="5cab6f53-adba-4474-b8b6-195faff8e193" containerID="4a52864d77f90957914d2b23ad16fd2881818561c80e5d4b153da9f3e7f9ffec" exitCode=0 Oct 13 17:40:08 crc kubenswrapper[4720]: I1013 17:40:08.373556 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-1089-account-create-rz4lq" event={"ID":"5cab6f53-adba-4474-b8b6-195faff8e193","Type":"ContainerDied","Data":"4a52864d77f90957914d2b23ad16fd2881818561c80e5d4b153da9f3e7f9ffec"} Oct 13 17:40:08 crc kubenswrapper[4720]: I1013 17:40:08.375594 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"79085d79-21f8-43c4-af7c-e51e0c9f9610","Type":"ContainerStarted","Data":"002eaf81489c47e66c4852c35e3cff8128157a07136836938b0c061973cc7bf5"} Oct 13 17:40:08 crc kubenswrapper[4720]: I1013 17:40:08.375619 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"79085d79-21f8-43c4-af7c-e51e0c9f9610","Type":"ContainerStarted","Data":"7384e7933bfabacefdb8c9bf8acb66d9d36e47f70504472b91f3575fe67547f0"} Oct 13 17:40:08 crc kubenswrapper[4720]: I1013 17:40:08.378430 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2z4fc" event={"ID":"15d273d6-ce41-4aeb-88e1-42a1f9423737","Type":"ContainerStarted","Data":"7ddce86b334fb8ab710d0db40f4fdd4b0f5396c166c8c2e50cfdc6fe7e47ab0b"} Oct 13 17:40:08 crc kubenswrapper[4720]: I1013 17:40:08.380142 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7984dcc5d8-8c2ss" event={"ID":"27768d75-429c-45c3-bf03-98527e94fe63","Type":"ContainerStarted","Data":"2cc8fe2fe7c476b95fe7893083ccedeedbad604576ac6eb7016834b099d79bfd"} Oct 13 17:40:08 crc kubenswrapper[4720]: I1013 17:40:08.384722 4720 generic.go:334] "Generic (PLEG): container finished" podID="b6edabb2-1018-4d9a-b43f-2414235bbfdc" containerID="8f508b9803991b29ca92bc309a82aa31aac4204d4be5a41f8589fe1b3a36d365" exitCode=0 Oct 13 17:40:08 crc kubenswrapper[4720]: I1013 17:40:08.384791 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-fd8d-account-create-v5dkv" event={"ID":"b6edabb2-1018-4d9a-b43f-2414235bbfdc","Type":"ContainerDied","Data":"8f508b9803991b29ca92bc309a82aa31aac4204d4be5a41f8589fe1b3a36d365"} Oct 13 17:40:08 crc kubenswrapper[4720]: I1013 17:40:08.470492 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7df8489788-ntn24" podStartSLOduration=7.470467956 podStartE2EDuration="7.470467956s" podCreationTimestamp="2025-10-13 17:40:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:40:08.426913055 +0000 UTC 
m=+953.884163177" watchObservedRunningTime="2025-10-13 17:40:08.470467956 +0000 UTC m=+953.927718088" Oct 13 17:40:08 crc kubenswrapper[4720]: I1013 17:40:08.535763 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-2z4fc" podStartSLOduration=11.535654223 podStartE2EDuration="11.535654223s" podCreationTimestamp="2025-10-13 17:39:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:40:08.454934886 +0000 UTC m=+953.912185018" watchObservedRunningTime="2025-10-13 17:40:08.535654223 +0000 UTC m=+953.992904355" Oct 13 17:40:08 crc kubenswrapper[4720]: I1013 17:40:08.686330 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-bf8c4749f-wv7s9"] Oct 13 17:40:08 crc kubenswrapper[4720]: I1013 17:40:08.697770 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-bf8c4749f-wv7s9"] Oct 13 17:40:09 crc kubenswrapper[4720]: I1013 17:40:09.092371 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 13 17:40:09 crc kubenswrapper[4720]: W1013 17:40:09.106739 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf1742c95_4906_4545_8e62_38903c72b168.slice/crio-9c5e3d5537d68d093fa80b7305900cbac5ce46c2bdf8a7147c033bd39365de58 WatchSource:0}: Error finding container 9c5e3d5537d68d093fa80b7305900cbac5ce46c2bdf8a7147c033bd39365de58: Status 404 returned error can't find the container with id 9c5e3d5537d68d093fa80b7305900cbac5ce46c2bdf8a7147c033bd39365de58 Oct 13 17:40:09 crc kubenswrapper[4720]: I1013 17:40:09.183899 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0711763b-3ada-4180-b697-ad4911f48641" path="/var/lib/kubelet/pods/0711763b-3ada-4180-b697-ad4911f48641/volumes" Oct 13 17:40:09 crc kubenswrapper[4720]: I1013 17:40:09.184401 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="182cb87f-6f29-43c2-932e-f9de187d4fa0" path="/var/lib/kubelet/pods/182cb87f-6f29-43c2-932e-f9de187d4fa0/volumes" Oct 13 17:40:09 crc kubenswrapper[4720]: I1013 17:40:09.184754 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6720cae9-e687-497d-955d-53e36250c8a4" path="/var/lib/kubelet/pods/6720cae9-e687-497d-955d-53e36250c8a4/volumes" Oct 13 17:40:09 crc kubenswrapper[4720]: I1013 17:40:09.185664 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9db54922-09ac-46bd-85ad-aad0b3e8738b" path="/var/lib/kubelet/pods/9db54922-09ac-46bd-85ad-aad0b3e8738b/volumes" Oct 13 17:40:09 crc kubenswrapper[4720]: I1013 17:40:09.408019 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7984dcc5d8-8c2ss" event={"ID":"27768d75-429c-45c3-bf03-98527e94fe63","Type":"ContainerStarted","Data":"91c88fdebf2ced15e507981fba01c7e6eeeaa1a72110355332b7ea8c62ed0252"} Oct 13 17:40:09 crc kubenswrapper[4720]: I1013 17:40:09.417382 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f1742c95-4906-4545-8e62-38903c72b168","Type":"ContainerStarted","Data":"9c5e3d5537d68d093fa80b7305900cbac5ce46c2bdf8a7147c033bd39365de58"} Oct 13 17:40:09 crc kubenswrapper[4720]: I1013 17:40:09.425377 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"6d3623f2-ec27-4ad6-8cb4-553cb0527e15","Type":"ContainerStarted","Data":"f9e8de6048bec5a4704aa9fca47068f43b9d94f967db4928fc0e4d17006ef487"} Oct 13 17:40:09 crc kubenswrapper[4720]: I1013 17:40:09.432392 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="79085d79-21f8-43c4-af7c-e51e0c9f9610" containerName="glance-log" containerID="cri-o://002eaf81489c47e66c4852c35e3cff8128157a07136836938b0c061973cc7bf5" gracePeriod=30 Oct 13 17:40:09 crc kubenswrapper[4720]: I1013 17:40:09.432564 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="79085d79-21f8-43c4-af7c-e51e0c9f9610" containerName="glance-httpd" containerID="cri-o://2cf47d3ffa4c9d3bb7757ac6393136e864808f3f5fce368cdfa3c04d1067490c" gracePeriod=30 Oct 13 17:40:09 crc kubenswrapper[4720]: I1013 17:40:09.432872 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"79085d79-21f8-43c4-af7c-e51e0c9f9610","Type":"ContainerStarted","Data":"2cf47d3ffa4c9d3bb7757ac6393136e864808f3f5fce368cdfa3c04d1067490c"} Oct 13 17:40:09 crc kubenswrapper[4720]: I1013 17:40:09.447071 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7984dcc5d8-8c2ss" podStartSLOduration=8.447057205 podStartE2EDuration="8.447057205s" podCreationTimestamp="2025-10-13 17:40:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:40:09.440707811 +0000 UTC m=+954.897957933" watchObservedRunningTime="2025-10-13 17:40:09.447057205 +0000 UTC m=+954.904307337" Oct 13 17:40:09 crc kubenswrapper[4720]: I1013 17:40:09.472427 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=12.472398697 podStartE2EDuration="12.472398697s" podCreationTimestamp="2025-10-13 17:39:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:40:09.467607533 +0000 UTC m=+954.924857675" watchObservedRunningTime="2025-10-13 17:40:09.472398697 +0000 UTC m=+954.929648829" Oct 13 17:40:09 crc kubenswrapper[4720]: I1013 17:40:09.857775 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-8c7e-account-create-ftbcl" Oct 13 17:40:09 crc kubenswrapper[4720]: I1013 17:40:09.903258 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-fd8d-account-create-v5dkv" Oct 13 17:40:09 crc kubenswrapper[4720]: I1013 17:40:09.908101 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4h94\" (UniqueName: \"kubernetes.io/projected/77e2a869-ae9a-47cd-973e-bc3597a2365d-kube-api-access-d4h94\") pod \"77e2a869-ae9a-47cd-973e-bc3597a2365d\" (UID: \"77e2a869-ae9a-47cd-973e-bc3597a2365d\") " Oct 13 17:40:09 crc kubenswrapper[4720]: I1013 17:40:09.914510 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77e2a869-ae9a-47cd-973e-bc3597a2365d-kube-api-access-d4h94" (OuterVolumeSpecName: "kube-api-access-d4h94") pod "77e2a869-ae9a-47cd-973e-bc3597a2365d" (UID: "77e2a869-ae9a-47cd-973e-bc3597a2365d"). InnerVolumeSpecName "kube-api-access-d4h94". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:40:09 crc kubenswrapper[4720]: I1013 17:40:09.937613 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-1089-account-create-rz4lq" Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.009220 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6t8cg\" (UniqueName: \"kubernetes.io/projected/b6edabb2-1018-4d9a-b43f-2414235bbfdc-kube-api-access-6t8cg\") pod \"b6edabb2-1018-4d9a-b43f-2414235bbfdc\" (UID: \"b6edabb2-1018-4d9a-b43f-2414235bbfdc\") " Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.009270 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x74zh\" (UniqueName: \"kubernetes.io/projected/5cab6f53-adba-4474-b8b6-195faff8e193-kube-api-access-x74zh\") pod \"5cab6f53-adba-4474-b8b6-195faff8e193\" (UID: \"5cab6f53-adba-4474-b8b6-195faff8e193\") " Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.009821 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4h94\" (UniqueName: \"kubernetes.io/projected/77e2a869-ae9a-47cd-973e-bc3597a2365d-kube-api-access-d4h94\") on node \"crc\" DevicePath \"\"" Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.013949 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cab6f53-adba-4474-b8b6-195faff8e193-kube-api-access-x74zh" (OuterVolumeSpecName: "kube-api-access-x74zh") pod "5cab6f53-adba-4474-b8b6-195faff8e193" (UID: "5cab6f53-adba-4474-b8b6-195faff8e193"). InnerVolumeSpecName "kube-api-access-x74zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.020217 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6edabb2-1018-4d9a-b43f-2414235bbfdc-kube-api-access-6t8cg" (OuterVolumeSpecName: "kube-api-access-6t8cg") pod "b6edabb2-1018-4d9a-b43f-2414235bbfdc" (UID: "b6edabb2-1018-4d9a-b43f-2414235bbfdc"). InnerVolumeSpecName "kube-api-access-6t8cg". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.111633 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6t8cg\" (UniqueName: \"kubernetes.io/projected/b6edabb2-1018-4d9a-b43f-2414235bbfdc-kube-api-access-6t8cg\") on node \"crc\" DevicePath \"\"" Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.111665 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x74zh\" (UniqueName: \"kubernetes.io/projected/5cab6f53-adba-4474-b8b6-195faff8e193-kube-api-access-x74zh\") on node \"crc\" DevicePath \"\"" Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.262274 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.314932 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldgwl\" (UniqueName: \"kubernetes.io/projected/79085d79-21f8-43c4-af7c-e51e0c9f9610-kube-api-access-ldgwl\") pod \"79085d79-21f8-43c4-af7c-e51e0c9f9610\" (UID: \"79085d79-21f8-43c4-af7c-e51e0c9f9610\") " Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.314995 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79085d79-21f8-43c4-af7c-e51e0c9f9610-scripts\") pod \"79085d79-21f8-43c4-af7c-e51e0c9f9610\" (UID: \"79085d79-21f8-43c4-af7c-e51e0c9f9610\") " Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.315026 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/79085d79-21f8-43c4-af7c-e51e0c9f9610-httpd-run\") pod \"79085d79-21f8-43c4-af7c-e51e0c9f9610\" (UID: \"79085d79-21f8-43c4-af7c-e51e0c9f9610\") " Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.315072 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79085d79-21f8-43c4-af7c-e51e0c9f9610-combined-ca-bundle\") pod \"79085d79-21f8-43c4-af7c-e51e0c9f9610\" (UID: \"79085d79-21f8-43c4-af7c-e51e0c9f9610\") " Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.315096 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"79085d79-21f8-43c4-af7c-e51e0c9f9610\" (UID: \"79085d79-21f8-43c4-af7c-e51e0c9f9610\") " Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.315132 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79085d79-21f8-43c4-af7c-e51e0c9f9610-logs\") pod \"79085d79-21f8-43c4-af7c-e51e0c9f9610\" (UID: \"79085d79-21f8-43c4-af7c-e51e0c9f9610\") " Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.315240 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79085d79-21f8-43c4-af7c-e51e0c9f9610-config-data\") pod \"79085d79-21f8-43c4-af7c-e51e0c9f9610\" (UID: \"79085d79-21f8-43c4-af7c-e51e0c9f9610\") " Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.316664 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79085d79-21f8-43c4-af7c-e51e0c9f9610-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "79085d79-21f8-43c4-af7c-e51e0c9f9610" (UID: "79085d79-21f8-43c4-af7c-e51e0c9f9610"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.316746 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79085d79-21f8-43c4-af7c-e51e0c9f9610-logs" (OuterVolumeSpecName: "logs") pod "79085d79-21f8-43c4-af7c-e51e0c9f9610" (UID: "79085d79-21f8-43c4-af7c-e51e0c9f9610"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.323486 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79085d79-21f8-43c4-af7c-e51e0c9f9610-scripts" (OuterVolumeSpecName: "scripts") pod "79085d79-21f8-43c4-af7c-e51e0c9f9610" (UID: "79085d79-21f8-43c4-af7c-e51e0c9f9610"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.323858 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79085d79-21f8-43c4-af7c-e51e0c9f9610-kube-api-access-ldgwl" (OuterVolumeSpecName: "kube-api-access-ldgwl") pod "79085d79-21f8-43c4-af7c-e51e0c9f9610" (UID: "79085d79-21f8-43c4-af7c-e51e0c9f9610"). InnerVolumeSpecName "kube-api-access-ldgwl". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.337290 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "79085d79-21f8-43c4-af7c-e51e0c9f9610" (UID: "79085d79-21f8-43c4-af7c-e51e0c9f9610"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.352156 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79085d79-21f8-43c4-af7c-e51e0c9f9610-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "79085d79-21f8-43c4-af7c-e51e0c9f9610" (UID: "79085d79-21f8-43c4-af7c-e51e0c9f9610"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.376363 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79085d79-21f8-43c4-af7c-e51e0c9f9610-config-data" (OuterVolumeSpecName: "config-data") pod "79085d79-21f8-43c4-af7c-e51e0c9f9610" (UID: "79085d79-21f8-43c4-af7c-e51e0c9f9610"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.417220 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79085d79-21f8-43c4-af7c-e51e0c9f9610-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.417251 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ldgwl\" (UniqueName: \"kubernetes.io/projected/79085d79-21f8-43c4-af7c-e51e0c9f9610-kube-api-access-ldgwl\") on node \"crc\" DevicePath \"\"" Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.417261 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79085d79-21f8-43c4-af7c-e51e0c9f9610-scripts\") on node \"crc\" DevicePath \"\"" Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.417268 4720 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/79085d79-21f8-43c4-af7c-e51e0c9f9610-httpd-run\") on node \"crc\" DevicePath \"\"" Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.417277 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79085d79-21f8-43c4-af7c-e51e0c9f9610-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.417300 4720 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.417309 4720 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79085d79-21f8-43c4-af7c-e51e0c9f9610-logs\") on node \"crc\" DevicePath \"\"" Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.436531 4720 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.445694 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-8c7e-account-create-ftbcl" event={"ID":"77e2a869-ae9a-47cd-973e-bc3597a2365d","Type":"ContainerDied","Data":"050023d216272455ddda02dc159a36002a75c13054f969a0335e84dcc914eb5c"} Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.445724 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="050023d216272455ddda02dc159a36002a75c13054f969a0335e84dcc914eb5c" Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.445773 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-8c7e-account-create-ftbcl"
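
The sequence above is the kubelet's normal volume teardown for a deleted pod: reconciler_common.go logs "UnmountVolume started" for each volume, operation_generator.go confirms "UnmountVolume.TearDown succeeded", the reconciler then reports "Volume detached", and the local PV gets a final UnmountDevice pass. One way to verify from a journal that every teardown completed is to pair the start and detach messages by their UniqueName. The Go filter below is a minimal sketch of that check, not kubelet tooling; the file name volgrep.go and the journalctl pipeline in the usage note are illustrative assumptions.

    // volgrep.go -- pair "UnmountVolume started" with "Volume detached"
    // by UniqueName and report teardowns that never completed.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    func main() {
        // Volume names are quoted as \"name\" in the journal text, so the
        // patterns match the backslash-escaped quotes literally.
        started := regexp.MustCompile(`UnmountVolume started for volume \\"[^"\\]+\\" \(UniqueName: \\"([^"\\]+)\\"\)`)
        detached := regexp.MustCompile(`Volume detached for volume \\"[^"\\]+\\" \(UniqueName: \\"([^"\\]+)\\"\)`)
        pending := map[string]bool{}

        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines are long
        for sc.Scan() {
            line := sc.Text()
            for _, m := range started.FindAllStringSubmatch(line, -1) {
                pending[m[1]] = true
            }
            for _, m := range detached.FindAllStringSubmatch(line, -1) {
                delete(pending, m[1])
            }
        }
        for v := range pending {
            fmt.Println("teardown never confirmed for:", v)
        }
    }

Run as, say, journalctl -u kubelet | go run volgrep.go; for the pod above it should print nothing, since all seven volumes reach "Volume detached" (note the local volume is started under its spec name "glance" but detached as "local-storage05-crc", which is why the sketch keys on UniqueName rather than the display name).
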
Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.455670 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f1742c95-4906-4545-8e62-38903c72b168","Type":"ContainerStarted","Data":"11524c98eb61e0537ae1d6d91addce7769b6a1193e5f00f1994585c7e37277f3"} Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.457566 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-fd8d-account-create-v5dkv" event={"ID":"b6edabb2-1018-4d9a-b43f-2414235bbfdc","Type":"ContainerDied","Data":"b86aa2abfe0a63ff2e4e6acf9cbcbd6a787c8901e72b112f32b925765b5bd594"} Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.457583 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b86aa2abfe0a63ff2e4e6acf9cbcbd6a787c8901e72b112f32b925765b5bd594" Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.457621 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-fd8d-account-create-v5dkv" Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.459099 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-1089-account-create-rz4lq" event={"ID":"5cab6f53-adba-4474-b8b6-195faff8e193","Type":"ContainerDied","Data":"e9f730278a70bd002f9784dc80cf7b92a12207e744e06f4496f271e6bebde485"} Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.459134 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-1089-account-create-rz4lq" Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.459142 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e9f730278a70bd002f9784dc80cf7b92a12207e744e06f4496f271e6bebde485" Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.464924 4720 generic.go:334] "Generic (PLEG): container finished" podID="79085d79-21f8-43c4-af7c-e51e0c9f9610" containerID="2cf47d3ffa4c9d3bb7757ac6393136e864808f3f5fce368cdfa3c04d1067490c" exitCode=143 Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.465050 4720 generic.go:334] "Generic (PLEG): container finished" podID="79085d79-21f8-43c4-af7c-e51e0c9f9610" containerID="002eaf81489c47e66c4852c35e3cff8128157a07136836938b0c061973cc7bf5" exitCode=143 Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.466311 4720 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.471253 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"79085d79-21f8-43c4-af7c-e51e0c9f9610","Type":"ContainerDied","Data":"2cf47d3ffa4c9d3bb7757ac6393136e864808f3f5fce368cdfa3c04d1067490c"} Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.486671 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"79085d79-21f8-43c4-af7c-e51e0c9f9610","Type":"ContainerDied","Data":"002eaf81489c47e66c4852c35e3cff8128157a07136836938b0c061973cc7bf5"} Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.486713 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"79085d79-21f8-43c4-af7c-e51e0c9f9610","Type":"ContainerDied","Data":"7384e7933bfabacefdb8c9bf8acb66d9d36e47f70504472b91f3575fe67547f0"} Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.486735 4720 scope.go:117] "RemoveContainer" containerID="2cf47d3ffa4c9d3bb7757ac6393136e864808f3f5fce368cdfa3c04d1067490c" Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.529926 4720 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.530313 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.556855 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.568246 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Oct 13 17:40:10 crc kubenswrapper[4720]: E1013 17:40:10.568606 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cab6f53-adba-4474-b8b6-195faff8e193" containerName="mariadb-account-create" Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.568619 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cab6f53-adba-4474-b8b6-195faff8e193" containerName="mariadb-account-create" Oct 13 17:40:10 crc kubenswrapper[4720]: E1013 17:40:10.568638 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79085d79-21f8-43c4-af7c-e51e0c9f9610" containerName="glance-log" Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.568647 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="79085d79-21f8-43c4-af7c-e51e0c9f9610" containerName="glance-log" Oct 13 17:40:10 crc kubenswrapper[4720]: E1013 17:40:10.568658 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6edabb2-1018-4d9a-b43f-2414235bbfdc" containerName="mariadb-account-create" Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.568666 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6edabb2-1018-4d9a-b43f-2414235bbfdc" containerName="mariadb-account-create" Oct 13 17:40:10 crc kubenswrapper[4720]: E1013 17:40:10.568681 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77e2a869-ae9a-47cd-973e-bc3597a2365d" containerName="mariadb-account-create" Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.568689 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="77e2a869-ae9a-47cd-973e-bc3597a2365d" containerName="mariadb-account-create" Oct 13 17:40:10 crc kubenswrapper[4720]: E1013 17:40:10.568697 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79085d79-21f8-43c4-af7c-e51e0c9f9610" containerName="glance-httpd"
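
Interleaved with the replacement pod's "SyncLoop ADD" are cpu_manager and memory_manager "RemoveStaleState" entries: on pod admission the resource managers sweep their checkpointed per-container assignments and drop every pod UID that is no longer active, here the deleted glance pod and the three finished account-create jobs. The toy sketch below shows the shape of that sweep; all names are invented and it is not the kubelet's cpu_manager.

    // stalestate.go -- drop checkpointed assignments for pods that are
    // no longer in the active set, mirroring the log entries above.
    package main

    import "fmt"

    // podUID -> containerName -> recorded assignment (e.g. a CPU set).
    type state map[string]map[string]string

    func removeStaleState(s state, active map[string]bool) {
        for podUID, containers := range s {
            if active[podUID] {
                continue
            }
            for name := range containers {
                fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n", podUID, name)
            }
            delete(s, podUID) // deleting while ranging is safe in Go
        }
    }

    func main() {
        s := state{
            "79085d79-21f8-43c4-af7c-e51e0c9f9610": {"glance-httpd": "cpus 0-1", "glance-log": "cpus 2"},
            "972b98ca-7012-422d-8839-a196b6a3b919": {"glance-httpd": "cpus 0-1"},
        }
        // Only the replacement pod is still active; the old UID is stale.
        removeStaleState(s, map[string]bool{"972b98ca-7012-422d-8839-a196b6a3b919": true})
        fmt.Println("pods left in state:", len(s))
    }
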
Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.568706 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="79085d79-21f8-43c4-af7c-e51e0c9f9610" containerName="glance-httpd" Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.568879 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="79085d79-21f8-43c4-af7c-e51e0c9f9610" containerName="glance-httpd" Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.568892 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="79085d79-21f8-43c4-af7c-e51e0c9f9610" containerName="glance-log" Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.568907 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cab6f53-adba-4474-b8b6-195faff8e193" containerName="mariadb-account-create" Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.568915 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="77e2a869-ae9a-47cd-973e-bc3597a2365d" containerName="mariadb-account-create" Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.568929 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6edabb2-1018-4d9a-b43f-2414235bbfdc" containerName="mariadb-account-create" Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.569874 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.575079 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.575124 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.575341 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.633974 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmw6l\" (UniqueName: \"kubernetes.io/projected/972b98ca-7012-422d-8839-a196b6a3b919-kube-api-access-pmw6l\") pod \"glance-default-external-api-0\" (UID: \"972b98ca-7012-422d-8839-a196b6a3b919\") " pod="openstack/glance-default-external-api-0" Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.634017 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/972b98ca-7012-422d-8839-a196b6a3b919-logs\") pod \"glance-default-external-api-0\" (UID: \"972b98ca-7012-422d-8839-a196b6a3b919\") " pod="openstack/glance-default-external-api-0" Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.634082 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/972b98ca-7012-422d-8839-a196b6a3b919-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"972b98ca-7012-422d-8839-a196b6a3b919\") " pod="openstack/glance-default-external-api-0" Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.634103 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName:
\"kubernetes.io/secret/972b98ca-7012-422d-8839-a196b6a3b919-config-data\") pod \"glance-default-external-api-0\" (UID: \"972b98ca-7012-422d-8839-a196b6a3b919\") " pod="openstack/glance-default-external-api-0" Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.634141 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/972b98ca-7012-422d-8839-a196b6a3b919-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"972b98ca-7012-422d-8839-a196b6a3b919\") " pod="openstack/glance-default-external-api-0" Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.634175 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/972b98ca-7012-422d-8839-a196b6a3b919-scripts\") pod \"glance-default-external-api-0\" (UID: \"972b98ca-7012-422d-8839-a196b6a3b919\") " pod="openstack/glance-default-external-api-0" Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.634215 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"972b98ca-7012-422d-8839-a196b6a3b919\") " pod="openstack/glance-default-external-api-0" Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.634235 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/972b98ca-7012-422d-8839-a196b6a3b919-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"972b98ca-7012-422d-8839-a196b6a3b919\") " pod="openstack/glance-default-external-api-0" Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.649861 4720 scope.go:117] "RemoveContainer" containerID="002eaf81489c47e66c4852c35e3cff8128157a07136836938b0c061973cc7bf5" Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.669020 4720 scope.go:117] "RemoveContainer" containerID="2cf47d3ffa4c9d3bb7757ac6393136e864808f3f5fce368cdfa3c04d1067490c" Oct 13 17:40:10 crc kubenswrapper[4720]: E1013 17:40:10.669507 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2cf47d3ffa4c9d3bb7757ac6393136e864808f3f5fce368cdfa3c04d1067490c\": container with ID starting with 2cf47d3ffa4c9d3bb7757ac6393136e864808f3f5fce368cdfa3c04d1067490c not found: ID does not exist" containerID="2cf47d3ffa4c9d3bb7757ac6393136e864808f3f5fce368cdfa3c04d1067490c" Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.669539 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cf47d3ffa4c9d3bb7757ac6393136e864808f3f5fce368cdfa3c04d1067490c"} err="failed to get container status \"2cf47d3ffa4c9d3bb7757ac6393136e864808f3f5fce368cdfa3c04d1067490c\": rpc error: code = NotFound desc = could not find container \"2cf47d3ffa4c9d3bb7757ac6393136e864808f3f5fce368cdfa3c04d1067490c\": container with ID starting with 2cf47d3ffa4c9d3bb7757ac6393136e864808f3f5fce368cdfa3c04d1067490c not found: ID does not exist" Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.669560 4720 scope.go:117] "RemoveContainer" containerID="002eaf81489c47e66c4852c35e3cff8128157a07136836938b0c061973cc7bf5" Oct 13 17:40:10 crc kubenswrapper[4720]: E1013 17:40:10.669973 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound 
desc = could not find container \"002eaf81489c47e66c4852c35e3cff8128157a07136836938b0c061973cc7bf5\": container with ID starting with 002eaf81489c47e66c4852c35e3cff8128157a07136836938b0c061973cc7bf5 not found: ID does not exist" containerID="002eaf81489c47e66c4852c35e3cff8128157a07136836938b0c061973cc7bf5" Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.669994 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"002eaf81489c47e66c4852c35e3cff8128157a07136836938b0c061973cc7bf5"} err="failed to get container status \"002eaf81489c47e66c4852c35e3cff8128157a07136836938b0c061973cc7bf5\": rpc error: code = NotFound desc = could not find container \"002eaf81489c47e66c4852c35e3cff8128157a07136836938b0c061973cc7bf5\": container with ID starting with 002eaf81489c47e66c4852c35e3cff8128157a07136836938b0c061973cc7bf5 not found: ID does not exist" Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.670006 4720 scope.go:117] "RemoveContainer" containerID="2cf47d3ffa4c9d3bb7757ac6393136e864808f3f5fce368cdfa3c04d1067490c" Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.670357 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cf47d3ffa4c9d3bb7757ac6393136e864808f3f5fce368cdfa3c04d1067490c"} err="failed to get container status \"2cf47d3ffa4c9d3bb7757ac6393136e864808f3f5fce368cdfa3c04d1067490c\": rpc error: code = NotFound desc = could not find container \"2cf47d3ffa4c9d3bb7757ac6393136e864808f3f5fce368cdfa3c04d1067490c\": container with ID starting with 2cf47d3ffa4c9d3bb7757ac6393136e864808f3f5fce368cdfa3c04d1067490c not found: ID does not exist" Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.670442 4720 scope.go:117] "RemoveContainer" containerID="002eaf81489c47e66c4852c35e3cff8128157a07136836938b0c061973cc7bf5" Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.670755 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"002eaf81489c47e66c4852c35e3cff8128157a07136836938b0c061973cc7bf5"} err="failed to get container status \"002eaf81489c47e66c4852c35e3cff8128157a07136836938b0c061973cc7bf5\": rpc error: code = NotFound desc = could not find container \"002eaf81489c47e66c4852c35e3cff8128157a07136836938b0c061973cc7bf5\": container with ID starting with 002eaf81489c47e66c4852c35e3cff8128157a07136836938b0c061973cc7bf5 not found: ID does not exist" Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.735876 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"972b98ca-7012-422d-8839-a196b6a3b919\") " pod="openstack/glance-default-external-api-0" Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.735937 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/972b98ca-7012-422d-8839-a196b6a3b919-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"972b98ca-7012-422d-8839-a196b6a3b919\") " pod="openstack/glance-default-external-api-0" Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.736035 4720 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"972b98ca-7012-422d-8839-a196b6a3b919\") device mount path 
\"/mnt/openstack/pv05\"" pod="openstack/glance-default-external-api-0" Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.736050 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmw6l\" (UniqueName: \"kubernetes.io/projected/972b98ca-7012-422d-8839-a196b6a3b919-kube-api-access-pmw6l\") pod \"glance-default-external-api-0\" (UID: \"972b98ca-7012-422d-8839-a196b6a3b919\") " pod="openstack/glance-default-external-api-0" Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.736085 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/972b98ca-7012-422d-8839-a196b6a3b919-logs\") pod \"glance-default-external-api-0\" (UID: \"972b98ca-7012-422d-8839-a196b6a3b919\") " pod="openstack/glance-default-external-api-0" Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.736145 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/972b98ca-7012-422d-8839-a196b6a3b919-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"972b98ca-7012-422d-8839-a196b6a3b919\") " pod="openstack/glance-default-external-api-0" Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.736178 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/972b98ca-7012-422d-8839-a196b6a3b919-config-data\") pod \"glance-default-external-api-0\" (UID: \"972b98ca-7012-422d-8839-a196b6a3b919\") " pod="openstack/glance-default-external-api-0" Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.736244 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/972b98ca-7012-422d-8839-a196b6a3b919-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"972b98ca-7012-422d-8839-a196b6a3b919\") " pod="openstack/glance-default-external-api-0" Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.736270 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/972b98ca-7012-422d-8839-a196b6a3b919-scripts\") pod \"glance-default-external-api-0\" (UID: \"972b98ca-7012-422d-8839-a196b6a3b919\") " pod="openstack/glance-default-external-api-0" Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.737531 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/972b98ca-7012-422d-8839-a196b6a3b919-logs\") pod \"glance-default-external-api-0\" (UID: \"972b98ca-7012-422d-8839-a196b6a3b919\") " pod="openstack/glance-default-external-api-0" Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.737711 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/972b98ca-7012-422d-8839-a196b6a3b919-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"972b98ca-7012-422d-8839-a196b6a3b919\") " pod="openstack/glance-default-external-api-0" Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.751452 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/972b98ca-7012-422d-8839-a196b6a3b919-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"972b98ca-7012-422d-8839-a196b6a3b919\") " pod="openstack/glance-default-external-api-0" Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 
17:40:10.751998 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/972b98ca-7012-422d-8839-a196b6a3b919-config-data\") pod \"glance-default-external-api-0\" (UID: \"972b98ca-7012-422d-8839-a196b6a3b919\") " pod="openstack/glance-default-external-api-0" Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.752815 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/972b98ca-7012-422d-8839-a196b6a3b919-scripts\") pod \"glance-default-external-api-0\" (UID: \"972b98ca-7012-422d-8839-a196b6a3b919\") " pod="openstack/glance-default-external-api-0" Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.752950 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/972b98ca-7012-422d-8839-a196b6a3b919-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"972b98ca-7012-422d-8839-a196b6a3b919\") " pod="openstack/glance-default-external-api-0" Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.760229 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmw6l\" (UniqueName: \"kubernetes.io/projected/972b98ca-7012-422d-8839-a196b6a3b919-kube-api-access-pmw6l\") pod \"glance-default-external-api-0\" (UID: \"972b98ca-7012-422d-8839-a196b6a3b919\") " pod="openstack/glance-default-external-api-0" Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.777953 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"972b98ca-7012-422d-8839-a196b6a3b919\") " pod="openstack/glance-default-external-api-0" Oct 13 17:40:10 crc kubenswrapper[4720]: I1013 17:40:10.890788 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 13 17:40:11 crc kubenswrapper[4720]: I1013 17:40:11.181909 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79085d79-21f8-43c4-af7c-e51e0c9f9610" path="/var/lib/kubelet/pods/79085d79-21f8-43c4-af7c-e51e0c9f9610/volumes" Oct 13 17:40:11 crc kubenswrapper[4720]: I1013 17:40:11.215292 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-77585f5f8c-hqcjj" podUID="6720cae9-e687-497d-955d-53e36250c8a4" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.127:5353: i/o timeout" Oct 13 17:40:11 crc kubenswrapper[4720]: I1013 17:40:11.471394 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 13 17:40:11 crc kubenswrapper[4720]: I1013 17:40:11.486553 4720 generic.go:334] "Generic (PLEG): container finished" podID="15d273d6-ce41-4aeb-88e1-42a1f9423737" containerID="7ddce86b334fb8ab710d0db40f4fdd4b0f5396c166c8c2e50cfdc6fe7e47ab0b" exitCode=0 Oct 13 17:40:11 crc kubenswrapper[4720]: I1013 17:40:11.486615 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2z4fc" event={"ID":"15d273d6-ce41-4aeb-88e1-42a1f9423737","Type":"ContainerDied","Data":"7ddce86b334fb8ab710d0db40f4fdd4b0f5396c166c8c2e50cfdc6fe7e47ab0b"} Oct 13 17:40:11 crc kubenswrapper[4720]: I1013 17:40:11.489240 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f1742c95-4906-4545-8e62-38903c72b168","Type":"ContainerStarted","Data":"08fb538daabd98d65e3554b1d509b5149739a281779328f80b240a2b7df44866"} Oct 13 17:40:11 crc kubenswrapper[4720]: I1013 17:40:11.522610 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.52259284 podStartE2EDuration="4.52259284s" podCreationTimestamp="2025-10-13 17:40:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:40:11.518170996 +0000 UTC m=+956.975421118" watchObservedRunningTime="2025-10-13 17:40:11.52259284 +0000 UTC m=+956.979842962" Oct 13 17:40:11 crc kubenswrapper[4720]: I1013 17:40:11.771892 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7bf7d9f5bf-6khmt" Oct 13 17:40:12 crc kubenswrapper[4720]: I1013 17:40:12.088048 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7df8489788-ntn24" Oct 13 17:40:12 crc kubenswrapper[4720]: I1013 17:40:12.088442 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7df8489788-ntn24" Oct 13 17:40:12 crc kubenswrapper[4720]: I1013 17:40:12.199426 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7984dcc5d8-8c2ss" Oct 13 17:40:12 crc kubenswrapper[4720]: I1013 17:40:12.199473 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7984dcc5d8-8c2ss" Oct 13 17:40:12 crc kubenswrapper[4720]: I1013 17:40:12.501451 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"972b98ca-7012-422d-8839-a196b6a3b919","Type":"ContainerStarted","Data":"1b16ae43e2c7123fd632986ea5110d58609a9175e2cef7c8826b4cf23c18a495"} Oct 13 17:40:14 crc kubenswrapper[4720]: I1013 17:40:14.344269 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-9dcbx"]
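
The pod_startup_latency_tracker entry above reports podStartSLOduration for glance-default-internal-api-0 as the gap between podCreationTimestamp (17:40:07) and the watch-observed running time (17:40:11.52259284); the m=+... suffixes are the kubelet process's monotonic clock readings, and the zero-valued pulling timestamps mean the image pull contributed nothing here. A minimal re-derivation of the logged 4.52259284s, assuming only the two timestamps printed in that entry (the file name sloduration.go is illustrative, not kubelet tooling):

    // sloduration.go -- recompute the startup duration from the
    // timestamps exactly as they appear in the log entry above.
    package main

    import (
        "fmt"
        "time"
    )

    // Layout matching "2025-10-13 17:40:07 +0000 UTC", with an
    // optional fractional-seconds part.
    const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

    func main() {
        created, err := time.Parse(layout, "2025-10-13 17:40:07 +0000 UTC")
        if err != nil {
            panic(err)
        }
        running, err := time.Parse(layout, "2025-10-13 17:40:11.52259284 +0000 UTC")
        if err != nil {
            panic(err)
        }
        // Prints 4.52259284s, matching podStartSLOduration above.
        fmt.Println("podStartSLOduration:", running.Sub(created))
    }
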
Oct 13 17:40:14 crc kubenswrapper[4720]: I1013 17:40:14.345846 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-9dcbx" Oct 13 17:40:14 crc kubenswrapper[4720]: I1013 17:40:14.350122 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-rv5gz" Oct 13 17:40:14 crc kubenswrapper[4720]: I1013 17:40:14.350300 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Oct 13 17:40:14 crc kubenswrapper[4720]: I1013 17:40:14.350518 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Oct 13 17:40:14 crc kubenswrapper[4720]: I1013 17:40:14.361692 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-9dcbx"] Oct 13 17:40:14 crc kubenswrapper[4720]: I1013 17:40:14.532516 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4kgh\" (UniqueName: \"kubernetes.io/projected/887aa549-67e8-4d03-acba-dede202496db-kube-api-access-k4kgh\") pod \"cinder-db-sync-9dcbx\" (UID: \"887aa549-67e8-4d03-acba-dede202496db\") " pod="openstack/cinder-db-sync-9dcbx" Oct 13 17:40:14 crc kubenswrapper[4720]: I1013 17:40:14.532558 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/887aa549-67e8-4d03-acba-dede202496db-combined-ca-bundle\") pod \"cinder-db-sync-9dcbx\" (UID: \"887aa549-67e8-4d03-acba-dede202496db\") " pod="openstack/cinder-db-sync-9dcbx" Oct 13 17:40:14 crc kubenswrapper[4720]: I1013 17:40:14.532627 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/887aa549-67e8-4d03-acba-dede202496db-config-data\") pod \"cinder-db-sync-9dcbx\" (UID: \"887aa549-67e8-4d03-acba-dede202496db\") " pod="openstack/cinder-db-sync-9dcbx" Oct 13 17:40:14 crc kubenswrapper[4720]: I1013 17:40:14.532661 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/887aa549-67e8-4d03-acba-dede202496db-etc-machine-id\") pod \"cinder-db-sync-9dcbx\" (UID: \"887aa549-67e8-4d03-acba-dede202496db\") " pod="openstack/cinder-db-sync-9dcbx" Oct 13 17:40:14 crc kubenswrapper[4720]: I1013 17:40:14.532708 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/887aa549-67e8-4d03-acba-dede202496db-scripts\") pod \"cinder-db-sync-9dcbx\" (UID: \"887aa549-67e8-4d03-acba-dede202496db\") " pod="openstack/cinder-db-sync-9dcbx" Oct 13 17:40:14 crc kubenswrapper[4720]: I1013 17:40:14.532731 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/887aa549-67e8-4d03-acba-dede202496db-db-sync-config-data\") pod \"cinder-db-sync-9dcbx\" (UID: \"887aa549-67e8-4d03-acba-dede202496db\") " pod="openstack/cinder-db-sync-9dcbx" Oct 13 17:40:14 crc kubenswrapper[4720]: I1013 17:40:14.634388 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/887aa549-67e8-4d03-acba-dede202496db-etc-machine-id\") pod \"cinder-db-sync-9dcbx\" (UID: \"887aa549-67e8-4d03-acba-dede202496db\")
" pod="openstack/cinder-db-sync-9dcbx" Oct 13 17:40:14 crc kubenswrapper[4720]: I1013 17:40:14.634479 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/887aa549-67e8-4d03-acba-dede202496db-scripts\") pod \"cinder-db-sync-9dcbx\" (UID: \"887aa549-67e8-4d03-acba-dede202496db\") " pod="openstack/cinder-db-sync-9dcbx" Oct 13 17:40:14 crc kubenswrapper[4720]: I1013 17:40:14.634507 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/887aa549-67e8-4d03-acba-dede202496db-db-sync-config-data\") pod \"cinder-db-sync-9dcbx\" (UID: \"887aa549-67e8-4d03-acba-dede202496db\") " pod="openstack/cinder-db-sync-9dcbx" Oct 13 17:40:14 crc kubenswrapper[4720]: I1013 17:40:14.634554 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4kgh\" (UniqueName: \"kubernetes.io/projected/887aa549-67e8-4d03-acba-dede202496db-kube-api-access-k4kgh\") pod \"cinder-db-sync-9dcbx\" (UID: \"887aa549-67e8-4d03-acba-dede202496db\") " pod="openstack/cinder-db-sync-9dcbx" Oct 13 17:40:14 crc kubenswrapper[4720]: I1013 17:40:14.634552 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/887aa549-67e8-4d03-acba-dede202496db-etc-machine-id\") pod \"cinder-db-sync-9dcbx\" (UID: \"887aa549-67e8-4d03-acba-dede202496db\") " pod="openstack/cinder-db-sync-9dcbx" Oct 13 17:40:14 crc kubenswrapper[4720]: I1013 17:40:14.634573 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/887aa549-67e8-4d03-acba-dede202496db-combined-ca-bundle\") pod \"cinder-db-sync-9dcbx\" (UID: \"887aa549-67e8-4d03-acba-dede202496db\") " pod="openstack/cinder-db-sync-9dcbx" Oct 13 17:40:14 crc kubenswrapper[4720]: I1013 17:40:14.634708 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/887aa549-67e8-4d03-acba-dede202496db-config-data\") pod \"cinder-db-sync-9dcbx\" (UID: \"887aa549-67e8-4d03-acba-dede202496db\") " pod="openstack/cinder-db-sync-9dcbx" Oct 13 17:40:14 crc kubenswrapper[4720]: I1013 17:40:14.640811 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/887aa549-67e8-4d03-acba-dede202496db-scripts\") pod \"cinder-db-sync-9dcbx\" (UID: \"887aa549-67e8-4d03-acba-dede202496db\") " pod="openstack/cinder-db-sync-9dcbx" Oct 13 17:40:14 crc kubenswrapper[4720]: I1013 17:40:14.641016 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/887aa549-67e8-4d03-acba-dede202496db-db-sync-config-data\") pod \"cinder-db-sync-9dcbx\" (UID: \"887aa549-67e8-4d03-acba-dede202496db\") " pod="openstack/cinder-db-sync-9dcbx" Oct 13 17:40:14 crc kubenswrapper[4720]: I1013 17:40:14.643383 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/887aa549-67e8-4d03-acba-dede202496db-combined-ca-bundle\") pod \"cinder-db-sync-9dcbx\" (UID: \"887aa549-67e8-4d03-acba-dede202496db\") " pod="openstack/cinder-db-sync-9dcbx" Oct 13 17:40:14 crc kubenswrapper[4720]: I1013 17:40:14.647369 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-fj7h4"] Oct 13 17:40:14 crc kubenswrapper[4720]: 
I1013 17:40:14.648765 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-fj7h4" Oct 13 17:40:14 crc kubenswrapper[4720]: I1013 17:40:14.657159 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-fj7h4"] Oct 13 17:40:14 crc kubenswrapper[4720]: I1013 17:40:14.657327 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Oct 13 17:40:14 crc kubenswrapper[4720]: I1013 17:40:14.657347 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-fj7nh" Oct 13 17:40:14 crc kubenswrapper[4720]: I1013 17:40:14.659029 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/887aa549-67e8-4d03-acba-dede202496db-config-data\") pod \"cinder-db-sync-9dcbx\" (UID: \"887aa549-67e8-4d03-acba-dede202496db\") " pod="openstack/cinder-db-sync-9dcbx" Oct 13 17:40:14 crc kubenswrapper[4720]: I1013 17:40:14.677698 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4kgh\" (UniqueName: \"kubernetes.io/projected/887aa549-67e8-4d03-acba-dede202496db-kube-api-access-k4kgh\") pod \"cinder-db-sync-9dcbx\" (UID: \"887aa549-67e8-4d03-acba-dede202496db\") " pod="openstack/cinder-db-sync-9dcbx" Oct 13 17:40:14 crc kubenswrapper[4720]: I1013 17:40:14.745820 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-gmms8"] Oct 13 17:40:14 crc kubenswrapper[4720]: I1013 17:40:14.749545 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-gmms8" Oct 13 17:40:14 crc kubenswrapper[4720]: I1013 17:40:14.753645 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-q7jmp" Oct 13 17:40:14 crc kubenswrapper[4720]: I1013 17:40:14.753783 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Oct 13 17:40:14 crc kubenswrapper[4720]: I1013 17:40:14.753838 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-gmms8"] Oct 13 17:40:14 crc kubenswrapper[4720]: I1013 17:40:14.753951 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Oct 13 17:40:14 crc kubenswrapper[4720]: I1013 17:40:14.841600 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwswq\" (UniqueName: \"kubernetes.io/projected/69895abb-fedb-4ef1-bd37-698c2384d0b0-kube-api-access-jwswq\") pod \"barbican-db-sync-fj7h4\" (UID: \"69895abb-fedb-4ef1-bd37-698c2384d0b0\") " pod="openstack/barbican-db-sync-fj7h4" Oct 13 17:40:14 crc kubenswrapper[4720]: I1013 17:40:14.841644 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/69895abb-fedb-4ef1-bd37-698c2384d0b0-db-sync-config-data\") pod \"barbican-db-sync-fj7h4\" (UID: \"69895abb-fedb-4ef1-bd37-698c2384d0b0\") " pod="openstack/barbican-db-sync-fj7h4" Oct 13 17:40:14 crc kubenswrapper[4720]: I1013 17:40:14.841754 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69895abb-fedb-4ef1-bd37-698c2384d0b0-combined-ca-bundle\") pod \"barbican-db-sync-fj7h4\" (UID: \"69895abb-fedb-4ef1-bd37-698c2384d0b0\") " 
pod="openstack/barbican-db-sync-fj7h4" Oct 13 17:40:14 crc kubenswrapper[4720]: I1013 17:40:14.942852 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2d841c9-2675-429d-9d02-09381a0d6f09-combined-ca-bundle\") pod \"neutron-db-sync-gmms8\" (UID: \"a2d841c9-2675-429d-9d02-09381a0d6f09\") " pod="openstack/neutron-db-sync-gmms8" Oct 13 17:40:14 crc kubenswrapper[4720]: I1013 17:40:14.942913 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a2d841c9-2675-429d-9d02-09381a0d6f09-config\") pod \"neutron-db-sync-gmms8\" (UID: \"a2d841c9-2675-429d-9d02-09381a0d6f09\") " pod="openstack/neutron-db-sync-gmms8" Oct 13 17:40:14 crc kubenswrapper[4720]: I1013 17:40:14.943585 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4z4x4\" (UniqueName: \"kubernetes.io/projected/a2d841c9-2675-429d-9d02-09381a0d6f09-kube-api-access-4z4x4\") pod \"neutron-db-sync-gmms8\" (UID: \"a2d841c9-2675-429d-9d02-09381a0d6f09\") " pod="openstack/neutron-db-sync-gmms8" Oct 13 17:40:14 crc kubenswrapper[4720]: I1013 17:40:14.943741 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69895abb-fedb-4ef1-bd37-698c2384d0b0-combined-ca-bundle\") pod \"barbican-db-sync-fj7h4\" (UID: \"69895abb-fedb-4ef1-bd37-698c2384d0b0\") " pod="openstack/barbican-db-sync-fj7h4" Oct 13 17:40:14 crc kubenswrapper[4720]: I1013 17:40:14.943850 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwswq\" (UniqueName: \"kubernetes.io/projected/69895abb-fedb-4ef1-bd37-698c2384d0b0-kube-api-access-jwswq\") pod \"barbican-db-sync-fj7h4\" (UID: \"69895abb-fedb-4ef1-bd37-698c2384d0b0\") " pod="openstack/barbican-db-sync-fj7h4" Oct 13 17:40:14 crc kubenswrapper[4720]: I1013 17:40:14.943883 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/69895abb-fedb-4ef1-bd37-698c2384d0b0-db-sync-config-data\") pod \"barbican-db-sync-fj7h4\" (UID: \"69895abb-fedb-4ef1-bd37-698c2384d0b0\") " pod="openstack/barbican-db-sync-fj7h4" Oct 13 17:40:14 crc kubenswrapper[4720]: I1013 17:40:14.947414 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/69895abb-fedb-4ef1-bd37-698c2384d0b0-db-sync-config-data\") pod \"barbican-db-sync-fj7h4\" (UID: \"69895abb-fedb-4ef1-bd37-698c2384d0b0\") " pod="openstack/barbican-db-sync-fj7h4" Oct 13 17:40:14 crc kubenswrapper[4720]: I1013 17:40:14.955745 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69895abb-fedb-4ef1-bd37-698c2384d0b0-combined-ca-bundle\") pod \"barbican-db-sync-fj7h4\" (UID: \"69895abb-fedb-4ef1-bd37-698c2384d0b0\") " pod="openstack/barbican-db-sync-fj7h4" Oct 13 17:40:14 crc kubenswrapper[4720]: I1013 17:40:14.962581 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-9dcbx" Oct 13 17:40:14 crc kubenswrapper[4720]: I1013 17:40:14.963503 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwswq\" (UniqueName: \"kubernetes.io/projected/69895abb-fedb-4ef1-bd37-698c2384d0b0-kube-api-access-jwswq\") pod \"barbican-db-sync-fj7h4\" (UID: \"69895abb-fedb-4ef1-bd37-698c2384d0b0\") " pod="openstack/barbican-db-sync-fj7h4" Oct 13 17:40:15 crc kubenswrapper[4720]: I1013 17:40:15.045310 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-fj7h4" Oct 13 17:40:15 crc kubenswrapper[4720]: I1013 17:40:15.046285 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2d841c9-2675-429d-9d02-09381a0d6f09-combined-ca-bundle\") pod \"neutron-db-sync-gmms8\" (UID: \"a2d841c9-2675-429d-9d02-09381a0d6f09\") " pod="openstack/neutron-db-sync-gmms8" Oct 13 17:40:15 crc kubenswrapper[4720]: I1013 17:40:15.046330 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a2d841c9-2675-429d-9d02-09381a0d6f09-config\") pod \"neutron-db-sync-gmms8\" (UID: \"a2d841c9-2675-429d-9d02-09381a0d6f09\") " pod="openstack/neutron-db-sync-gmms8" Oct 13 17:40:15 crc kubenswrapper[4720]: I1013 17:40:15.046376 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4z4x4\" (UniqueName: \"kubernetes.io/projected/a2d841c9-2675-429d-9d02-09381a0d6f09-kube-api-access-4z4x4\") pod \"neutron-db-sync-gmms8\" (UID: \"a2d841c9-2675-429d-9d02-09381a0d6f09\") " pod="openstack/neutron-db-sync-gmms8" Oct 13 17:40:15 crc kubenswrapper[4720]: I1013 17:40:15.050652 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2d841c9-2675-429d-9d02-09381a0d6f09-combined-ca-bundle\") pod \"neutron-db-sync-gmms8\" (UID: \"a2d841c9-2675-429d-9d02-09381a0d6f09\") " pod="openstack/neutron-db-sync-gmms8" Oct 13 17:40:15 crc kubenswrapper[4720]: I1013 17:40:15.060941 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/a2d841c9-2675-429d-9d02-09381a0d6f09-config\") pod \"neutron-db-sync-gmms8\" (UID: \"a2d841c9-2675-429d-9d02-09381a0d6f09\") " pod="openstack/neutron-db-sync-gmms8" Oct 13 17:40:15 crc kubenswrapper[4720]: I1013 17:40:15.069102 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4z4x4\" (UniqueName: \"kubernetes.io/projected/a2d841c9-2675-429d-9d02-09381a0d6f09-kube-api-access-4z4x4\") pod \"neutron-db-sync-gmms8\" (UID: \"a2d841c9-2675-429d-9d02-09381a0d6f09\") " pod="openstack/neutron-db-sync-gmms8" Oct 13 17:40:15 crc kubenswrapper[4720]: I1013 17:40:15.073401 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-gmms8" Oct 13 17:40:15 crc kubenswrapper[4720]: I1013 17:40:15.311541 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-2z4fc" Oct 13 17:40:15 crc kubenswrapper[4720]: I1013 17:40:15.452569 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rvh4\" (UniqueName: \"kubernetes.io/projected/15d273d6-ce41-4aeb-88e1-42a1f9423737-kube-api-access-2rvh4\") pod \"15d273d6-ce41-4aeb-88e1-42a1f9423737\" (UID: \"15d273d6-ce41-4aeb-88e1-42a1f9423737\") " Oct 13 17:40:15 crc kubenswrapper[4720]: I1013 17:40:15.452796 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15d273d6-ce41-4aeb-88e1-42a1f9423737-combined-ca-bundle\") pod \"15d273d6-ce41-4aeb-88e1-42a1f9423737\" (UID: \"15d273d6-ce41-4aeb-88e1-42a1f9423737\") " Oct 13 17:40:15 crc kubenswrapper[4720]: I1013 17:40:15.452823 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15d273d6-ce41-4aeb-88e1-42a1f9423737-config-data\") pod \"15d273d6-ce41-4aeb-88e1-42a1f9423737\" (UID: \"15d273d6-ce41-4aeb-88e1-42a1f9423737\") " Oct 13 17:40:15 crc kubenswrapper[4720]: I1013 17:40:15.452854 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15d273d6-ce41-4aeb-88e1-42a1f9423737-scripts\") pod \"15d273d6-ce41-4aeb-88e1-42a1f9423737\" (UID: \"15d273d6-ce41-4aeb-88e1-42a1f9423737\") " Oct 13 17:40:15 crc kubenswrapper[4720]: I1013 17:40:15.452995 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/15d273d6-ce41-4aeb-88e1-42a1f9423737-credential-keys\") pod \"15d273d6-ce41-4aeb-88e1-42a1f9423737\" (UID: \"15d273d6-ce41-4aeb-88e1-42a1f9423737\") " Oct 13 17:40:15 crc kubenswrapper[4720]: I1013 17:40:15.453241 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/15d273d6-ce41-4aeb-88e1-42a1f9423737-fernet-keys\") pod \"15d273d6-ce41-4aeb-88e1-42a1f9423737\" (UID: \"15d273d6-ce41-4aeb-88e1-42a1f9423737\") " Oct 13 17:40:15 crc kubenswrapper[4720]: I1013 17:40:15.459739 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15d273d6-ce41-4aeb-88e1-42a1f9423737-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "15d273d6-ce41-4aeb-88e1-42a1f9423737" (UID: "15d273d6-ce41-4aeb-88e1-42a1f9423737"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:40:15 crc kubenswrapper[4720]: I1013 17:40:15.461554 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15d273d6-ce41-4aeb-88e1-42a1f9423737-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "15d273d6-ce41-4aeb-88e1-42a1f9423737" (UID: "15d273d6-ce41-4aeb-88e1-42a1f9423737"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:40:15 crc kubenswrapper[4720]: I1013 17:40:15.461688 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15d273d6-ce41-4aeb-88e1-42a1f9423737-kube-api-access-2rvh4" (OuterVolumeSpecName: "kube-api-access-2rvh4") pod "15d273d6-ce41-4aeb-88e1-42a1f9423737" (UID: "15d273d6-ce41-4aeb-88e1-42a1f9423737"). InnerVolumeSpecName "kube-api-access-2rvh4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:40:15 crc kubenswrapper[4720]: I1013 17:40:15.476479 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15d273d6-ce41-4aeb-88e1-42a1f9423737-scripts" (OuterVolumeSpecName: "scripts") pod "15d273d6-ce41-4aeb-88e1-42a1f9423737" (UID: "15d273d6-ce41-4aeb-88e1-42a1f9423737"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:40:15 crc kubenswrapper[4720]: I1013 17:40:15.518683 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15d273d6-ce41-4aeb-88e1-42a1f9423737-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "15d273d6-ce41-4aeb-88e1-42a1f9423737" (UID: "15d273d6-ce41-4aeb-88e1-42a1f9423737"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:40:15 crc kubenswrapper[4720]: I1013 17:40:15.520263 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15d273d6-ce41-4aeb-88e1-42a1f9423737-config-data" (OuterVolumeSpecName: "config-data") pod "15d273d6-ce41-4aeb-88e1-42a1f9423737" (UID: "15d273d6-ce41-4aeb-88e1-42a1f9423737"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:40:15 crc kubenswrapper[4720]: I1013 17:40:15.557710 4720 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/15d273d6-ce41-4aeb-88e1-42a1f9423737-credential-keys\") on node \"crc\" DevicePath \"\"" Oct 13 17:40:15 crc kubenswrapper[4720]: I1013 17:40:15.557741 4720 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/15d273d6-ce41-4aeb-88e1-42a1f9423737-fernet-keys\") on node \"crc\" DevicePath \"\"" Oct 13 17:40:15 crc kubenswrapper[4720]: I1013 17:40:15.557751 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2rvh4\" (UniqueName: \"kubernetes.io/projected/15d273d6-ce41-4aeb-88e1-42a1f9423737-kube-api-access-2rvh4\") on node \"crc\" DevicePath \"\"" Oct 13 17:40:15 crc kubenswrapper[4720]: I1013 17:40:15.557762 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15d273d6-ce41-4aeb-88e1-42a1f9423737-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 17:40:15 crc kubenswrapper[4720]: I1013 17:40:15.557769 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15d273d6-ce41-4aeb-88e1-42a1f9423737-scripts\") on node \"crc\" DevicePath \"\"" Oct 13 17:40:15 crc kubenswrapper[4720]: I1013 17:40:15.557777 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15d273d6-ce41-4aeb-88e1-42a1f9423737-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 17:40:15 crc kubenswrapper[4720]: I1013 17:40:15.571808 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2z4fc" event={"ID":"15d273d6-ce41-4aeb-88e1-42a1f9423737","Type":"ContainerDied","Data":"df36b53fdae7087a001b1d299f510a8dc73dd7d5d2174fea030c0c7fbeeaf738"} Oct 13 17:40:15 crc kubenswrapper[4720]: I1013 17:40:15.571843 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df36b53fdae7087a001b1d299f510a8dc73dd7d5d2174fea030c0c7fbeeaf738" Oct 13 17:40:15 crc kubenswrapper[4720]: I1013 17:40:15.571856 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-2z4fc"
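
The keystone-bootstrap teardown above follows the same arc as the account-create jobs earlier: the job container exits, PLEG reports ContainerDied, the six secret-backed volumes are unmounted and detached, and by the time pod_container_deletor looks for the sandbox it is already gone. When reading a journal this dense it can help to reduce the stream to a per-pod event timeline first; the sketch below does that for the SyncLoop (PLEG) entries (plegtail.go is an invented name, not kubelet tooling).

    // plegtail.go -- print pod name, event type, and a shortened
    // container/sandbox ID for every PLEG event on stdin.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    func main() {
        ev := regexp.MustCompile(`pod="([^"]+)" event={"ID":"([^"]+)","Type":"([^"]+)","Data":"([^"]+)"}`)
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines are long
        for sc.Scan() {
            for _, m := range ev.FindAllStringSubmatch(sc.Text(), -1) {
                pod, typ, data := m[1], m[3], m[4]
                fmt.Printf("%-45s %-16s %.12s\n", pod, typ, data)
            }
        }
    }

Piping the journal through it (for instance journalctl -u kubelet | go run plegtail.go) turns the block above into lines like "openstack/keystone-bootstrap-2z4fc  ContainerDied  df36b53fdae7".
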
Oct 13 17:40:15 crc kubenswrapper[4720]: I1013 17:40:15.805265 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-9dcbx"] Oct 13 17:40:15 crc kubenswrapper[4720]: I1013 17:40:15.911021 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-gmms8"] Oct 13 17:40:15 crc kubenswrapper[4720]: I1013 17:40:15.998913 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-fj7h4"] Oct 13 17:40:16 crc kubenswrapper[4720]: I1013 17:40:16.498458 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-5744dfc665-n6ts6"] Oct 13 17:40:16 crc kubenswrapper[4720]: E1013 17:40:16.499031 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15d273d6-ce41-4aeb-88e1-42a1f9423737" containerName="keystone-bootstrap" Oct 13 17:40:16 crc kubenswrapper[4720]: I1013 17:40:16.499042 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="15d273d6-ce41-4aeb-88e1-42a1f9423737" containerName="keystone-bootstrap" Oct 13 17:40:16 crc kubenswrapper[4720]: I1013 17:40:16.499213 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="15d273d6-ce41-4aeb-88e1-42a1f9423737" containerName="keystone-bootstrap" Oct 13 17:40:16 crc kubenswrapper[4720]: I1013 17:40:16.499745 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5744dfc665-n6ts6" Oct 13 17:40:16 crc kubenswrapper[4720]: I1013 17:40:16.501951 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Oct 13 17:40:16 crc kubenswrapper[4720]: I1013 17:40:16.502267 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Oct 13 17:40:16 crc kubenswrapper[4720]: I1013 17:40:16.502452 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Oct 13 17:40:16 crc kubenswrapper[4720]: I1013 17:40:16.502503 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Oct 13 17:40:16 crc kubenswrapper[4720]: I1013 17:40:16.502597 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Oct 13 17:40:16 crc kubenswrapper[4720]: I1013 17:40:16.504562 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-gnxbs" Oct 13 17:40:16 crc kubenswrapper[4720]: I1013 17:40:16.512952 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5744dfc665-n6ts6"] Oct 13 17:40:16 crc kubenswrapper[4720]: I1013 17:40:16.585432 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5de24778-de7e-4b2b-bf60-24ae857c2ed9-internal-tls-certs\") pod \"keystone-5744dfc665-n6ts6\" (UID: \"5de24778-de7e-4b2b-bf60-24ae857c2ed9\") " pod="openstack/keystone-5744dfc665-n6ts6" Oct 13 17:40:16 crc kubenswrapper[4720]: I1013 17:40:16.585472 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5de24778-de7e-4b2b-bf60-24ae857c2ed9-credential-keys\") pod \"keystone-5744dfc665-n6ts6\" (UID: \"5de24778-de7e-4b2b-bf60-24ae857c2ed9\") " pod="openstack/keystone-5744dfc665-n6ts6" Oct 13 17:40:16 crc kubenswrapper[4720]: I1013 17:40:16.585503 4720
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5de24778-de7e-4b2b-bf60-24ae857c2ed9-public-tls-certs\") pod \"keystone-5744dfc665-n6ts6\" (UID: \"5de24778-de7e-4b2b-bf60-24ae857c2ed9\") " pod="openstack/keystone-5744dfc665-n6ts6" Oct 13 17:40:16 crc kubenswrapper[4720]: I1013 17:40:16.585660 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5de24778-de7e-4b2b-bf60-24ae857c2ed9-scripts\") pod \"keystone-5744dfc665-n6ts6\" (UID: \"5de24778-de7e-4b2b-bf60-24ae857c2ed9\") " pod="openstack/keystone-5744dfc665-n6ts6" Oct 13 17:40:16 crc kubenswrapper[4720]: I1013 17:40:16.585682 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5de24778-de7e-4b2b-bf60-24ae857c2ed9-config-data\") pod \"keystone-5744dfc665-n6ts6\" (UID: \"5de24778-de7e-4b2b-bf60-24ae857c2ed9\") " pod="openstack/keystone-5744dfc665-n6ts6" Oct 13 17:40:16 crc kubenswrapper[4720]: I1013 17:40:16.585699 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7nnn\" (UniqueName: \"kubernetes.io/projected/5de24778-de7e-4b2b-bf60-24ae857c2ed9-kube-api-access-x7nnn\") pod \"keystone-5744dfc665-n6ts6\" (UID: \"5de24778-de7e-4b2b-bf60-24ae857c2ed9\") " pod="openstack/keystone-5744dfc665-n6ts6" Oct 13 17:40:16 crc kubenswrapper[4720]: I1013 17:40:16.585738 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5de24778-de7e-4b2b-bf60-24ae857c2ed9-combined-ca-bundle\") pod \"keystone-5744dfc665-n6ts6\" (UID: \"5de24778-de7e-4b2b-bf60-24ae857c2ed9\") " pod="openstack/keystone-5744dfc665-n6ts6" Oct 13 17:40:16 crc kubenswrapper[4720]: I1013 17:40:16.585759 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5de24778-de7e-4b2b-bf60-24ae857c2ed9-fernet-keys\") pod \"keystone-5744dfc665-n6ts6\" (UID: \"5de24778-de7e-4b2b-bf60-24ae857c2ed9\") " pod="openstack/keystone-5744dfc665-n6ts6" Oct 13 17:40:16 crc kubenswrapper[4720]: I1013 17:40:16.592292 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-gmms8" event={"ID":"a2d841c9-2675-429d-9d02-09381a0d6f09","Type":"ContainerStarted","Data":"d55f1b63f3bfa6e9b56a3a991d6ddfd7117843ddcf1a45ec16a86f8b0efbc569"} Oct 13 17:40:16 crc kubenswrapper[4720]: I1013 17:40:16.592336 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-gmms8" event={"ID":"a2d841c9-2675-429d-9d02-09381a0d6f09","Type":"ContainerStarted","Data":"dea809c8cc76628b6558ba5db2966919ebe207668620a93d1228b3a1699239e0"} Oct 13 17:40:16 crc kubenswrapper[4720]: I1013 17:40:16.594318 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-fj7h4" event={"ID":"69895abb-fedb-4ef1-bd37-698c2384d0b0","Type":"ContainerStarted","Data":"56177a61e8dd9a42da9499c5d2cddefffbbf15eced12976c271411b51fe9e3da"} Oct 13 17:40:16 crc kubenswrapper[4720]: I1013 17:40:16.595972 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-9dcbx" 
event={"ID":"887aa549-67e8-4d03-acba-dede202496db","Type":"ContainerStarted","Data":"753803f2ccf2091aec2cc7376c5358acdb20bf32868ec9ab15ca0d54e7d6b818"} Oct 13 17:40:16 crc kubenswrapper[4720]: I1013 17:40:16.597618 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6d3623f2-ec27-4ad6-8cb4-553cb0527e15","Type":"ContainerStarted","Data":"828aebbcbe710f896ea2cef6a641aed91297652c0bdec01666e3cf684a4aece0"} Oct 13 17:40:16 crc kubenswrapper[4720]: I1013 17:40:16.603801 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"972b98ca-7012-422d-8839-a196b6a3b919","Type":"ContainerStarted","Data":"cf8d040f3b31142b2e3557ad717515b1227bb986bdfd43dc7fcc2ab6a3a20d90"} Oct 13 17:40:16 crc kubenswrapper[4720]: I1013 17:40:16.606375 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-gmms8" podStartSLOduration=2.60636543 podStartE2EDuration="2.60636543s" podCreationTimestamp="2025-10-13 17:40:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:40:16.60478322 +0000 UTC m=+962.062033342" watchObservedRunningTime="2025-10-13 17:40:16.60636543 +0000 UTC m=+962.063615562" Oct 13 17:40:16 crc kubenswrapper[4720]: I1013 17:40:16.687570 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5de24778-de7e-4b2b-bf60-24ae857c2ed9-combined-ca-bundle\") pod \"keystone-5744dfc665-n6ts6\" (UID: \"5de24778-de7e-4b2b-bf60-24ae857c2ed9\") " pod="openstack/keystone-5744dfc665-n6ts6" Oct 13 17:40:16 crc kubenswrapper[4720]: I1013 17:40:16.687618 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5de24778-de7e-4b2b-bf60-24ae857c2ed9-fernet-keys\") pod \"keystone-5744dfc665-n6ts6\" (UID: \"5de24778-de7e-4b2b-bf60-24ae857c2ed9\") " pod="openstack/keystone-5744dfc665-n6ts6" Oct 13 17:40:16 crc kubenswrapper[4720]: I1013 17:40:16.687658 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5de24778-de7e-4b2b-bf60-24ae857c2ed9-internal-tls-certs\") pod \"keystone-5744dfc665-n6ts6\" (UID: \"5de24778-de7e-4b2b-bf60-24ae857c2ed9\") " pod="openstack/keystone-5744dfc665-n6ts6" Oct 13 17:40:16 crc kubenswrapper[4720]: I1013 17:40:16.687678 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5de24778-de7e-4b2b-bf60-24ae857c2ed9-credential-keys\") pod \"keystone-5744dfc665-n6ts6\" (UID: \"5de24778-de7e-4b2b-bf60-24ae857c2ed9\") " pod="openstack/keystone-5744dfc665-n6ts6" Oct 13 17:40:16 crc kubenswrapper[4720]: I1013 17:40:16.687722 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5de24778-de7e-4b2b-bf60-24ae857c2ed9-public-tls-certs\") pod \"keystone-5744dfc665-n6ts6\" (UID: \"5de24778-de7e-4b2b-bf60-24ae857c2ed9\") " pod="openstack/keystone-5744dfc665-n6ts6" Oct 13 17:40:16 crc kubenswrapper[4720]: I1013 17:40:16.687804 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5de24778-de7e-4b2b-bf60-24ae857c2ed9-scripts\") pod \"keystone-5744dfc665-n6ts6\" (UID: 
\"5de24778-de7e-4b2b-bf60-24ae857c2ed9\") " pod="openstack/keystone-5744dfc665-n6ts6" Oct 13 17:40:16 crc kubenswrapper[4720]: I1013 17:40:16.687823 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5de24778-de7e-4b2b-bf60-24ae857c2ed9-config-data\") pod \"keystone-5744dfc665-n6ts6\" (UID: \"5de24778-de7e-4b2b-bf60-24ae857c2ed9\") " pod="openstack/keystone-5744dfc665-n6ts6" Oct 13 17:40:16 crc kubenswrapper[4720]: I1013 17:40:16.687844 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7nnn\" (UniqueName: \"kubernetes.io/projected/5de24778-de7e-4b2b-bf60-24ae857c2ed9-kube-api-access-x7nnn\") pod \"keystone-5744dfc665-n6ts6\" (UID: \"5de24778-de7e-4b2b-bf60-24ae857c2ed9\") " pod="openstack/keystone-5744dfc665-n6ts6" Oct 13 17:40:16 crc kubenswrapper[4720]: I1013 17:40:16.693401 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5de24778-de7e-4b2b-bf60-24ae857c2ed9-combined-ca-bundle\") pod \"keystone-5744dfc665-n6ts6\" (UID: \"5de24778-de7e-4b2b-bf60-24ae857c2ed9\") " pod="openstack/keystone-5744dfc665-n6ts6" Oct 13 17:40:16 crc kubenswrapper[4720]: I1013 17:40:16.693795 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5de24778-de7e-4b2b-bf60-24ae857c2ed9-credential-keys\") pod \"keystone-5744dfc665-n6ts6\" (UID: \"5de24778-de7e-4b2b-bf60-24ae857c2ed9\") " pod="openstack/keystone-5744dfc665-n6ts6" Oct 13 17:40:16 crc kubenswrapper[4720]: I1013 17:40:16.693977 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5de24778-de7e-4b2b-bf60-24ae857c2ed9-internal-tls-certs\") pod \"keystone-5744dfc665-n6ts6\" (UID: \"5de24778-de7e-4b2b-bf60-24ae857c2ed9\") " pod="openstack/keystone-5744dfc665-n6ts6" Oct 13 17:40:16 crc kubenswrapper[4720]: I1013 17:40:16.701946 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5de24778-de7e-4b2b-bf60-24ae857c2ed9-fernet-keys\") pod \"keystone-5744dfc665-n6ts6\" (UID: \"5de24778-de7e-4b2b-bf60-24ae857c2ed9\") " pod="openstack/keystone-5744dfc665-n6ts6" Oct 13 17:40:16 crc kubenswrapper[4720]: I1013 17:40:16.703174 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5de24778-de7e-4b2b-bf60-24ae857c2ed9-public-tls-certs\") pod \"keystone-5744dfc665-n6ts6\" (UID: \"5de24778-de7e-4b2b-bf60-24ae857c2ed9\") " pod="openstack/keystone-5744dfc665-n6ts6" Oct 13 17:40:16 crc kubenswrapper[4720]: I1013 17:40:16.703551 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5de24778-de7e-4b2b-bf60-24ae857c2ed9-scripts\") pod \"keystone-5744dfc665-n6ts6\" (UID: \"5de24778-de7e-4b2b-bf60-24ae857c2ed9\") " pod="openstack/keystone-5744dfc665-n6ts6" Oct 13 17:40:16 crc kubenswrapper[4720]: I1013 17:40:16.706410 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5de24778-de7e-4b2b-bf60-24ae857c2ed9-config-data\") pod \"keystone-5744dfc665-n6ts6\" (UID: \"5de24778-de7e-4b2b-bf60-24ae857c2ed9\") " pod="openstack/keystone-5744dfc665-n6ts6" Oct 13 17:40:16 crc kubenswrapper[4720]: I1013 17:40:16.709897 4720 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-x7nnn\" (UniqueName: \"kubernetes.io/projected/5de24778-de7e-4b2b-bf60-24ae857c2ed9-kube-api-access-x7nnn\") pod \"keystone-5744dfc665-n6ts6\" (UID: \"5de24778-de7e-4b2b-bf60-24ae857c2ed9\") " pod="openstack/keystone-5744dfc665-n6ts6" Oct 13 17:40:16 crc kubenswrapper[4720]: I1013 17:40:16.815764 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5744dfc665-n6ts6" Oct 13 17:40:17 crc kubenswrapper[4720]: I1013 17:40:17.308276 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5744dfc665-n6ts6"] Oct 13 17:40:17 crc kubenswrapper[4720]: I1013 17:40:17.640410 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"972b98ca-7012-422d-8839-a196b6a3b919","Type":"ContainerStarted","Data":"dd2f993ad3263814f13966b6e376fc39bbfd7f89fbbfb6c77b97572e6786fed7"} Oct 13 17:40:17 crc kubenswrapper[4720]: I1013 17:40:17.643486 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5744dfc665-n6ts6" event={"ID":"5de24778-de7e-4b2b-bf60-24ae857c2ed9","Type":"ContainerStarted","Data":"30fa0b44d7c12eb9a6fe71dfbb86afa924a2ec0d9a8b48b53f35b425dcf65423"} Oct 13 17:40:17 crc kubenswrapper[4720]: I1013 17:40:17.673892 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=7.673873298 podStartE2EDuration="7.673873298s" podCreationTimestamp="2025-10-13 17:40:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:40:17.662638139 +0000 UTC m=+963.119888271" watchObservedRunningTime="2025-10-13 17:40:17.673873298 +0000 UTC m=+963.131123430" Oct 13 17:40:18 crc kubenswrapper[4720]: I1013 17:40:18.139946 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Oct 13 17:40:18 crc kubenswrapper[4720]: I1013 17:40:18.139993 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Oct 13 17:40:18 crc kubenswrapper[4720]: I1013 17:40:18.170348 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Oct 13 17:40:18 crc kubenswrapper[4720]: I1013 17:40:18.198030 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Oct 13 17:40:18 crc kubenswrapper[4720]: I1013 17:40:18.688211 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5744dfc665-n6ts6" event={"ID":"5de24778-de7e-4b2b-bf60-24ae857c2ed9","Type":"ContainerStarted","Data":"f19ea9395e7df57ef770b9c5af2e8e8d912696108c5b4cd6e312e5020766907f"} Oct 13 17:40:18 crc kubenswrapper[4720]: I1013 17:40:18.688747 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Oct 13 17:40:18 crc kubenswrapper[4720]: I1013 17:40:18.689712 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Oct 13 17:40:18 crc kubenswrapper[4720]: I1013 17:40:18.709829 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-5744dfc665-n6ts6" podStartSLOduration=2.709804044 podStartE2EDuration="2.709804044s" podCreationTimestamp="2025-10-13 17:40:16 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:40:18.706867148 +0000 UTC m=+964.164117280" watchObservedRunningTime="2025-10-13 17:40:18.709804044 +0000 UTC m=+964.167054176" Oct 13 17:40:19 crc kubenswrapper[4720]: I1013 17:40:19.701229 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-5744dfc665-n6ts6" Oct 13 17:40:20 crc kubenswrapper[4720]: I1013 17:40:20.711695 4720 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 17:40:20 crc kubenswrapper[4720]: I1013 17:40:20.711742 4720 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 17:40:20 crc kubenswrapper[4720]: I1013 17:40:20.892522 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Oct 13 17:40:20 crc kubenswrapper[4720]: I1013 17:40:20.892566 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Oct 13 17:40:20 crc kubenswrapper[4720]: I1013 17:40:20.900060 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Oct 13 17:40:20 crc kubenswrapper[4720]: I1013 17:40:20.902660 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Oct 13 17:40:20 crc kubenswrapper[4720]: I1013 17:40:20.937027 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Oct 13 17:40:20 crc kubenswrapper[4720]: I1013 17:40:20.940765 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Oct 13 17:40:21 crc kubenswrapper[4720]: I1013 17:40:21.722615 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Oct 13 17:40:21 crc kubenswrapper[4720]: I1013 17:40:21.722674 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Oct 13 17:40:22 crc kubenswrapper[4720]: I1013 17:40:22.090847 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7df8489788-ntn24" podUID="139c2e02-2c20-4a21-a5c0-753c6003473b" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.147:8443: connect: connection refused" Oct 13 17:40:22 crc kubenswrapper[4720]: I1013 17:40:22.201560 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7984dcc5d8-8c2ss" podUID="27768d75-429c-45c3-bf03-98527e94fe63" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Oct 13 17:40:23 crc kubenswrapper[4720]: I1013 17:40:23.580683 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Oct 13 17:40:23 crc kubenswrapper[4720]: I1013 17:40:23.585795 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Oct 13 17:40:31 crc kubenswrapper[4720]: I1013 17:40:31.825062 4720 generic.go:334] "Generic (PLEG): container finished" podID="a2d841c9-2675-429d-9d02-09381a0d6f09" 
containerID="d55f1b63f3bfa6e9b56a3a991d6ddfd7117843ddcf1a45ec16a86f8b0efbc569" exitCode=0 Oct 13 17:40:31 crc kubenswrapper[4720]: I1013 17:40:31.825464 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-gmms8" event={"ID":"a2d841c9-2675-429d-9d02-09381a0d6f09","Type":"ContainerDied","Data":"d55f1b63f3bfa6e9b56a3a991d6ddfd7117843ddcf1a45ec16a86f8b0efbc569"} Oct 13 17:40:33 crc kubenswrapper[4720]: I1013 17:40:33.848750 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-7df8489788-ntn24" Oct 13 17:40:34 crc kubenswrapper[4720]: I1013 17:40:34.158402 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-7984dcc5d8-8c2ss" Oct 13 17:40:35 crc kubenswrapper[4720]: I1013 17:40:35.672446 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-7df8489788-ntn24" Oct 13 17:40:35 crc kubenswrapper[4720]: E1013 17:40:35.754874 4720 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Oct 13 17:40:35 crc kubenswrapper[4720]: E1013 17:40:35.755138 4720 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k4kgh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSo
urce{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-9dcbx_openstack(887aa549-67e8-4d03-acba-dede202496db): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 13 17:40:35 crc kubenswrapper[4720]: E1013 17:40:35.756438 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-9dcbx" podUID="887aa549-67e8-4d03-acba-dede202496db" Oct 13 17:40:35 crc kubenswrapper[4720]: I1013 17:40:35.813975 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-7984dcc5d8-8c2ss" Oct 13 17:40:35 crc kubenswrapper[4720]: I1013 17:40:35.864426 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7df8489788-ntn24"] Oct 13 17:40:35 crc kubenswrapper[4720]: I1013 17:40:35.872161 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7df8489788-ntn24" podUID="139c2e02-2c20-4a21-a5c0-753c6003473b" containerName="horizon" containerID="cri-o://b16a8f5b551805315d0632eef1b1948aa4930fa2db29ab6d2fe042db2c832eb1" gracePeriod=30 Oct 13 17:40:35 crc kubenswrapper[4720]: I1013 17:40:35.872317 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7df8489788-ntn24" podUID="139c2e02-2c20-4a21-a5c0-753c6003473b" containerName="horizon-log" containerID="cri-o://10c21702a32bc872627030d27e43d3321339dec03c70d87460aa37a2d88b5a48" gracePeriod=30 Oct 13 17:40:35 crc kubenswrapper[4720]: E1013 17:40:35.876109 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-9dcbx" podUID="887aa549-67e8-4d03-acba-dede202496db" Oct 13 17:40:36 crc kubenswrapper[4720]: E1013 17:40:36.258357 4720 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" Oct 13 17:40:36 crc kubenswrapper[4720]: E1013 17:40:36.258550 4720 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vnndp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-2htwh_openstack(444e35b8-1d2a-4d83-be6c-2184ae0e3110): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 13 17:40:36 crc kubenswrapper[4720]: E1013 17:40:36.260525 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-2htwh" podUID="444e35b8-1d2a-4d83-be6c-2184ae0e3110" Oct 13 17:40:36 crc kubenswrapper[4720]: E1013 17:40:36.295169 4720 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/ubi9/httpd-24:latest" Oct 13 17:40:36 crc kubenswrapper[4720]: E1013 17:40:36.295338 4720 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4bth4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(6d3623f2-ec27-4ad6-8cb4-553cb0527e15): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 13 17:40:36 crc kubenswrapper[4720]: E1013 17:40:36.296555 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="6d3623f2-ec27-4ad6-8cb4-553cb0527e15" Oct 13 17:40:36 crc kubenswrapper[4720]: I1013 17:40:36.473860 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-gmms8" Oct 13 17:40:36 crc kubenswrapper[4720]: I1013 17:40:36.481263 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2d841c9-2675-429d-9d02-09381a0d6f09-combined-ca-bundle\") pod \"a2d841c9-2675-429d-9d02-09381a0d6f09\" (UID: \"a2d841c9-2675-429d-9d02-09381a0d6f09\") " Oct 13 17:40:36 crc kubenswrapper[4720]: I1013 17:40:36.481522 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4z4x4\" (UniqueName: \"kubernetes.io/projected/a2d841c9-2675-429d-9d02-09381a0d6f09-kube-api-access-4z4x4\") pod \"a2d841c9-2675-429d-9d02-09381a0d6f09\" (UID: \"a2d841c9-2675-429d-9d02-09381a0d6f09\") " Oct 13 17:40:36 crc kubenswrapper[4720]: I1013 17:40:36.481657 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a2d841c9-2675-429d-9d02-09381a0d6f09-config\") pod \"a2d841c9-2675-429d-9d02-09381a0d6f09\" (UID: \"a2d841c9-2675-429d-9d02-09381a0d6f09\") " Oct 13 17:40:36 crc kubenswrapper[4720]: I1013 17:40:36.489529 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2d841c9-2675-429d-9d02-09381a0d6f09-kube-api-access-4z4x4" (OuterVolumeSpecName: "kube-api-access-4z4x4") pod "a2d841c9-2675-429d-9d02-09381a0d6f09" (UID: "a2d841c9-2675-429d-9d02-09381a0d6f09"). InnerVolumeSpecName "kube-api-access-4z4x4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:40:36 crc kubenswrapper[4720]: I1013 17:40:36.513007 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2d841c9-2675-429d-9d02-09381a0d6f09-config" (OuterVolumeSpecName: "config") pod "a2d841c9-2675-429d-9d02-09381a0d6f09" (UID: "a2d841c9-2675-429d-9d02-09381a0d6f09"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:40:36 crc kubenswrapper[4720]: I1013 17:40:36.522250 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2d841c9-2675-429d-9d02-09381a0d6f09-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a2d841c9-2675-429d-9d02-09381a0d6f09" (UID: "a2d841c9-2675-429d-9d02-09381a0d6f09"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:40:36 crc kubenswrapper[4720]: I1013 17:40:36.582932 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2d841c9-2675-429d-9d02-09381a0d6f09-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 17:40:36 crc kubenswrapper[4720]: I1013 17:40:36.583102 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4z4x4\" (UniqueName: \"kubernetes.io/projected/a2d841c9-2675-429d-9d02-09381a0d6f09-kube-api-access-4z4x4\") on node \"crc\" DevicePath \"\"" Oct 13 17:40:36 crc kubenswrapper[4720]: I1013 17:40:36.583163 4720 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/a2d841c9-2675-429d-9d02-09381a0d6f09-config\") on node \"crc\" DevicePath \"\"" Oct 13 17:40:36 crc kubenswrapper[4720]: I1013 17:40:36.883263 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-fj7h4" event={"ID":"69895abb-fedb-4ef1-bd37-698c2384d0b0","Type":"ContainerStarted","Data":"b12eda37cb379a954f446cc232021f40074563481f329d306d32f89488d64fdb"} Oct 13 17:40:36 crc kubenswrapper[4720]: I1013 17:40:36.886160 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6d3623f2-ec27-4ad6-8cb4-553cb0527e15" containerName="ceilometer-central-agent" containerID="cri-o://0edb2dae877587f39ad2f0b1c67183bdd28566eb1418aee78cff788789e0dd9c" gracePeriod=30 Oct 13 17:40:36 crc kubenswrapper[4720]: I1013 17:40:36.886593 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-gmms8" Oct 13 17:40:36 crc kubenswrapper[4720]: I1013 17:40:36.888050 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6d3623f2-ec27-4ad6-8cb4-553cb0527e15" containerName="sg-core" containerID="cri-o://828aebbcbe710f896ea2cef6a641aed91297652c0bdec01666e3cf684a4aece0" gracePeriod=30 Oct 13 17:40:36 crc kubenswrapper[4720]: I1013 17:40:36.888181 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-gmms8" event={"ID":"a2d841c9-2675-429d-9d02-09381a0d6f09","Type":"ContainerDied","Data":"dea809c8cc76628b6558ba5db2966919ebe207668620a93d1228b3a1699239e0"} Oct 13 17:40:36 crc kubenswrapper[4720]: I1013 17:40:36.888240 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dea809c8cc76628b6558ba5db2966919ebe207668620a93d1228b3a1699239e0" Oct 13 17:40:36 crc kubenswrapper[4720]: I1013 17:40:36.888379 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6d3623f2-ec27-4ad6-8cb4-553cb0527e15" containerName="ceilometer-notification-agent" containerID="cri-o://f9e8de6048bec5a4704aa9fca47068f43b9d94f967db4928fc0e4d17006ef487" gracePeriod=30 Oct 13 17:40:36 crc kubenswrapper[4720]: I1013 17:40:36.916638 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-fj7h4" podStartSLOduration=2.662756601 podStartE2EDuration="22.916624032s" podCreationTimestamp="2025-10-13 17:40:14 +0000 UTC" firstStartedPulling="2025-10-13 17:40:16.010109808 +0000 UTC m=+961.467359950" lastFinishedPulling="2025-10-13 17:40:36.263977209 +0000 UTC m=+981.721227381" observedRunningTime="2025-10-13 17:40:36.91263481 +0000 UTC m=+982.369884972" watchObservedRunningTime="2025-10-13 17:40:36.916624032 +0000 UTC m=+982.373874154" Oct 13 
17:40:37 crc kubenswrapper[4720]: E1013 17:40:37.629927 4720 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6d3623f2_ec27_4ad6_8cb4_553cb0527e15.slice/crio-conmon-828aebbcbe710f896ea2cef6a641aed91297652c0bdec01666e3cf684a4aece0.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda2d841c9_2675_429d_9d02_09381a0d6f09.slice/crio-conmon-d55f1b63f3bfa6e9b56a3a991d6ddfd7117843ddcf1a45ec16a86f8b0efbc569.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6d3623f2_ec27_4ad6_8cb4_553cb0527e15.slice/crio-828aebbcbe710f896ea2cef6a641aed91297652c0bdec01666e3cf684a4aece0.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod02414a49_0001_4f13_97cb_641937473fb6.slice/crio-conmon-376eab0bc0716b64ad3f2fd2a966e08f673cb6d62ce53f42923577d22092150f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6d3623f2_ec27_4ad6_8cb4_553cb0527e15.slice/crio-conmon-0edb2dae877587f39ad2f0b1c67183bdd28566eb1418aee78cff788789e0dd9c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda2d841c9_2675_429d_9d02_09381a0d6f09.slice/crio-d55f1b63f3bfa6e9b56a3a991d6ddfd7117843ddcf1a45ec16a86f8b0efbc569.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod02414a49_0001_4f13_97cb_641937473fb6.slice/crio-376eab0bc0716b64ad3f2fd2a966e08f673cb6d62ce53f42923577d22092150f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6d3623f2_ec27_4ad6_8cb4_553cb0527e15.slice/crio-0edb2dae877587f39ad2f0b1c67183bdd28566eb1418aee78cff788789e0dd9c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda2d841c9_2675_429d_9d02_09381a0d6f09.slice/crio-dea809c8cc76628b6558ba5db2966919ebe207668620a93d1228b3a1699239e0\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod02414a49_0001_4f13_97cb_641937473fb6.slice/crio-c14fb6faa8b00dd9e510c736f924ec85392019e51fc549152333fb08188c5cf7.scope\": RecentStats: unable to find data in memory cache]" Oct 13 17:40:37 crc kubenswrapper[4720]: I1013 17:40:37.688229 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-4w6tj"] Oct 13 17:40:37 crc kubenswrapper[4720]: E1013 17:40:37.688657 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2d841c9-2675-429d-9d02-09381a0d6f09" containerName="neutron-db-sync" Oct 13 17:40:37 crc kubenswrapper[4720]: I1013 17:40:37.688678 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2d841c9-2675-429d-9d02-09381a0d6f09" containerName="neutron-db-sync" Oct 13 17:40:37 crc kubenswrapper[4720]: I1013 17:40:37.697147 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2d841c9-2675-429d-9d02-09381a0d6f09" containerName="neutron-db-sync" Oct 13 17:40:37 crc kubenswrapper[4720]: I1013 17:40:37.704067 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84b966f6c9-4w6tj" Oct 13 17:40:37 crc kubenswrapper[4720]: I1013 17:40:37.725083 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0ea59751-3ee4-4404-99c7-06d02cec4f25-ovsdbserver-nb\") pod \"dnsmasq-dns-84b966f6c9-4w6tj\" (UID: \"0ea59751-3ee4-4404-99c7-06d02cec4f25\") " pod="openstack/dnsmasq-dns-84b966f6c9-4w6tj" Oct 13 17:40:37 crc kubenswrapper[4720]: I1013 17:40:37.725510 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wml7\" (UniqueName: \"kubernetes.io/projected/0ea59751-3ee4-4404-99c7-06d02cec4f25-kube-api-access-2wml7\") pod \"dnsmasq-dns-84b966f6c9-4w6tj\" (UID: \"0ea59751-3ee4-4404-99c7-06d02cec4f25\") " pod="openstack/dnsmasq-dns-84b966f6c9-4w6tj" Oct 13 17:40:37 crc kubenswrapper[4720]: I1013 17:40:37.725587 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0ea59751-3ee4-4404-99c7-06d02cec4f25-ovsdbserver-sb\") pod \"dnsmasq-dns-84b966f6c9-4w6tj\" (UID: \"0ea59751-3ee4-4404-99c7-06d02cec4f25\") " pod="openstack/dnsmasq-dns-84b966f6c9-4w6tj" Oct 13 17:40:37 crc kubenswrapper[4720]: I1013 17:40:37.725801 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0ea59751-3ee4-4404-99c7-06d02cec4f25-dns-swift-storage-0\") pod \"dnsmasq-dns-84b966f6c9-4w6tj\" (UID: \"0ea59751-3ee4-4404-99c7-06d02cec4f25\") " pod="openstack/dnsmasq-dns-84b966f6c9-4w6tj" Oct 13 17:40:37 crc kubenswrapper[4720]: I1013 17:40:37.726011 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ea59751-3ee4-4404-99c7-06d02cec4f25-config\") pod \"dnsmasq-dns-84b966f6c9-4w6tj\" (UID: \"0ea59751-3ee4-4404-99c7-06d02cec4f25\") " pod="openstack/dnsmasq-dns-84b966f6c9-4w6tj" Oct 13 17:40:37 crc kubenswrapper[4720]: I1013 17:40:37.726365 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ea59751-3ee4-4404-99c7-06d02cec4f25-dns-svc\") pod \"dnsmasq-dns-84b966f6c9-4w6tj\" (UID: \"0ea59751-3ee4-4404-99c7-06d02cec4f25\") " pod="openstack/dnsmasq-dns-84b966f6c9-4w6tj" Oct 13 17:40:37 crc kubenswrapper[4720]: I1013 17:40:37.789850 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6579788cd4-czbf2"] Oct 13 17:40:37 crc kubenswrapper[4720]: I1013 17:40:37.791589 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6579788cd4-czbf2" Oct 13 17:40:37 crc kubenswrapper[4720]: I1013 17:40:37.793265 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-q7jmp" Oct 13 17:40:37 crc kubenswrapper[4720]: I1013 17:40:37.795528 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Oct 13 17:40:37 crc kubenswrapper[4720]: I1013 17:40:37.795818 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Oct 13 17:40:37 crc kubenswrapper[4720]: I1013 17:40:37.795983 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Oct 13 17:40:37 crc kubenswrapper[4720]: I1013 17:40:37.805500 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-4w6tj"] Oct 13 17:40:37 crc kubenswrapper[4720]: I1013 17:40:37.828323 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6579788cd4-czbf2"] Oct 13 17:40:37 crc kubenswrapper[4720]: I1013 17:40:37.845742 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z86hb\" (UniqueName: \"kubernetes.io/projected/9a12c67e-6055-4594-850a-61ac731d7a8d-kube-api-access-z86hb\") pod \"neutron-6579788cd4-czbf2\" (UID: \"9a12c67e-6055-4594-850a-61ac731d7a8d\") " pod="openstack/neutron-6579788cd4-czbf2" Oct 13 17:40:37 crc kubenswrapper[4720]: I1013 17:40:37.845814 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0ea59751-3ee4-4404-99c7-06d02cec4f25-dns-swift-storage-0\") pod \"dnsmasq-dns-84b966f6c9-4w6tj\" (UID: \"0ea59751-3ee4-4404-99c7-06d02cec4f25\") " pod="openstack/dnsmasq-dns-84b966f6c9-4w6tj" Oct 13 17:40:37 crc kubenswrapper[4720]: I1013 17:40:37.845863 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ea59751-3ee4-4404-99c7-06d02cec4f25-config\") pod \"dnsmasq-dns-84b966f6c9-4w6tj\" (UID: \"0ea59751-3ee4-4404-99c7-06d02cec4f25\") " pod="openstack/dnsmasq-dns-84b966f6c9-4w6tj" Oct 13 17:40:37 crc kubenswrapper[4720]: I1013 17:40:37.845947 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ea59751-3ee4-4404-99c7-06d02cec4f25-dns-svc\") pod \"dnsmasq-dns-84b966f6c9-4w6tj\" (UID: \"0ea59751-3ee4-4404-99c7-06d02cec4f25\") " pod="openstack/dnsmasq-dns-84b966f6c9-4w6tj" Oct 13 17:40:37 crc kubenswrapper[4720]: I1013 17:40:37.845982 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9a12c67e-6055-4594-850a-61ac731d7a8d-config\") pod \"neutron-6579788cd4-czbf2\" (UID: \"9a12c67e-6055-4594-850a-61ac731d7a8d\") " pod="openstack/neutron-6579788cd4-czbf2" Oct 13 17:40:37 crc kubenswrapper[4720]: I1013 17:40:37.846016 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9a12c67e-6055-4594-850a-61ac731d7a8d-httpd-config\") pod \"neutron-6579788cd4-czbf2\" (UID: \"9a12c67e-6055-4594-850a-61ac731d7a8d\") " pod="openstack/neutron-6579788cd4-czbf2" Oct 13 17:40:37 crc kubenswrapper[4720]: I1013 17:40:37.846039 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/0ea59751-3ee4-4404-99c7-06d02cec4f25-ovsdbserver-nb\") pod \"dnsmasq-dns-84b966f6c9-4w6tj\" (UID: \"0ea59751-3ee4-4404-99c7-06d02cec4f25\") " pod="openstack/dnsmasq-dns-84b966f6c9-4w6tj" Oct 13 17:40:37 crc kubenswrapper[4720]: I1013 17:40:37.846063 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wml7\" (UniqueName: \"kubernetes.io/projected/0ea59751-3ee4-4404-99c7-06d02cec4f25-kube-api-access-2wml7\") pod \"dnsmasq-dns-84b966f6c9-4w6tj\" (UID: \"0ea59751-3ee4-4404-99c7-06d02cec4f25\") " pod="openstack/dnsmasq-dns-84b966f6c9-4w6tj" Oct 13 17:40:37 crc kubenswrapper[4720]: I1013 17:40:37.846081 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0ea59751-3ee4-4404-99c7-06d02cec4f25-ovsdbserver-sb\") pod \"dnsmasq-dns-84b966f6c9-4w6tj\" (UID: \"0ea59751-3ee4-4404-99c7-06d02cec4f25\") " pod="openstack/dnsmasq-dns-84b966f6c9-4w6tj" Oct 13 17:40:37 crc kubenswrapper[4720]: I1013 17:40:37.846131 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a12c67e-6055-4594-850a-61ac731d7a8d-combined-ca-bundle\") pod \"neutron-6579788cd4-czbf2\" (UID: \"9a12c67e-6055-4594-850a-61ac731d7a8d\") " pod="openstack/neutron-6579788cd4-czbf2" Oct 13 17:40:37 crc kubenswrapper[4720]: I1013 17:40:37.846360 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a12c67e-6055-4594-850a-61ac731d7a8d-ovndb-tls-certs\") pod \"neutron-6579788cd4-czbf2\" (UID: \"9a12c67e-6055-4594-850a-61ac731d7a8d\") " pod="openstack/neutron-6579788cd4-czbf2" Oct 13 17:40:37 crc kubenswrapper[4720]: I1013 17:40:37.846679 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ea59751-3ee4-4404-99c7-06d02cec4f25-config\") pod \"dnsmasq-dns-84b966f6c9-4w6tj\" (UID: \"0ea59751-3ee4-4404-99c7-06d02cec4f25\") " pod="openstack/dnsmasq-dns-84b966f6c9-4w6tj" Oct 13 17:40:37 crc kubenswrapper[4720]: I1013 17:40:37.848710 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0ea59751-3ee4-4404-99c7-06d02cec4f25-dns-swift-storage-0\") pod \"dnsmasq-dns-84b966f6c9-4w6tj\" (UID: \"0ea59751-3ee4-4404-99c7-06d02cec4f25\") " pod="openstack/dnsmasq-dns-84b966f6c9-4w6tj" Oct 13 17:40:37 crc kubenswrapper[4720]: I1013 17:40:37.856146 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ea59751-3ee4-4404-99c7-06d02cec4f25-dns-svc\") pod \"dnsmasq-dns-84b966f6c9-4w6tj\" (UID: \"0ea59751-3ee4-4404-99c7-06d02cec4f25\") " pod="openstack/dnsmasq-dns-84b966f6c9-4w6tj" Oct 13 17:40:37 crc kubenswrapper[4720]: I1013 17:40:37.857212 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0ea59751-3ee4-4404-99c7-06d02cec4f25-ovsdbserver-sb\") pod \"dnsmasq-dns-84b966f6c9-4w6tj\" (UID: \"0ea59751-3ee4-4404-99c7-06d02cec4f25\") " pod="openstack/dnsmasq-dns-84b966f6c9-4w6tj" Oct 13 17:40:37 crc kubenswrapper[4720]: I1013 17:40:37.857875 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/0ea59751-3ee4-4404-99c7-06d02cec4f25-ovsdbserver-nb\") pod \"dnsmasq-dns-84b966f6c9-4w6tj\" (UID: \"0ea59751-3ee4-4404-99c7-06d02cec4f25\") " pod="openstack/dnsmasq-dns-84b966f6c9-4w6tj" Oct 13 17:40:37 crc kubenswrapper[4720]: I1013 17:40:37.867384 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wml7\" (UniqueName: \"kubernetes.io/projected/0ea59751-3ee4-4404-99c7-06d02cec4f25-kube-api-access-2wml7\") pod \"dnsmasq-dns-84b966f6c9-4w6tj\" (UID: \"0ea59751-3ee4-4404-99c7-06d02cec4f25\") " pod="openstack/dnsmasq-dns-84b966f6c9-4w6tj" Oct 13 17:40:37 crc kubenswrapper[4720]: I1013 17:40:37.909939 4720 generic.go:334] "Generic (PLEG): container finished" podID="6d3623f2-ec27-4ad6-8cb4-553cb0527e15" containerID="828aebbcbe710f896ea2cef6a641aed91297652c0bdec01666e3cf684a4aece0" exitCode=2 Oct 13 17:40:37 crc kubenswrapper[4720]: I1013 17:40:37.909972 4720 generic.go:334] "Generic (PLEG): container finished" podID="6d3623f2-ec27-4ad6-8cb4-553cb0527e15" containerID="0edb2dae877587f39ad2f0b1c67183bdd28566eb1418aee78cff788789e0dd9c" exitCode=0 Oct 13 17:40:37 crc kubenswrapper[4720]: I1013 17:40:37.910015 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6d3623f2-ec27-4ad6-8cb4-553cb0527e15","Type":"ContainerDied","Data":"828aebbcbe710f896ea2cef6a641aed91297652c0bdec01666e3cf684a4aece0"} Oct 13 17:40:37 crc kubenswrapper[4720]: I1013 17:40:37.910043 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6d3623f2-ec27-4ad6-8cb4-553cb0527e15","Type":"ContainerDied","Data":"0edb2dae877587f39ad2f0b1c67183bdd28566eb1418aee78cff788789e0dd9c"} Oct 13 17:40:37 crc kubenswrapper[4720]: I1013 17:40:37.928721 4720 generic.go:334] "Generic (PLEG): container finished" podID="02414a49-0001-4f13-97cb-641937473fb6" containerID="c14fb6faa8b00dd9e510c736f924ec85392019e51fc549152333fb08188c5cf7" exitCode=137 Oct 13 17:40:37 crc kubenswrapper[4720]: I1013 17:40:37.928754 4720 generic.go:334] "Generic (PLEG): container finished" podID="02414a49-0001-4f13-97cb-641937473fb6" containerID="376eab0bc0716b64ad3f2fd2a966e08f673cb6d62ce53f42923577d22092150f" exitCode=137 Oct 13 17:40:37 crc kubenswrapper[4720]: I1013 17:40:37.929446 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7bf7d9f5bf-6khmt" event={"ID":"02414a49-0001-4f13-97cb-641937473fb6","Type":"ContainerDied","Data":"c14fb6faa8b00dd9e510c736f924ec85392019e51fc549152333fb08188c5cf7"} Oct 13 17:40:37 crc kubenswrapper[4720]: I1013 17:40:37.929481 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7bf7d9f5bf-6khmt" event={"ID":"02414a49-0001-4f13-97cb-641937473fb6","Type":"ContainerDied","Data":"376eab0bc0716b64ad3f2fd2a966e08f673cb6d62ce53f42923577d22092150f"} Oct 13 17:40:37 crc kubenswrapper[4720]: I1013 17:40:37.929493 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7bf7d9f5bf-6khmt" event={"ID":"02414a49-0001-4f13-97cb-641937473fb6","Type":"ContainerDied","Data":"019e17a4beba1065a350b97a47c64a8dc080c72b10f4c11fb15f29267226d39f"} Oct 13 17:40:37 crc kubenswrapper[4720]: I1013 17:40:37.929503 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="019e17a4beba1065a350b97a47c64a8dc080c72b10f4c11fb15f29267226d39f" Oct 13 17:40:37 crc kubenswrapper[4720]: I1013 17:40:37.947758 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/9a12c67e-6055-4594-850a-61ac731d7a8d-httpd-config\") pod \"neutron-6579788cd4-czbf2\" (UID: \"9a12c67e-6055-4594-850a-61ac731d7a8d\") " pod="openstack/neutron-6579788cd4-czbf2" Oct 13 17:40:37 crc kubenswrapper[4720]: I1013 17:40:37.947828 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a12c67e-6055-4594-850a-61ac731d7a8d-combined-ca-bundle\") pod \"neutron-6579788cd4-czbf2\" (UID: \"9a12c67e-6055-4594-850a-61ac731d7a8d\") " pod="openstack/neutron-6579788cd4-czbf2" Oct 13 17:40:37 crc kubenswrapper[4720]: I1013 17:40:37.947859 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a12c67e-6055-4594-850a-61ac731d7a8d-ovndb-tls-certs\") pod \"neutron-6579788cd4-czbf2\" (UID: \"9a12c67e-6055-4594-850a-61ac731d7a8d\") " pod="openstack/neutron-6579788cd4-czbf2" Oct 13 17:40:37 crc kubenswrapper[4720]: I1013 17:40:37.947906 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z86hb\" (UniqueName: \"kubernetes.io/projected/9a12c67e-6055-4594-850a-61ac731d7a8d-kube-api-access-z86hb\") pod \"neutron-6579788cd4-czbf2\" (UID: \"9a12c67e-6055-4594-850a-61ac731d7a8d\") " pod="openstack/neutron-6579788cd4-czbf2" Oct 13 17:40:37 crc kubenswrapper[4720]: I1013 17:40:37.948000 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9a12c67e-6055-4594-850a-61ac731d7a8d-config\") pod \"neutron-6579788cd4-czbf2\" (UID: \"9a12c67e-6055-4594-850a-61ac731d7a8d\") " pod="openstack/neutron-6579788cd4-czbf2" Oct 13 17:40:37 crc kubenswrapper[4720]: I1013 17:40:37.951993 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/9a12c67e-6055-4594-850a-61ac731d7a8d-config\") pod \"neutron-6579788cd4-czbf2\" (UID: \"9a12c67e-6055-4594-850a-61ac731d7a8d\") " pod="openstack/neutron-6579788cd4-czbf2" Oct 13 17:40:37 crc kubenswrapper[4720]: I1013 17:40:37.952454 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7bf7d9f5bf-6khmt" Oct 13 17:40:37 crc kubenswrapper[4720]: I1013 17:40:37.960814 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a12c67e-6055-4594-850a-61ac731d7a8d-ovndb-tls-certs\") pod \"neutron-6579788cd4-czbf2\" (UID: \"9a12c67e-6055-4594-850a-61ac731d7a8d\") " pod="openstack/neutron-6579788cd4-czbf2" Oct 13 17:40:37 crc kubenswrapper[4720]: I1013 17:40:37.961107 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a12c67e-6055-4594-850a-61ac731d7a8d-combined-ca-bundle\") pod \"neutron-6579788cd4-czbf2\" (UID: \"9a12c67e-6055-4594-850a-61ac731d7a8d\") " pod="openstack/neutron-6579788cd4-czbf2" Oct 13 17:40:37 crc kubenswrapper[4720]: I1013 17:40:37.962442 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9a12c67e-6055-4594-850a-61ac731d7a8d-httpd-config\") pod \"neutron-6579788cd4-czbf2\" (UID: \"9a12c67e-6055-4594-850a-61ac731d7a8d\") " pod="openstack/neutron-6579788cd4-czbf2" Oct 13 17:40:37 crc kubenswrapper[4720]: I1013 17:40:37.970062 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z86hb\" (UniqueName: \"kubernetes.io/projected/9a12c67e-6055-4594-850a-61ac731d7a8d-kube-api-access-z86hb\") pod \"neutron-6579788cd4-czbf2\" (UID: \"9a12c67e-6055-4594-850a-61ac731d7a8d\") " pod="openstack/neutron-6579788cd4-czbf2" Oct 13 17:40:38 crc kubenswrapper[4720]: I1013 17:40:38.049093 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/02414a49-0001-4f13-97cb-641937473fb6-scripts\") pod \"02414a49-0001-4f13-97cb-641937473fb6\" (UID: \"02414a49-0001-4f13-97cb-641937473fb6\") " Oct 13 17:40:38 crc kubenswrapper[4720]: I1013 17:40:38.049417 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02414a49-0001-4f13-97cb-641937473fb6-logs\") pod \"02414a49-0001-4f13-97cb-641937473fb6\" (UID: \"02414a49-0001-4f13-97cb-641937473fb6\") " Oct 13 17:40:38 crc kubenswrapper[4720]: I1013 17:40:38.049581 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nbks6\" (UniqueName: \"kubernetes.io/projected/02414a49-0001-4f13-97cb-641937473fb6-kube-api-access-nbks6\") pod \"02414a49-0001-4f13-97cb-641937473fb6\" (UID: \"02414a49-0001-4f13-97cb-641937473fb6\") " Oct 13 17:40:38 crc kubenswrapper[4720]: I1013 17:40:38.049715 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02414a49-0001-4f13-97cb-641937473fb6-logs" (OuterVolumeSpecName: "logs") pod "02414a49-0001-4f13-97cb-641937473fb6" (UID: "02414a49-0001-4f13-97cb-641937473fb6"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 17:40:38 crc kubenswrapper[4720]: I1013 17:40:38.050039 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/02414a49-0001-4f13-97cb-641937473fb6-config-data\") pod \"02414a49-0001-4f13-97cb-641937473fb6\" (UID: \"02414a49-0001-4f13-97cb-641937473fb6\") " Oct 13 17:40:38 crc kubenswrapper[4720]: I1013 17:40:38.050165 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/02414a49-0001-4f13-97cb-641937473fb6-horizon-secret-key\") pod \"02414a49-0001-4f13-97cb-641937473fb6\" (UID: \"02414a49-0001-4f13-97cb-641937473fb6\") " Oct 13 17:40:38 crc kubenswrapper[4720]: I1013 17:40:38.050753 4720 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02414a49-0001-4f13-97cb-641937473fb6-logs\") on node \"crc\" DevicePath \"\"" Oct 13 17:40:38 crc kubenswrapper[4720]: I1013 17:40:38.053037 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02414a49-0001-4f13-97cb-641937473fb6-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "02414a49-0001-4f13-97cb-641937473fb6" (UID: "02414a49-0001-4f13-97cb-641937473fb6"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:40:38 crc kubenswrapper[4720]: I1013 17:40:38.055557 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02414a49-0001-4f13-97cb-641937473fb6-kube-api-access-nbks6" (OuterVolumeSpecName: "kube-api-access-nbks6") pod "02414a49-0001-4f13-97cb-641937473fb6" (UID: "02414a49-0001-4f13-97cb-641937473fb6"). InnerVolumeSpecName "kube-api-access-nbks6". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:40:38 crc kubenswrapper[4720]: I1013 17:40:38.076775 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02414a49-0001-4f13-97cb-641937473fb6-config-data" (OuterVolumeSpecName: "config-data") pod "02414a49-0001-4f13-97cb-641937473fb6" (UID: "02414a49-0001-4f13-97cb-641937473fb6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:40:38 crc kubenswrapper[4720]: I1013 17:40:38.076943 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02414a49-0001-4f13-97cb-641937473fb6-scripts" (OuterVolumeSpecName: "scripts") pod "02414a49-0001-4f13-97cb-641937473fb6" (UID: "02414a49-0001-4f13-97cb-641937473fb6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:40:38 crc kubenswrapper[4720]: I1013 17:40:38.079859 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84b966f6c9-4w6tj" Oct 13 17:40:38 crc kubenswrapper[4720]: I1013 17:40:38.108014 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6579788cd4-czbf2" Oct 13 17:40:38 crc kubenswrapper[4720]: I1013 17:40:38.152834 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nbks6\" (UniqueName: \"kubernetes.io/projected/02414a49-0001-4f13-97cb-641937473fb6-kube-api-access-nbks6\") on node \"crc\" DevicePath \"\"" Oct 13 17:40:38 crc kubenswrapper[4720]: I1013 17:40:38.152880 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/02414a49-0001-4f13-97cb-641937473fb6-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 17:40:38 crc kubenswrapper[4720]: I1013 17:40:38.152910 4720 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/02414a49-0001-4f13-97cb-641937473fb6-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Oct 13 17:40:38 crc kubenswrapper[4720]: I1013 17:40:38.152921 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/02414a49-0001-4f13-97cb-641937473fb6-scripts\") on node \"crc\" DevicePath \"\"" Oct 13 17:40:38 crc kubenswrapper[4720]: I1013 17:40:38.361182 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-4w6tj"] Oct 13 17:40:38 crc kubenswrapper[4720]: W1013 17:40:38.374805 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ea59751_3ee4_4404_99c7_06d02cec4f25.slice/crio-f78065522dec3a068b6763b3c9c05b8c966089facf5fceb1770ef511466e1918 WatchSource:0}: Error finding container f78065522dec3a068b6763b3c9c05b8c966089facf5fceb1770ef511466e1918: Status 404 returned error can't find the container with id f78065522dec3a068b6763b3c9c05b8c966089facf5fceb1770ef511466e1918 Oct 13 17:40:38 crc kubenswrapper[4720]: I1013 17:40:38.748575 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6579788cd4-czbf2"] Oct 13 17:40:38 crc kubenswrapper[4720]: I1013 17:40:38.940548 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6579788cd4-czbf2" event={"ID":"9a12c67e-6055-4594-850a-61ac731d7a8d","Type":"ContainerStarted","Data":"929b941fbabd15a3a11f8e9db999ac85bec2aac99787b7835b38aecf06b96d9b"} Oct 13 17:40:38 crc kubenswrapper[4720]: I1013 17:40:38.940608 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6579788cd4-czbf2" event={"ID":"9a12c67e-6055-4594-850a-61ac731d7a8d","Type":"ContainerStarted","Data":"a5648ae8657c25e667791994f276dbe90d8850aab8e2af589687e69887b37417"} Oct 13 17:40:38 crc kubenswrapper[4720]: I1013 17:40:38.944919 4720 generic.go:334] "Generic (PLEG): container finished" podID="0ea59751-3ee4-4404-99c7-06d02cec4f25" containerID="be0f0971613567094a3bb4aea84a21d6203c600885ecd9cd30e1418360503828" exitCode=0 Oct 13 17:40:38 crc kubenswrapper[4720]: I1013 17:40:38.944969 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-4w6tj" event={"ID":"0ea59751-3ee4-4404-99c7-06d02cec4f25","Type":"ContainerDied","Data":"be0f0971613567094a3bb4aea84a21d6203c600885ecd9cd30e1418360503828"} Oct 13 17:40:38 crc kubenswrapper[4720]: I1013 17:40:38.945016 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-4w6tj" event={"ID":"0ea59751-3ee4-4404-99c7-06d02cec4f25","Type":"ContainerStarted","Data":"f78065522dec3a068b6763b3c9c05b8c966089facf5fceb1770ef511466e1918"} Oct 13 17:40:38 crc kubenswrapper[4720]: I1013 
17:40:38.945031 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7bf7d9f5bf-6khmt" Oct 13 17:40:39 crc kubenswrapper[4720]: I1013 17:40:39.003943 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7bf7d9f5bf-6khmt"] Oct 13 17:40:39 crc kubenswrapper[4720]: I1013 17:40:39.037032 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7bf7d9f5bf-6khmt"] Oct 13 17:40:39 crc kubenswrapper[4720]: I1013 17:40:39.183801 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02414a49-0001-4f13-97cb-641937473fb6" path="/var/lib/kubelet/pods/02414a49-0001-4f13-97cb-641937473fb6/volumes" Oct 13 17:40:39 crc kubenswrapper[4720]: I1013 17:40:39.954998 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6579788cd4-czbf2" event={"ID":"9a12c67e-6055-4594-850a-61ac731d7a8d","Type":"ContainerStarted","Data":"d6124ef920638a00efcc516961d520ffae3592268b676ed7b8bf8600cbd993de"} Oct 13 17:40:39 crc kubenswrapper[4720]: I1013 17:40:39.956369 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6579788cd4-czbf2" Oct 13 17:40:39 crc kubenswrapper[4720]: I1013 17:40:39.959742 4720 generic.go:334] "Generic (PLEG): container finished" podID="139c2e02-2c20-4a21-a5c0-753c6003473b" containerID="b16a8f5b551805315d0632eef1b1948aa4930fa2db29ab6d2fe042db2c832eb1" exitCode=0 Oct 13 17:40:39 crc kubenswrapper[4720]: I1013 17:40:39.959785 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7df8489788-ntn24" event={"ID":"139c2e02-2c20-4a21-a5c0-753c6003473b","Type":"ContainerDied","Data":"b16a8f5b551805315d0632eef1b1948aa4930fa2db29ab6d2fe042db2c832eb1"} Oct 13 17:40:39 crc kubenswrapper[4720]: I1013 17:40:39.961959 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-4w6tj" event={"ID":"0ea59751-3ee4-4404-99c7-06d02cec4f25","Type":"ContainerStarted","Data":"d26c25ef68a2035e0c52772c7d3c704b692d75fb89c945690a7cb19b87df2d23"} Oct 13 17:40:39 crc kubenswrapper[4720]: I1013 17:40:39.962662 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-84b966f6c9-4w6tj" Oct 13 17:40:39 crc kubenswrapper[4720]: I1013 17:40:39.964535 4720 generic.go:334] "Generic (PLEG): container finished" podID="69895abb-fedb-4ef1-bd37-698c2384d0b0" containerID="b12eda37cb379a954f446cc232021f40074563481f329d306d32f89488d64fdb" exitCode=0 Oct 13 17:40:39 crc kubenswrapper[4720]: I1013 17:40:39.964607 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-fj7h4" event={"ID":"69895abb-fedb-4ef1-bd37-698c2384d0b0","Type":"ContainerDied","Data":"b12eda37cb379a954f446cc232021f40074563481f329d306d32f89488d64fdb"} Oct 13 17:40:39 crc kubenswrapper[4720]: I1013 17:40:39.985454 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6579788cd4-czbf2" podStartSLOduration=2.985433516 podStartE2EDuration="2.985433516s" podCreationTimestamp="2025-10-13 17:40:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:40:39.983047324 +0000 UTC m=+985.440297456" watchObservedRunningTime="2025-10-13 17:40:39.985433516 +0000 UTC m=+985.442683658" Oct 13 17:40:40 crc kubenswrapper[4720]: I1013 17:40:40.011718 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/dnsmasq-dns-84b966f6c9-4w6tj" podStartSLOduration=3.011695261 podStartE2EDuration="3.011695261s" podCreationTimestamp="2025-10-13 17:40:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:40:40.002657559 +0000 UTC m=+985.459907711" watchObservedRunningTime="2025-10-13 17:40:40.011695261 +0000 UTC m=+985.468945403" Oct 13 17:40:40 crc kubenswrapper[4720]: I1013 17:40:40.205937 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7c7ddff655-r8ln9"] Oct 13 17:40:40 crc kubenswrapper[4720]: E1013 17:40:40.206273 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02414a49-0001-4f13-97cb-641937473fb6" containerName="horizon" Oct 13 17:40:40 crc kubenswrapper[4720]: I1013 17:40:40.206289 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="02414a49-0001-4f13-97cb-641937473fb6" containerName="horizon" Oct 13 17:40:40 crc kubenswrapper[4720]: E1013 17:40:40.206320 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02414a49-0001-4f13-97cb-641937473fb6" containerName="horizon-log" Oct 13 17:40:40 crc kubenswrapper[4720]: I1013 17:40:40.206326 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="02414a49-0001-4f13-97cb-641937473fb6" containerName="horizon-log" Oct 13 17:40:40 crc kubenswrapper[4720]: I1013 17:40:40.206465 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="02414a49-0001-4f13-97cb-641937473fb6" containerName="horizon-log" Oct 13 17:40:40 crc kubenswrapper[4720]: I1013 17:40:40.206488 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="02414a49-0001-4f13-97cb-641937473fb6" containerName="horizon" Oct 13 17:40:40 crc kubenswrapper[4720]: I1013 17:40:40.212561 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7c7ddff655-r8ln9" Oct 13 17:40:40 crc kubenswrapper[4720]: I1013 17:40:40.214609 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Oct 13 17:40:40 crc kubenswrapper[4720]: I1013 17:40:40.222407 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Oct 13 17:40:40 crc kubenswrapper[4720]: I1013 17:40:40.255981 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7c7ddff655-r8ln9"] Oct 13 17:40:40 crc kubenswrapper[4720]: I1013 17:40:40.392232 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/782a78a0-312e-4337-9397-9b476f51f7a8-public-tls-certs\") pod \"neutron-7c7ddff655-r8ln9\" (UID: \"782a78a0-312e-4337-9397-9b476f51f7a8\") " pod="openstack/neutron-7c7ddff655-r8ln9" Oct 13 17:40:40 crc kubenswrapper[4720]: I1013 17:40:40.392319 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/782a78a0-312e-4337-9397-9b476f51f7a8-combined-ca-bundle\") pod \"neutron-7c7ddff655-r8ln9\" (UID: \"782a78a0-312e-4337-9397-9b476f51f7a8\") " pod="openstack/neutron-7c7ddff655-r8ln9" Oct 13 17:40:40 crc kubenswrapper[4720]: I1013 17:40:40.392438 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhjw2\" (UniqueName: \"kubernetes.io/projected/782a78a0-312e-4337-9397-9b476f51f7a8-kube-api-access-mhjw2\") pod \"neutron-7c7ddff655-r8ln9\" (UID: \"782a78a0-312e-4337-9397-9b476f51f7a8\") " pod="openstack/neutron-7c7ddff655-r8ln9" Oct 13 17:40:40 crc kubenswrapper[4720]: I1013 17:40:40.392529 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/782a78a0-312e-4337-9397-9b476f51f7a8-httpd-config\") pod \"neutron-7c7ddff655-r8ln9\" (UID: \"782a78a0-312e-4337-9397-9b476f51f7a8\") " pod="openstack/neutron-7c7ddff655-r8ln9" Oct 13 17:40:40 crc kubenswrapper[4720]: I1013 17:40:40.392695 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/782a78a0-312e-4337-9397-9b476f51f7a8-internal-tls-certs\") pod \"neutron-7c7ddff655-r8ln9\" (UID: \"782a78a0-312e-4337-9397-9b476f51f7a8\") " pod="openstack/neutron-7c7ddff655-r8ln9" Oct 13 17:40:40 crc kubenswrapper[4720]: I1013 17:40:40.392897 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/782a78a0-312e-4337-9397-9b476f51f7a8-config\") pod \"neutron-7c7ddff655-r8ln9\" (UID: \"782a78a0-312e-4337-9397-9b476f51f7a8\") " pod="openstack/neutron-7c7ddff655-r8ln9" Oct 13 17:40:40 crc kubenswrapper[4720]: I1013 17:40:40.392973 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/782a78a0-312e-4337-9397-9b476f51f7a8-ovndb-tls-certs\") pod \"neutron-7c7ddff655-r8ln9\" (UID: \"782a78a0-312e-4337-9397-9b476f51f7a8\") " pod="openstack/neutron-7c7ddff655-r8ln9" Oct 13 17:40:40 crc kubenswrapper[4720]: I1013 17:40:40.494583 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/782a78a0-312e-4337-9397-9b476f51f7a8-httpd-config\") pod \"neutron-7c7ddff655-r8ln9\" (UID: \"782a78a0-312e-4337-9397-9b476f51f7a8\") " pod="openstack/neutron-7c7ddff655-r8ln9" Oct 13 17:40:40 crc kubenswrapper[4720]: I1013 17:40:40.496103 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/782a78a0-312e-4337-9397-9b476f51f7a8-internal-tls-certs\") pod \"neutron-7c7ddff655-r8ln9\" (UID: \"782a78a0-312e-4337-9397-9b476f51f7a8\") " pod="openstack/neutron-7c7ddff655-r8ln9" Oct 13 17:40:40 crc kubenswrapper[4720]: I1013 17:40:40.496162 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/782a78a0-312e-4337-9397-9b476f51f7a8-config\") pod \"neutron-7c7ddff655-r8ln9\" (UID: \"782a78a0-312e-4337-9397-9b476f51f7a8\") " pod="openstack/neutron-7c7ddff655-r8ln9" Oct 13 17:40:40 crc kubenswrapper[4720]: I1013 17:40:40.496277 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/782a78a0-312e-4337-9397-9b476f51f7a8-ovndb-tls-certs\") pod \"neutron-7c7ddff655-r8ln9\" (UID: \"782a78a0-312e-4337-9397-9b476f51f7a8\") " pod="openstack/neutron-7c7ddff655-r8ln9" Oct 13 17:40:40 crc kubenswrapper[4720]: I1013 17:40:40.496319 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/782a78a0-312e-4337-9397-9b476f51f7a8-public-tls-certs\") pod \"neutron-7c7ddff655-r8ln9\" (UID: \"782a78a0-312e-4337-9397-9b476f51f7a8\") " pod="openstack/neutron-7c7ddff655-r8ln9" Oct 13 17:40:40 crc kubenswrapper[4720]: I1013 17:40:40.496384 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/782a78a0-312e-4337-9397-9b476f51f7a8-combined-ca-bundle\") pod \"neutron-7c7ddff655-r8ln9\" (UID: \"782a78a0-312e-4337-9397-9b476f51f7a8\") " pod="openstack/neutron-7c7ddff655-r8ln9" Oct 13 17:40:40 crc kubenswrapper[4720]: I1013 17:40:40.496430 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhjw2\" (UniqueName: \"kubernetes.io/projected/782a78a0-312e-4337-9397-9b476f51f7a8-kube-api-access-mhjw2\") pod \"neutron-7c7ddff655-r8ln9\" (UID: \"782a78a0-312e-4337-9397-9b476f51f7a8\") " pod="openstack/neutron-7c7ddff655-r8ln9" Oct 13 17:40:40 crc kubenswrapper[4720]: I1013 17:40:40.502460 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/782a78a0-312e-4337-9397-9b476f51f7a8-httpd-config\") pod \"neutron-7c7ddff655-r8ln9\" (UID: \"782a78a0-312e-4337-9397-9b476f51f7a8\") " pod="openstack/neutron-7c7ddff655-r8ln9" Oct 13 17:40:40 crc kubenswrapper[4720]: I1013 17:40:40.503630 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/782a78a0-312e-4337-9397-9b476f51f7a8-public-tls-certs\") pod \"neutron-7c7ddff655-r8ln9\" (UID: \"782a78a0-312e-4337-9397-9b476f51f7a8\") " pod="openstack/neutron-7c7ddff655-r8ln9" Oct 13 17:40:40 crc kubenswrapper[4720]: I1013 17:40:40.504369 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/782a78a0-312e-4337-9397-9b476f51f7a8-config\") pod \"neutron-7c7ddff655-r8ln9\" (UID: \"782a78a0-312e-4337-9397-9b476f51f7a8\") " 
pod="openstack/neutron-7c7ddff655-r8ln9" Oct 13 17:40:40 crc kubenswrapper[4720]: I1013 17:40:40.505167 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/782a78a0-312e-4337-9397-9b476f51f7a8-ovndb-tls-certs\") pod \"neutron-7c7ddff655-r8ln9\" (UID: \"782a78a0-312e-4337-9397-9b476f51f7a8\") " pod="openstack/neutron-7c7ddff655-r8ln9" Oct 13 17:40:40 crc kubenswrapper[4720]: I1013 17:40:40.507900 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/782a78a0-312e-4337-9397-9b476f51f7a8-internal-tls-certs\") pod \"neutron-7c7ddff655-r8ln9\" (UID: \"782a78a0-312e-4337-9397-9b476f51f7a8\") " pod="openstack/neutron-7c7ddff655-r8ln9" Oct 13 17:40:40 crc kubenswrapper[4720]: I1013 17:40:40.509684 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/782a78a0-312e-4337-9397-9b476f51f7a8-combined-ca-bundle\") pod \"neutron-7c7ddff655-r8ln9\" (UID: \"782a78a0-312e-4337-9397-9b476f51f7a8\") " pod="openstack/neutron-7c7ddff655-r8ln9" Oct 13 17:40:40 crc kubenswrapper[4720]: I1013 17:40:40.525178 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhjw2\" (UniqueName: \"kubernetes.io/projected/782a78a0-312e-4337-9397-9b476f51f7a8-kube-api-access-mhjw2\") pod \"neutron-7c7ddff655-r8ln9\" (UID: \"782a78a0-312e-4337-9397-9b476f51f7a8\") " pod="openstack/neutron-7c7ddff655-r8ln9" Oct 13 17:40:40 crc kubenswrapper[4720]: I1013 17:40:40.548616 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7c7ddff655-r8ln9" Oct 13 17:40:41 crc kubenswrapper[4720]: I1013 17:40:41.181763 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7c7ddff655-r8ln9"] Oct 13 17:40:41 crc kubenswrapper[4720]: I1013 17:40:41.393913 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-fj7h4" Oct 13 17:40:41 crc kubenswrapper[4720]: I1013 17:40:41.522076 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69895abb-fedb-4ef1-bd37-698c2384d0b0-combined-ca-bundle\") pod \"69895abb-fedb-4ef1-bd37-698c2384d0b0\" (UID: \"69895abb-fedb-4ef1-bd37-698c2384d0b0\") " Oct 13 17:40:41 crc kubenswrapper[4720]: I1013 17:40:41.522337 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jwswq\" (UniqueName: \"kubernetes.io/projected/69895abb-fedb-4ef1-bd37-698c2384d0b0-kube-api-access-jwswq\") pod \"69895abb-fedb-4ef1-bd37-698c2384d0b0\" (UID: \"69895abb-fedb-4ef1-bd37-698c2384d0b0\") " Oct 13 17:40:41 crc kubenswrapper[4720]: I1013 17:40:41.522462 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/69895abb-fedb-4ef1-bd37-698c2384d0b0-db-sync-config-data\") pod \"69895abb-fedb-4ef1-bd37-698c2384d0b0\" (UID: \"69895abb-fedb-4ef1-bd37-698c2384d0b0\") " Oct 13 17:40:41 crc kubenswrapper[4720]: I1013 17:40:41.526595 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69895abb-fedb-4ef1-bd37-698c2384d0b0-kube-api-access-jwswq" (OuterVolumeSpecName: "kube-api-access-jwswq") pod "69895abb-fedb-4ef1-bd37-698c2384d0b0" (UID: "69895abb-fedb-4ef1-bd37-698c2384d0b0"). 
InnerVolumeSpecName "kube-api-access-jwswq". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:40:41 crc kubenswrapper[4720]: I1013 17:40:41.528498 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69895abb-fedb-4ef1-bd37-698c2384d0b0-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "69895abb-fedb-4ef1-bd37-698c2384d0b0" (UID: "69895abb-fedb-4ef1-bd37-698c2384d0b0"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:40:41 crc kubenswrapper[4720]: I1013 17:40:41.559349 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69895abb-fedb-4ef1-bd37-698c2384d0b0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "69895abb-fedb-4ef1-bd37-698c2384d0b0" (UID: "69895abb-fedb-4ef1-bd37-698c2384d0b0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:40:41 crc kubenswrapper[4720]: I1013 17:40:41.624293 4720 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/69895abb-fedb-4ef1-bd37-698c2384d0b0-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 17:40:41 crc kubenswrapper[4720]: I1013 17:40:41.624327 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69895abb-fedb-4ef1-bd37-698c2384d0b0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 17:40:41 crc kubenswrapper[4720]: I1013 17:40:41.624340 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jwswq\" (UniqueName: \"kubernetes.io/projected/69895abb-fedb-4ef1-bd37-698c2384d0b0-kube-api-access-jwswq\") on node \"crc\" DevicePath \"\"" Oct 13 17:40:41 crc kubenswrapper[4720]: I1013 17:40:41.860908 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 13 17:40:41 crc kubenswrapper[4720]: I1013 17:40:41.983135 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7c7ddff655-r8ln9" event={"ID":"782a78a0-312e-4337-9397-9b476f51f7a8","Type":"ContainerStarted","Data":"46aa40e9bb674ed63da12ef2dfa19f6f2d7e0006f4d6482ea875c2a5a844aec8"} Oct 13 17:40:41 crc kubenswrapper[4720]: I1013 17:40:41.983179 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7c7ddff655-r8ln9" event={"ID":"782a78a0-312e-4337-9397-9b476f51f7a8","Type":"ContainerStarted","Data":"e8afc0f1108996f42a9cf0925f275cd9eb4d56523898df2fed9c442bda0626c8"} Oct 13 17:40:41 crc kubenswrapper[4720]: I1013 17:40:41.983204 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7c7ddff655-r8ln9" event={"ID":"782a78a0-312e-4337-9397-9b476f51f7a8","Type":"ContainerStarted","Data":"8c0185da4b54b595409249dca8378608db851e5e03ccb06d74e8aae63822d967"} Oct 13 17:40:41 crc kubenswrapper[4720]: I1013 17:40:41.984212 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7c7ddff655-r8ln9" Oct 13 17:40:41 crc kubenswrapper[4720]: I1013 17:40:41.985706 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-fj7h4" event={"ID":"69895abb-fedb-4ef1-bd37-698c2384d0b0","Type":"ContainerDied","Data":"56177a61e8dd9a42da9499c5d2cddefffbbf15eced12976c271411b51fe9e3da"} Oct 13 17:40:41 crc kubenswrapper[4720]: I1013 17:40:41.985733 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56177a61e8dd9a42da9499c5d2cddefffbbf15eced12976c271411b51fe9e3da" Oct 13 17:40:41 crc kubenswrapper[4720]: I1013 17:40:41.985770 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-fj7h4" Oct 13 17:40:41 crc kubenswrapper[4720]: I1013 17:40:41.988102 4720 generic.go:334] "Generic (PLEG): container finished" podID="6d3623f2-ec27-4ad6-8cb4-553cb0527e15" containerID="f9e8de6048bec5a4704aa9fca47068f43b9d94f967db4928fc0e4d17006ef487" exitCode=0 Oct 13 17:40:41 crc kubenswrapper[4720]: I1013 17:40:41.988646 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 13 17:40:41 crc kubenswrapper[4720]: I1013 17:40:41.989213 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6d3623f2-ec27-4ad6-8cb4-553cb0527e15","Type":"ContainerDied","Data":"f9e8de6048bec5a4704aa9fca47068f43b9d94f967db4928fc0e4d17006ef487"} Oct 13 17:40:41 crc kubenswrapper[4720]: I1013 17:40:41.989271 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6d3623f2-ec27-4ad6-8cb4-553cb0527e15","Type":"ContainerDied","Data":"483488aa8542aa83f80a1ea3c1fb9658b9062604a295cb5d64c257d4231ab072"} Oct 13 17:40:41 crc kubenswrapper[4720]: I1013 17:40:41.989291 4720 scope.go:117] "RemoveContainer" containerID="828aebbcbe710f896ea2cef6a641aed91297652c0bdec01666e3cf684a4aece0" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.007774 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7c7ddff655-r8ln9" podStartSLOduration=2.007753732 podStartE2EDuration="2.007753732s" podCreationTimestamp="2025-10-13 17:40:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:40:42.003101223 +0000 UTC m=+987.460351355" watchObservedRunningTime="2025-10-13 17:40:42.007753732 +0000 UTC m=+987.465003864" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.012142 4720 scope.go:117] "RemoveContainer" containerID="f9e8de6048bec5a4704aa9fca47068f43b9d94f967db4928fc0e4d17006ef487" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.035138 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6d3623f2-ec27-4ad6-8cb4-553cb0527e15-sg-core-conf-yaml\") pod \"6d3623f2-ec27-4ad6-8cb4-553cb0527e15\" (UID: \"6d3623f2-ec27-4ad6-8cb4-553cb0527e15\") " Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.035275 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d3623f2-ec27-4ad6-8cb4-553cb0527e15-config-data\") pod \"6d3623f2-ec27-4ad6-8cb4-553cb0527e15\" (UID: \"6d3623f2-ec27-4ad6-8cb4-553cb0527e15\") " Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.035335 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d3623f2-ec27-4ad6-8cb4-553cb0527e15-combined-ca-bundle\") pod \"6d3623f2-ec27-4ad6-8cb4-553cb0527e15\" (UID: \"6d3623f2-ec27-4ad6-8cb4-553cb0527e15\") " Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.035404 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6d3623f2-ec27-4ad6-8cb4-553cb0527e15-run-httpd\") pod \"6d3623f2-ec27-4ad6-8cb4-553cb0527e15\" (UID: \"6d3623f2-ec27-4ad6-8cb4-553cb0527e15\") " Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.035445 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4bth4\" (UniqueName: \"kubernetes.io/projected/6d3623f2-ec27-4ad6-8cb4-553cb0527e15-kube-api-access-4bth4\") pod \"6d3623f2-ec27-4ad6-8cb4-553cb0527e15\" (UID: \"6d3623f2-ec27-4ad6-8cb4-553cb0527e15\") " Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.035541 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/6d3623f2-ec27-4ad6-8cb4-553cb0527e15-scripts\") pod \"6d3623f2-ec27-4ad6-8cb4-553cb0527e15\" (UID: \"6d3623f2-ec27-4ad6-8cb4-553cb0527e15\") " Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.035559 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6d3623f2-ec27-4ad6-8cb4-553cb0527e15-log-httpd\") pod \"6d3623f2-ec27-4ad6-8cb4-553cb0527e15\" (UID: \"6d3623f2-ec27-4ad6-8cb4-553cb0527e15\") " Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.035933 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d3623f2-ec27-4ad6-8cb4-553cb0527e15-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "6d3623f2-ec27-4ad6-8cb4-553cb0527e15" (UID: "6d3623f2-ec27-4ad6-8cb4-553cb0527e15"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.036064 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d3623f2-ec27-4ad6-8cb4-553cb0527e15-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "6d3623f2-ec27-4ad6-8cb4-553cb0527e15" (UID: "6d3623f2-ec27-4ad6-8cb4-553cb0527e15"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.036532 4720 scope.go:117] "RemoveContainer" containerID="0edb2dae877587f39ad2f0b1c67183bdd28566eb1418aee78cff788789e0dd9c" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.048028 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d3623f2-ec27-4ad6-8cb4-553cb0527e15-scripts" (OuterVolumeSpecName: "scripts") pod "6d3623f2-ec27-4ad6-8cb4-553cb0527e15" (UID: "6d3623f2-ec27-4ad6-8cb4-553cb0527e15"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.048178 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d3623f2-ec27-4ad6-8cb4-553cb0527e15-kube-api-access-4bth4" (OuterVolumeSpecName: "kube-api-access-4bth4") pod "6d3623f2-ec27-4ad6-8cb4-553cb0527e15" (UID: "6d3623f2-ec27-4ad6-8cb4-553cb0527e15"). InnerVolumeSpecName "kube-api-access-4bth4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.063203 4720 scope.go:117] "RemoveContainer" containerID="828aebbcbe710f896ea2cef6a641aed91297652c0bdec01666e3cf684a4aece0" Oct 13 17:40:42 crc kubenswrapper[4720]: E1013 17:40:42.063635 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"828aebbcbe710f896ea2cef6a641aed91297652c0bdec01666e3cf684a4aece0\": container with ID starting with 828aebbcbe710f896ea2cef6a641aed91297652c0bdec01666e3cf684a4aece0 not found: ID does not exist" containerID="828aebbcbe710f896ea2cef6a641aed91297652c0bdec01666e3cf684a4aece0" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.063698 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"828aebbcbe710f896ea2cef6a641aed91297652c0bdec01666e3cf684a4aece0"} err="failed to get container status \"828aebbcbe710f896ea2cef6a641aed91297652c0bdec01666e3cf684a4aece0\": rpc error: code = NotFound desc = could not find container \"828aebbcbe710f896ea2cef6a641aed91297652c0bdec01666e3cf684a4aece0\": container with ID starting with 828aebbcbe710f896ea2cef6a641aed91297652c0bdec01666e3cf684a4aece0 not found: ID does not exist" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.063725 4720 scope.go:117] "RemoveContainer" containerID="f9e8de6048bec5a4704aa9fca47068f43b9d94f967db4928fc0e4d17006ef487" Oct 13 17:40:42 crc kubenswrapper[4720]: E1013 17:40:42.064071 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9e8de6048bec5a4704aa9fca47068f43b9d94f967db4928fc0e4d17006ef487\": container with ID starting with f9e8de6048bec5a4704aa9fca47068f43b9d94f967db4928fc0e4d17006ef487 not found: ID does not exist" containerID="f9e8de6048bec5a4704aa9fca47068f43b9d94f967db4928fc0e4d17006ef487" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.064133 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9e8de6048bec5a4704aa9fca47068f43b9d94f967db4928fc0e4d17006ef487"} err="failed to get container status \"f9e8de6048bec5a4704aa9fca47068f43b9d94f967db4928fc0e4d17006ef487\": rpc error: code = NotFound desc = could not find container \"f9e8de6048bec5a4704aa9fca47068f43b9d94f967db4928fc0e4d17006ef487\": container with ID starting with f9e8de6048bec5a4704aa9fca47068f43b9d94f967db4928fc0e4d17006ef487 not found: ID does not exist" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.064163 4720 scope.go:117] "RemoveContainer" containerID="0edb2dae877587f39ad2f0b1c67183bdd28566eb1418aee78cff788789e0dd9c" Oct 13 17:40:42 crc kubenswrapper[4720]: E1013 17:40:42.064498 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0edb2dae877587f39ad2f0b1c67183bdd28566eb1418aee78cff788789e0dd9c\": container with ID starting with 0edb2dae877587f39ad2f0b1c67183bdd28566eb1418aee78cff788789e0dd9c not found: ID does not exist" containerID="0edb2dae877587f39ad2f0b1c67183bdd28566eb1418aee78cff788789e0dd9c" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.064618 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0edb2dae877587f39ad2f0b1c67183bdd28566eb1418aee78cff788789e0dd9c"} err="failed to get container status \"0edb2dae877587f39ad2f0b1c67183bdd28566eb1418aee78cff788789e0dd9c\": rpc error: code = NotFound desc = could not 
find container \"0edb2dae877587f39ad2f0b1c67183bdd28566eb1418aee78cff788789e0dd9c\": container with ID starting with 0edb2dae877587f39ad2f0b1c67183bdd28566eb1418aee78cff788789e0dd9c not found: ID does not exist" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.070405 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d3623f2-ec27-4ad6-8cb4-553cb0527e15-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "6d3623f2-ec27-4ad6-8cb4-553cb0527e15" (UID: "6d3623f2-ec27-4ad6-8cb4-553cb0527e15"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.089147 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7df8489788-ntn24" podUID="139c2e02-2c20-4a21-a5c0-753c6003473b" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.147:8443: connect: connection refused" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.099498 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d3623f2-ec27-4ad6-8cb4-553cb0527e15-config-data" (OuterVolumeSpecName: "config-data") pod "6d3623f2-ec27-4ad6-8cb4-553cb0527e15" (UID: "6d3623f2-ec27-4ad6-8cb4-553cb0527e15"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.106088 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d3623f2-ec27-4ad6-8cb4-553cb0527e15-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6d3623f2-ec27-4ad6-8cb4-553cb0527e15" (UID: "6d3623f2-ec27-4ad6-8cb4-553cb0527e15"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.154812 4720 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6d3623f2-ec27-4ad6-8cb4-553cb0527e15-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.154926 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4bth4\" (UniqueName: \"kubernetes.io/projected/6d3623f2-ec27-4ad6-8cb4-553cb0527e15-kube-api-access-4bth4\") on node \"crc\" DevicePath \"\"" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.154939 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d3623f2-ec27-4ad6-8cb4-553cb0527e15-scripts\") on node \"crc\" DevicePath \"\"" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.154948 4720 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6d3623f2-ec27-4ad6-8cb4-553cb0527e15-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.154956 4720 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6d3623f2-ec27-4ad6-8cb4-553cb0527e15-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.154964 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d3623f2-ec27-4ad6-8cb4-553cb0527e15-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.154972 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d3623f2-ec27-4ad6-8cb4-553cb0527e15-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.244277 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-77879c6d6f-fqm88"] Oct 13 17:40:42 crc kubenswrapper[4720]: E1013 17:40:42.246401 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d3623f2-ec27-4ad6-8cb4-553cb0527e15" containerName="ceilometer-notification-agent" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.246547 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d3623f2-ec27-4ad6-8cb4-553cb0527e15" containerName="ceilometer-notification-agent" Oct 13 17:40:42 crc kubenswrapper[4720]: E1013 17:40:42.246652 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d3623f2-ec27-4ad6-8cb4-553cb0527e15" containerName="sg-core" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.246748 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d3623f2-ec27-4ad6-8cb4-553cb0527e15" containerName="sg-core" Oct 13 17:40:42 crc kubenswrapper[4720]: E1013 17:40:42.246864 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d3623f2-ec27-4ad6-8cb4-553cb0527e15" containerName="ceilometer-central-agent" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.246952 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d3623f2-ec27-4ad6-8cb4-553cb0527e15" containerName="ceilometer-central-agent" Oct 13 17:40:42 crc kubenswrapper[4720]: E1013 17:40:42.247070 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69895abb-fedb-4ef1-bd37-698c2384d0b0" containerName="barbican-db-sync" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 
17:40:42.247145 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="69895abb-fedb-4ef1-bd37-698c2384d0b0" containerName="barbican-db-sync" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.247500 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="69895abb-fedb-4ef1-bd37-698c2384d0b0" containerName="barbican-db-sync" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.247654 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d3623f2-ec27-4ad6-8cb4-553cb0527e15" containerName="ceilometer-notification-agent" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.247752 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d3623f2-ec27-4ad6-8cb4-553cb0527e15" containerName="ceilometer-central-agent" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.247829 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d3623f2-ec27-4ad6-8cb4-553cb0527e15" containerName="sg-core" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.249219 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-77879c6d6f-fqm88" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.252142 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-fj7nh" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.252383 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.256110 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f39030ec-2975-459a-972e-4928cb31e15a-combined-ca-bundle\") pod \"barbican-worker-77879c6d6f-fqm88\" (UID: \"f39030ec-2975-459a-972e-4928cb31e15a\") " pod="openstack/barbican-worker-77879c6d6f-fqm88" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.256178 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f39030ec-2975-459a-972e-4928cb31e15a-config-data-custom\") pod \"barbican-worker-77879c6d6f-fqm88\" (UID: \"f39030ec-2975-459a-972e-4928cb31e15a\") " pod="openstack/barbican-worker-77879c6d6f-fqm88" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.256239 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f39030ec-2975-459a-972e-4928cb31e15a-logs\") pod \"barbican-worker-77879c6d6f-fqm88\" (UID: \"f39030ec-2975-459a-972e-4928cb31e15a\") " pod="openstack/barbican-worker-77879c6d6f-fqm88" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.256280 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f39030ec-2975-459a-972e-4928cb31e15a-config-data\") pod \"barbican-worker-77879c6d6f-fqm88\" (UID: \"f39030ec-2975-459a-972e-4928cb31e15a\") " pod="openstack/barbican-worker-77879c6d6f-fqm88" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.256357 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjg85\" (UniqueName: \"kubernetes.io/projected/f39030ec-2975-459a-972e-4928cb31e15a-kube-api-access-mjg85\") pod \"barbican-worker-77879c6d6f-fqm88\" (UID: \"f39030ec-2975-459a-972e-4928cb31e15a\") " 
pod="openstack/barbican-worker-77879c6d6f-fqm88" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.257882 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.265290 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-846978fd94-gg45m"] Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.266731 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-846978fd94-gg45m" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.268396 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.272015 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-77879c6d6f-fqm88"] Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.289553 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-846978fd94-gg45m"] Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.360066 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f39030ec-2975-459a-972e-4928cb31e15a-logs\") pod \"barbican-worker-77879c6d6f-fqm88\" (UID: \"f39030ec-2975-459a-972e-4928cb31e15a\") " pod="openstack/barbican-worker-77879c6d6f-fqm88" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.360126 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f39030ec-2975-459a-972e-4928cb31e15a-config-data\") pod \"barbican-worker-77879c6d6f-fqm88\" (UID: \"f39030ec-2975-459a-972e-4928cb31e15a\") " pod="openstack/barbican-worker-77879c6d6f-fqm88" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.360181 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjg85\" (UniqueName: \"kubernetes.io/projected/f39030ec-2975-459a-972e-4928cb31e15a-kube-api-access-mjg85\") pod \"barbican-worker-77879c6d6f-fqm88\" (UID: \"f39030ec-2975-459a-972e-4928cb31e15a\") " pod="openstack/barbican-worker-77879c6d6f-fqm88" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.360264 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f39030ec-2975-459a-972e-4928cb31e15a-combined-ca-bundle\") pod \"barbican-worker-77879c6d6f-fqm88\" (UID: \"f39030ec-2975-459a-972e-4928cb31e15a\") " pod="openstack/barbican-worker-77879c6d6f-fqm88" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.360293 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f39030ec-2975-459a-972e-4928cb31e15a-config-data-custom\") pod \"barbican-worker-77879c6d6f-fqm88\" (UID: \"f39030ec-2975-459a-972e-4928cb31e15a\") " pod="openstack/barbican-worker-77879c6d6f-fqm88" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.367622 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f39030ec-2975-459a-972e-4928cb31e15a-logs\") pod \"barbican-worker-77879c6d6f-fqm88\" (UID: \"f39030ec-2975-459a-972e-4928cb31e15a\") " pod="openstack/barbican-worker-77879c6d6f-fqm88" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.374576 4720 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f39030ec-2975-459a-972e-4928cb31e15a-config-data\") pod \"barbican-worker-77879c6d6f-fqm88\" (UID: \"f39030ec-2975-459a-972e-4928cb31e15a\") " pod="openstack/barbican-worker-77879c6d6f-fqm88" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.382597 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f39030ec-2975-459a-972e-4928cb31e15a-config-data-custom\") pod \"barbican-worker-77879c6d6f-fqm88\" (UID: \"f39030ec-2975-459a-972e-4928cb31e15a\") " pod="openstack/barbican-worker-77879c6d6f-fqm88" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.387925 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f39030ec-2975-459a-972e-4928cb31e15a-combined-ca-bundle\") pod \"barbican-worker-77879c6d6f-fqm88\" (UID: \"f39030ec-2975-459a-972e-4928cb31e15a\") " pod="openstack/barbican-worker-77879c6d6f-fqm88" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.414814 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjg85\" (UniqueName: \"kubernetes.io/projected/f39030ec-2975-459a-972e-4928cb31e15a-kube-api-access-mjg85\") pod \"barbican-worker-77879c6d6f-fqm88\" (UID: \"f39030ec-2975-459a-972e-4928cb31e15a\") " pod="openstack/barbican-worker-77879c6d6f-fqm88" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.418595 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-4w6tj"] Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.446778 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.462019 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73bee848-defc-4a29-b1a2-a359359e3c67-combined-ca-bundle\") pod \"barbican-keystone-listener-846978fd94-gg45m\" (UID: \"73bee848-defc-4a29-b1a2-a359359e3c67\") " pod="openstack/barbican-keystone-listener-846978fd94-gg45m" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.462071 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g64mj\" (UniqueName: \"kubernetes.io/projected/73bee848-defc-4a29-b1a2-a359359e3c67-kube-api-access-g64mj\") pod \"barbican-keystone-listener-846978fd94-gg45m\" (UID: \"73bee848-defc-4a29-b1a2-a359359e3c67\") " pod="openstack/barbican-keystone-listener-846978fd94-gg45m" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.462138 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/73bee848-defc-4a29-b1a2-a359359e3c67-config-data-custom\") pod \"barbican-keystone-listener-846978fd94-gg45m\" (UID: \"73bee848-defc-4a29-b1a2-a359359e3c67\") " pod="openstack/barbican-keystone-listener-846978fd94-gg45m" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.462172 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73bee848-defc-4a29-b1a2-a359359e3c67-config-data\") pod \"barbican-keystone-listener-846978fd94-gg45m\" (UID: \"73bee848-defc-4a29-b1a2-a359359e3c67\") " 
pod="openstack/barbican-keystone-listener-846978fd94-gg45m" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.462228 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/73bee848-defc-4a29-b1a2-a359359e3c67-logs\") pod \"barbican-keystone-listener-846978fd94-gg45m\" (UID: \"73bee848-defc-4a29-b1a2-a359359e3c67\") " pod="openstack/barbican-keystone-listener-846978fd94-gg45m" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.477121 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.521039 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-hrltm"] Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.524255 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-hrltm" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.565089 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/73bee848-defc-4a29-b1a2-a359359e3c67-config-data-custom\") pod \"barbican-keystone-listener-846978fd94-gg45m\" (UID: \"73bee848-defc-4a29-b1a2-a359359e3c67\") " pod="openstack/barbican-keystone-listener-846978fd94-gg45m" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.565347 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73bee848-defc-4a29-b1a2-a359359e3c67-config-data\") pod \"barbican-keystone-listener-846978fd94-gg45m\" (UID: \"73bee848-defc-4a29-b1a2-a359359e3c67\") " pod="openstack/barbican-keystone-listener-846978fd94-gg45m" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.565468 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/73bee848-defc-4a29-b1a2-a359359e3c67-logs\") pod \"barbican-keystone-listener-846978fd94-gg45m\" (UID: \"73bee848-defc-4a29-b1a2-a359359e3c67\") " pod="openstack/barbican-keystone-listener-846978fd94-gg45m" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.565640 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73bee848-defc-4a29-b1a2-a359359e3c67-combined-ca-bundle\") pod \"barbican-keystone-listener-846978fd94-gg45m\" (UID: \"73bee848-defc-4a29-b1a2-a359359e3c67\") " pod="openstack/barbican-keystone-listener-846978fd94-gg45m" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.565718 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g64mj\" (UniqueName: \"kubernetes.io/projected/73bee848-defc-4a29-b1a2-a359359e3c67-kube-api-access-g64mj\") pod \"barbican-keystone-listener-846978fd94-gg45m\" (UID: \"73bee848-defc-4a29-b1a2-a359359e3c67\") " pod="openstack/barbican-keystone-listener-846978fd94-gg45m" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.565822 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-hrltm"] Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.566593 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/73bee848-defc-4a29-b1a2-a359359e3c67-logs\") pod \"barbican-keystone-listener-846978fd94-gg45m\" (UID: \"73bee848-defc-4a29-b1a2-a359359e3c67\") 
" pod="openstack/barbican-keystone-listener-846978fd94-gg45m" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.570800 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/73bee848-defc-4a29-b1a2-a359359e3c67-config-data-custom\") pod \"barbican-keystone-listener-846978fd94-gg45m\" (UID: \"73bee848-defc-4a29-b1a2-a359359e3c67\") " pod="openstack/barbican-keystone-listener-846978fd94-gg45m" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.571895 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73bee848-defc-4a29-b1a2-a359359e3c67-config-data\") pod \"barbican-keystone-listener-846978fd94-gg45m\" (UID: \"73bee848-defc-4a29-b1a2-a359359e3c67\") " pod="openstack/barbican-keystone-listener-846978fd94-gg45m" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.581861 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-77879c6d6f-fqm88" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.583791 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73bee848-defc-4a29-b1a2-a359359e3c67-combined-ca-bundle\") pod \"barbican-keystone-listener-846978fd94-gg45m\" (UID: \"73bee848-defc-4a29-b1a2-a359359e3c67\") " pod="openstack/barbican-keystone-listener-846978fd94-gg45m" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.593484 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g64mj\" (UniqueName: \"kubernetes.io/projected/73bee848-defc-4a29-b1a2-a359359e3c67-kube-api-access-g64mj\") pod \"barbican-keystone-listener-846978fd94-gg45m\" (UID: \"73bee848-defc-4a29-b1a2-a359359e3c67\") " pod="openstack/barbican-keystone-listener-846978fd94-gg45m" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.597148 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.610330 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.611696 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-846978fd94-gg45m" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.617557 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.617738 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.618390 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.625003 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7f8746d996-snqjj"] Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.626734 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7f8746d996-snqjj" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.630480 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.665720 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7f8746d996-snqjj"] Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.666820 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/aee71b90-ce44-4a05-ad3c-dc4f8ccf3d2b-dns-swift-storage-0\") pod \"dnsmasq-dns-75c8ddd69c-hrltm\" (UID: \"aee71b90-ce44-4a05-ad3c-dc4f8ccf3d2b\") " pod="openstack/dnsmasq-dns-75c8ddd69c-hrltm" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.666878 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aee71b90-ce44-4a05-ad3c-dc4f8ccf3d2b-config\") pod \"dnsmasq-dns-75c8ddd69c-hrltm\" (UID: \"aee71b90-ce44-4a05-ad3c-dc4f8ccf3d2b\") " pod="openstack/dnsmasq-dns-75c8ddd69c-hrltm" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.666902 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aee71b90-ce44-4a05-ad3c-dc4f8ccf3d2b-ovsdbserver-nb\") pod \"dnsmasq-dns-75c8ddd69c-hrltm\" (UID: \"aee71b90-ce44-4a05-ad3c-dc4f8ccf3d2b\") " pod="openstack/dnsmasq-dns-75c8ddd69c-hrltm" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.666967 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aee71b90-ce44-4a05-ad3c-dc4f8ccf3d2b-ovsdbserver-sb\") pod \"dnsmasq-dns-75c8ddd69c-hrltm\" (UID: \"aee71b90-ce44-4a05-ad3c-dc4f8ccf3d2b\") " pod="openstack/dnsmasq-dns-75c8ddd69c-hrltm" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.666993 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aee71b90-ce44-4a05-ad3c-dc4f8ccf3d2b-dns-svc\") pod \"dnsmasq-dns-75c8ddd69c-hrltm\" (UID: \"aee71b90-ce44-4a05-ad3c-dc4f8ccf3d2b\") " pod="openstack/dnsmasq-dns-75c8ddd69c-hrltm" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.667023 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgpfw\" (UniqueName: \"kubernetes.io/projected/aee71b90-ce44-4a05-ad3c-dc4f8ccf3d2b-kube-api-access-dgpfw\") pod \"dnsmasq-dns-75c8ddd69c-hrltm\" (UID: \"aee71b90-ce44-4a05-ad3c-dc4f8ccf3d2b\") " pod="openstack/dnsmasq-dns-75c8ddd69c-hrltm" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.770515 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aee71b90-ce44-4a05-ad3c-dc4f8ccf3d2b-dns-svc\") pod \"dnsmasq-dns-75c8ddd69c-hrltm\" (UID: \"aee71b90-ce44-4a05-ad3c-dc4f8ccf3d2b\") " pod="openstack/dnsmasq-dns-75c8ddd69c-hrltm" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.770554 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83d7bdbc-2466-4272-bc8f-54afd0d7a9de-logs\") pod \"barbican-api-7f8746d996-snqjj\" (UID: 
\"83d7bdbc-2466-4272-bc8f-54afd0d7a9de\") " pod="openstack/barbican-api-7f8746d996-snqjj" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.770602 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/590fe630-a315-4fe6-b90d-298fbcc6619c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"590fe630-a315-4fe6-b90d-298fbcc6619c\") " pod="openstack/ceilometer-0" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.770629 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgpfw\" (UniqueName: \"kubernetes.io/projected/aee71b90-ce44-4a05-ad3c-dc4f8ccf3d2b-kube-api-access-dgpfw\") pod \"dnsmasq-dns-75c8ddd69c-hrltm\" (UID: \"aee71b90-ce44-4a05-ad3c-dc4f8ccf3d2b\") " pod="openstack/dnsmasq-dns-75c8ddd69c-hrltm" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.770683 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/590fe630-a315-4fe6-b90d-298fbcc6619c-config-data\") pod \"ceilometer-0\" (UID: \"590fe630-a315-4fe6-b90d-298fbcc6619c\") " pod="openstack/ceilometer-0" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.770715 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83d7bdbc-2466-4272-bc8f-54afd0d7a9de-config-data\") pod \"barbican-api-7f8746d996-snqjj\" (UID: \"83d7bdbc-2466-4272-bc8f-54afd0d7a9de\") " pod="openstack/barbican-api-7f8746d996-snqjj" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.770752 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/aee71b90-ce44-4a05-ad3c-dc4f8ccf3d2b-dns-swift-storage-0\") pod \"dnsmasq-dns-75c8ddd69c-hrltm\" (UID: \"aee71b90-ce44-4a05-ad3c-dc4f8ccf3d2b\") " pod="openstack/dnsmasq-dns-75c8ddd69c-hrltm" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.770779 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmjm5\" (UniqueName: \"kubernetes.io/projected/83d7bdbc-2466-4272-bc8f-54afd0d7a9de-kube-api-access-tmjm5\") pod \"barbican-api-7f8746d996-snqjj\" (UID: \"83d7bdbc-2466-4272-bc8f-54afd0d7a9de\") " pod="openstack/barbican-api-7f8746d996-snqjj" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.770801 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aee71b90-ce44-4a05-ad3c-dc4f8ccf3d2b-config\") pod \"dnsmasq-dns-75c8ddd69c-hrltm\" (UID: \"aee71b90-ce44-4a05-ad3c-dc4f8ccf3d2b\") " pod="openstack/dnsmasq-dns-75c8ddd69c-hrltm" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.770833 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/590fe630-a315-4fe6-b90d-298fbcc6619c-run-httpd\") pod \"ceilometer-0\" (UID: \"590fe630-a315-4fe6-b90d-298fbcc6619c\") " pod="openstack/ceilometer-0" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.770854 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aee71b90-ce44-4a05-ad3c-dc4f8ccf3d2b-ovsdbserver-nb\") pod \"dnsmasq-dns-75c8ddd69c-hrltm\" (UID: \"aee71b90-ce44-4a05-ad3c-dc4f8ccf3d2b\") " 
pod="openstack/dnsmasq-dns-75c8ddd69c-hrltm" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.770870 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/590fe630-a315-4fe6-b90d-298fbcc6619c-scripts\") pod \"ceilometer-0\" (UID: \"590fe630-a315-4fe6-b90d-298fbcc6619c\") " pod="openstack/ceilometer-0" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.770903 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83d7bdbc-2466-4272-bc8f-54afd0d7a9de-combined-ca-bundle\") pod \"barbican-api-7f8746d996-snqjj\" (UID: \"83d7bdbc-2466-4272-bc8f-54afd0d7a9de\") " pod="openstack/barbican-api-7f8746d996-snqjj" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.770946 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/590fe630-a315-4fe6-b90d-298fbcc6619c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"590fe630-a315-4fe6-b90d-298fbcc6619c\") " pod="openstack/ceilometer-0" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.770980 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/590fe630-a315-4fe6-b90d-298fbcc6619c-log-httpd\") pod \"ceilometer-0\" (UID: \"590fe630-a315-4fe6-b90d-298fbcc6619c\") " pod="openstack/ceilometer-0" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.771012 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/83d7bdbc-2466-4272-bc8f-54afd0d7a9de-config-data-custom\") pod \"barbican-api-7f8746d996-snqjj\" (UID: \"83d7bdbc-2466-4272-bc8f-54afd0d7a9de\") " pod="openstack/barbican-api-7f8746d996-snqjj" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.771031 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aee71b90-ce44-4a05-ad3c-dc4f8ccf3d2b-ovsdbserver-sb\") pod \"dnsmasq-dns-75c8ddd69c-hrltm\" (UID: \"aee71b90-ce44-4a05-ad3c-dc4f8ccf3d2b\") " pod="openstack/dnsmasq-dns-75c8ddd69c-hrltm" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.771066 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vf5c5\" (UniqueName: \"kubernetes.io/projected/590fe630-a315-4fe6-b90d-298fbcc6619c-kube-api-access-vf5c5\") pod \"ceilometer-0\" (UID: \"590fe630-a315-4fe6-b90d-298fbcc6619c\") " pod="openstack/ceilometer-0" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.772259 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aee71b90-ce44-4a05-ad3c-dc4f8ccf3d2b-dns-svc\") pod \"dnsmasq-dns-75c8ddd69c-hrltm\" (UID: \"aee71b90-ce44-4a05-ad3c-dc4f8ccf3d2b\") " pod="openstack/dnsmasq-dns-75c8ddd69c-hrltm" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.773032 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aee71b90-ce44-4a05-ad3c-dc4f8ccf3d2b-config\") pod \"dnsmasq-dns-75c8ddd69c-hrltm\" (UID: \"aee71b90-ce44-4a05-ad3c-dc4f8ccf3d2b\") " pod="openstack/dnsmasq-dns-75c8ddd69c-hrltm" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.774003 
4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aee71b90-ce44-4a05-ad3c-dc4f8ccf3d2b-ovsdbserver-nb\") pod \"dnsmasq-dns-75c8ddd69c-hrltm\" (UID: \"aee71b90-ce44-4a05-ad3c-dc4f8ccf3d2b\") " pod="openstack/dnsmasq-dns-75c8ddd69c-hrltm" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.781928 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/aee71b90-ce44-4a05-ad3c-dc4f8ccf3d2b-dns-swift-storage-0\") pod \"dnsmasq-dns-75c8ddd69c-hrltm\" (UID: \"aee71b90-ce44-4a05-ad3c-dc4f8ccf3d2b\") " pod="openstack/dnsmasq-dns-75c8ddd69c-hrltm" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.785852 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aee71b90-ce44-4a05-ad3c-dc4f8ccf3d2b-ovsdbserver-sb\") pod \"dnsmasq-dns-75c8ddd69c-hrltm\" (UID: \"aee71b90-ce44-4a05-ad3c-dc4f8ccf3d2b\") " pod="openstack/dnsmasq-dns-75c8ddd69c-hrltm" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.804579 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgpfw\" (UniqueName: \"kubernetes.io/projected/aee71b90-ce44-4a05-ad3c-dc4f8ccf3d2b-kube-api-access-dgpfw\") pod \"dnsmasq-dns-75c8ddd69c-hrltm\" (UID: \"aee71b90-ce44-4a05-ad3c-dc4f8ccf3d2b\") " pod="openstack/dnsmasq-dns-75c8ddd69c-hrltm" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.883664 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/590fe630-a315-4fe6-b90d-298fbcc6619c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"590fe630-a315-4fe6-b90d-298fbcc6619c\") " pod="openstack/ceilometer-0" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.883701 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/590fe630-a315-4fe6-b90d-298fbcc6619c-log-httpd\") pod \"ceilometer-0\" (UID: \"590fe630-a315-4fe6-b90d-298fbcc6619c\") " pod="openstack/ceilometer-0" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.883736 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/83d7bdbc-2466-4272-bc8f-54afd0d7a9de-config-data-custom\") pod \"barbican-api-7f8746d996-snqjj\" (UID: \"83d7bdbc-2466-4272-bc8f-54afd0d7a9de\") " pod="openstack/barbican-api-7f8746d996-snqjj" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.883763 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vf5c5\" (UniqueName: \"kubernetes.io/projected/590fe630-a315-4fe6-b90d-298fbcc6619c-kube-api-access-vf5c5\") pod \"ceilometer-0\" (UID: \"590fe630-a315-4fe6-b90d-298fbcc6619c\") " pod="openstack/ceilometer-0" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.883785 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83d7bdbc-2466-4272-bc8f-54afd0d7a9de-logs\") pod \"barbican-api-7f8746d996-snqjj\" (UID: \"83d7bdbc-2466-4272-bc8f-54afd0d7a9de\") " pod="openstack/barbican-api-7f8746d996-snqjj" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.883808 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/590fe630-a315-4fe6-b90d-298fbcc6619c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"590fe630-a315-4fe6-b90d-298fbcc6619c\") " pod="openstack/ceilometer-0" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.883846 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/590fe630-a315-4fe6-b90d-298fbcc6619c-config-data\") pod \"ceilometer-0\" (UID: \"590fe630-a315-4fe6-b90d-298fbcc6619c\") " pod="openstack/ceilometer-0" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.883873 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83d7bdbc-2466-4272-bc8f-54afd0d7a9de-config-data\") pod \"barbican-api-7f8746d996-snqjj\" (UID: \"83d7bdbc-2466-4272-bc8f-54afd0d7a9de\") " pod="openstack/barbican-api-7f8746d996-snqjj" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.883896 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmjm5\" (UniqueName: \"kubernetes.io/projected/83d7bdbc-2466-4272-bc8f-54afd0d7a9de-kube-api-access-tmjm5\") pod \"barbican-api-7f8746d996-snqjj\" (UID: \"83d7bdbc-2466-4272-bc8f-54afd0d7a9de\") " pod="openstack/barbican-api-7f8746d996-snqjj" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.883913 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/590fe630-a315-4fe6-b90d-298fbcc6619c-run-httpd\") pod \"ceilometer-0\" (UID: \"590fe630-a315-4fe6-b90d-298fbcc6619c\") " pod="openstack/ceilometer-0" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.883930 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/590fe630-a315-4fe6-b90d-298fbcc6619c-scripts\") pod \"ceilometer-0\" (UID: \"590fe630-a315-4fe6-b90d-298fbcc6619c\") " pod="openstack/ceilometer-0" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.883946 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83d7bdbc-2466-4272-bc8f-54afd0d7a9de-combined-ca-bundle\") pod \"barbican-api-7f8746d996-snqjj\" (UID: \"83d7bdbc-2466-4272-bc8f-54afd0d7a9de\") " pod="openstack/barbican-api-7f8746d996-snqjj" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.891691 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/590fe630-a315-4fe6-b90d-298fbcc6619c-run-httpd\") pod \"ceilometer-0\" (UID: \"590fe630-a315-4fe6-b90d-298fbcc6619c\") " pod="openstack/ceilometer-0" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.893924 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/590fe630-a315-4fe6-b90d-298fbcc6619c-log-httpd\") pod \"ceilometer-0\" (UID: \"590fe630-a315-4fe6-b90d-298fbcc6619c\") " pod="openstack/ceilometer-0" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.894222 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83d7bdbc-2466-4272-bc8f-54afd0d7a9de-logs\") pod \"barbican-api-7f8746d996-snqjj\" (UID: \"83d7bdbc-2466-4272-bc8f-54afd0d7a9de\") " pod="openstack/barbican-api-7f8746d996-snqjj" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.897993 4720 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83d7bdbc-2466-4272-bc8f-54afd0d7a9de-config-data\") pod \"barbican-api-7f8746d996-snqjj\" (UID: \"83d7bdbc-2466-4272-bc8f-54afd0d7a9de\") " pod="openstack/barbican-api-7f8746d996-snqjj" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.898335 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/590fe630-a315-4fe6-b90d-298fbcc6619c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"590fe630-a315-4fe6-b90d-298fbcc6619c\") " pod="openstack/ceilometer-0" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.899784 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83d7bdbc-2466-4272-bc8f-54afd0d7a9de-combined-ca-bundle\") pod \"barbican-api-7f8746d996-snqjj\" (UID: \"83d7bdbc-2466-4272-bc8f-54afd0d7a9de\") " pod="openstack/barbican-api-7f8746d996-snqjj" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.901102 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/83d7bdbc-2466-4272-bc8f-54afd0d7a9de-config-data-custom\") pod \"barbican-api-7f8746d996-snqjj\" (UID: \"83d7bdbc-2466-4272-bc8f-54afd0d7a9de\") " pod="openstack/barbican-api-7f8746d996-snqjj" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.907695 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/590fe630-a315-4fe6-b90d-298fbcc6619c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"590fe630-a315-4fe6-b90d-298fbcc6619c\") " pod="openstack/ceilometer-0" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.907943 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/590fe630-a315-4fe6-b90d-298fbcc6619c-config-data\") pod \"ceilometer-0\" (UID: \"590fe630-a315-4fe6-b90d-298fbcc6619c\") " pod="openstack/ceilometer-0" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.908125 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/590fe630-a315-4fe6-b90d-298fbcc6619c-scripts\") pod \"ceilometer-0\" (UID: \"590fe630-a315-4fe6-b90d-298fbcc6619c\") " pod="openstack/ceilometer-0" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.926806 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmjm5\" (UniqueName: \"kubernetes.io/projected/83d7bdbc-2466-4272-bc8f-54afd0d7a9de-kube-api-access-tmjm5\") pod \"barbican-api-7f8746d996-snqjj\" (UID: \"83d7bdbc-2466-4272-bc8f-54afd0d7a9de\") " pod="openstack/barbican-api-7f8746d996-snqjj" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.942749 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vf5c5\" (UniqueName: \"kubernetes.io/projected/590fe630-a315-4fe6-b90d-298fbcc6619c-kube-api-access-vf5c5\") pod \"ceilometer-0\" (UID: \"590fe630-a315-4fe6-b90d-298fbcc6619c\") " pod="openstack/ceilometer-0" Oct 13 17:40:42 crc kubenswrapper[4720]: I1013 17:40:42.960692 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-77879c6d6f-fqm88"] Oct 13 17:40:42 crc kubenswrapper[4720]: W1013 17:40:42.973297 4720 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf39030ec_2975_459a_972e_4928cb31e15a.slice/crio-e23e88d61684c7b98e5e7daeb29115d6239363773bafe1988304a56e6fc2d942 WatchSource:0}: Error finding container e23e88d61684c7b98e5e7daeb29115d6239363773bafe1988304a56e6fc2d942: Status 404 returned error can't find the container with id e23e88d61684c7b98e5e7daeb29115d6239363773bafe1988304a56e6fc2d942 Oct 13 17:40:43 crc kubenswrapper[4720]: I1013 17:40:43.011302 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-84b966f6c9-4w6tj" podUID="0ea59751-3ee4-4404-99c7-06d02cec4f25" containerName="dnsmasq-dns" containerID="cri-o://d26c25ef68a2035e0c52772c7d3c704b692d75fb89c945690a7cb19b87df2d23" gracePeriod=10 Oct 13 17:40:43 crc kubenswrapper[4720]: I1013 17:40:43.011538 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-77879c6d6f-fqm88" event={"ID":"f39030ec-2975-459a-972e-4928cb31e15a","Type":"ContainerStarted","Data":"e23e88d61684c7b98e5e7daeb29115d6239363773bafe1988304a56e6fc2d942"} Oct 13 17:40:43 crc kubenswrapper[4720]: I1013 17:40:43.094218 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-hrltm" Oct 13 17:40:43 crc kubenswrapper[4720]: I1013 17:40:43.122437 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 13 17:40:43 crc kubenswrapper[4720]: I1013 17:40:43.133648 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7f8746d996-snqjj" Oct 13 17:40:43 crc kubenswrapper[4720]: I1013 17:40:43.190473 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d3623f2-ec27-4ad6-8cb4-553cb0527e15" path="/var/lib/kubelet/pods/6d3623f2-ec27-4ad6-8cb4-553cb0527e15/volumes" Oct 13 17:40:43 crc kubenswrapper[4720]: I1013 17:40:43.259803 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-846978fd94-gg45m"] Oct 13 17:40:43 crc kubenswrapper[4720]: I1013 17:40:43.485900 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84b966f6c9-4w6tj" Oct 13 17:40:43 crc kubenswrapper[4720]: I1013 17:40:43.632402 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ea59751-3ee4-4404-99c7-06d02cec4f25-dns-svc\") pod \"0ea59751-3ee4-4404-99c7-06d02cec4f25\" (UID: \"0ea59751-3ee4-4404-99c7-06d02cec4f25\") " Oct 13 17:40:43 crc kubenswrapper[4720]: I1013 17:40:43.633626 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0ea59751-3ee4-4404-99c7-06d02cec4f25-ovsdbserver-sb\") pod \"0ea59751-3ee4-4404-99c7-06d02cec4f25\" (UID: \"0ea59751-3ee4-4404-99c7-06d02cec4f25\") " Oct 13 17:40:43 crc kubenswrapper[4720]: I1013 17:40:43.633682 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0ea59751-3ee4-4404-99c7-06d02cec4f25-dns-swift-storage-0\") pod \"0ea59751-3ee4-4404-99c7-06d02cec4f25\" (UID: \"0ea59751-3ee4-4404-99c7-06d02cec4f25\") " Oct 13 17:40:43 crc kubenswrapper[4720]: I1013 17:40:43.633766 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0ea59751-3ee4-4404-99c7-06d02cec4f25-ovsdbserver-nb\") pod \"0ea59751-3ee4-4404-99c7-06d02cec4f25\" (UID: \"0ea59751-3ee4-4404-99c7-06d02cec4f25\") " Oct 13 17:40:43 crc kubenswrapper[4720]: I1013 17:40:43.633877 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2wml7\" (UniqueName: \"kubernetes.io/projected/0ea59751-3ee4-4404-99c7-06d02cec4f25-kube-api-access-2wml7\") pod \"0ea59751-3ee4-4404-99c7-06d02cec4f25\" (UID: \"0ea59751-3ee4-4404-99c7-06d02cec4f25\") " Oct 13 17:40:43 crc kubenswrapper[4720]: I1013 17:40:43.633924 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ea59751-3ee4-4404-99c7-06d02cec4f25-config\") pod \"0ea59751-3ee4-4404-99c7-06d02cec4f25\" (UID: \"0ea59751-3ee4-4404-99c7-06d02cec4f25\") " Oct 13 17:40:43 crc kubenswrapper[4720]: I1013 17:40:43.688029 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ea59751-3ee4-4404-99c7-06d02cec4f25-kube-api-access-2wml7" (OuterVolumeSpecName: "kube-api-access-2wml7") pod "0ea59751-3ee4-4404-99c7-06d02cec4f25" (UID: "0ea59751-3ee4-4404-99c7-06d02cec4f25"). InnerVolumeSpecName "kube-api-access-2wml7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:40:43 crc kubenswrapper[4720]: I1013 17:40:43.688099 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-hrltm"] Oct 13 17:40:43 crc kubenswrapper[4720]: I1013 17:40:43.698381 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7f8746d996-snqjj"] Oct 13 17:40:43 crc kubenswrapper[4720]: I1013 17:40:43.703977 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 13 17:40:43 crc kubenswrapper[4720]: I1013 17:40:43.706834 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ea59751-3ee4-4404-99c7-06d02cec4f25-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0ea59751-3ee4-4404-99c7-06d02cec4f25" (UID: "0ea59751-3ee4-4404-99c7-06d02cec4f25"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:40:43 crc kubenswrapper[4720]: I1013 17:40:43.725692 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ea59751-3ee4-4404-99c7-06d02cec4f25-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0ea59751-3ee4-4404-99c7-06d02cec4f25" (UID: "0ea59751-3ee4-4404-99c7-06d02cec4f25"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:40:43 crc kubenswrapper[4720]: I1013 17:40:43.735806 4720 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0ea59751-3ee4-4404-99c7-06d02cec4f25-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 13 17:40:43 crc kubenswrapper[4720]: I1013 17:40:43.735846 4720 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0ea59751-3ee4-4404-99c7-06d02cec4f25-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 13 17:40:43 crc kubenswrapper[4720]: I1013 17:40:43.735857 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2wml7\" (UniqueName: \"kubernetes.io/projected/0ea59751-3ee4-4404-99c7-06d02cec4f25-kube-api-access-2wml7\") on node \"crc\" DevicePath \"\"" Oct 13 17:40:43 crc kubenswrapper[4720]: I1013 17:40:43.757586 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ea59751-3ee4-4404-99c7-06d02cec4f25-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0ea59751-3ee4-4404-99c7-06d02cec4f25" (UID: "0ea59751-3ee4-4404-99c7-06d02cec4f25"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:40:43 crc kubenswrapper[4720]: I1013 17:40:43.757842 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ea59751-3ee4-4404-99c7-06d02cec4f25-config" (OuterVolumeSpecName: "config") pod "0ea59751-3ee4-4404-99c7-06d02cec4f25" (UID: "0ea59751-3ee4-4404-99c7-06d02cec4f25"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:40:43 crc kubenswrapper[4720]: I1013 17:40:43.764606 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ea59751-3ee4-4404-99c7-06d02cec4f25-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0ea59751-3ee4-4404-99c7-06d02cec4f25" (UID: "0ea59751-3ee4-4404-99c7-06d02cec4f25"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:40:43 crc kubenswrapper[4720]: I1013 17:40:43.838379 4720 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ea59751-3ee4-4404-99c7-06d02cec4f25-config\") on node \"crc\" DevicePath \"\"" Oct 13 17:40:43 crc kubenswrapper[4720]: I1013 17:40:43.838637 4720 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ea59751-3ee4-4404-99c7-06d02cec4f25-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 13 17:40:43 crc kubenswrapper[4720]: I1013 17:40:43.838649 4720 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0ea59751-3ee4-4404-99c7-06d02cec4f25-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 13 17:40:44 crc kubenswrapper[4720]: I1013 17:40:44.027597 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7f8746d996-snqjj" event={"ID":"83d7bdbc-2466-4272-bc8f-54afd0d7a9de","Type":"ContainerStarted","Data":"7f8ea84272aafa74d27bad45c8ff76c2c1de12973ce50d45b6ec53e35650ab6e"} Oct 13 17:40:44 crc kubenswrapper[4720]: I1013 17:40:44.027651 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7f8746d996-snqjj" event={"ID":"83d7bdbc-2466-4272-bc8f-54afd0d7a9de","Type":"ContainerStarted","Data":"9a9d954478fc3b362dfbdfdf5f64d7a752eb0c6d8976ac894da74bbb9e73987e"} Oct 13 17:40:44 crc kubenswrapper[4720]: I1013 17:40:44.029596 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-846978fd94-gg45m" event={"ID":"73bee848-defc-4a29-b1a2-a359359e3c67","Type":"ContainerStarted","Data":"30f9a6ca0805089bf54c473d7a3828a9bff2fbe39a1a53b4b6e3287bbfa3a060"} Oct 13 17:40:44 crc kubenswrapper[4720]: I1013 17:40:44.032404 4720 generic.go:334] "Generic (PLEG): container finished" podID="aee71b90-ce44-4a05-ad3c-dc4f8ccf3d2b" containerID="689014f2818eced872e2cb1a5d96a7a6d90597398b1856959b571927add9b4de" exitCode=0 Oct 13 17:40:44 crc kubenswrapper[4720]: I1013 17:40:44.032515 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-hrltm" event={"ID":"aee71b90-ce44-4a05-ad3c-dc4f8ccf3d2b","Type":"ContainerDied","Data":"689014f2818eced872e2cb1a5d96a7a6d90597398b1856959b571927add9b4de"} Oct 13 17:40:44 crc kubenswrapper[4720]: I1013 17:40:44.032562 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-hrltm" event={"ID":"aee71b90-ce44-4a05-ad3c-dc4f8ccf3d2b","Type":"ContainerStarted","Data":"66fb3a531bf25901fcd32716ff4c61ba56c11fb7412b6510fe681d399c2d2b9f"} Oct 13 17:40:44 crc kubenswrapper[4720]: I1013 17:40:44.045041 4720 generic.go:334] "Generic (PLEG): container finished" podID="0ea59751-3ee4-4404-99c7-06d02cec4f25" containerID="d26c25ef68a2035e0c52772c7d3c704b692d75fb89c945690a7cb19b87df2d23" exitCode=0 Oct 13 17:40:44 crc kubenswrapper[4720]: I1013 17:40:44.045257 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84b966f6c9-4w6tj" Oct 13 17:40:44 crc kubenswrapper[4720]: I1013 17:40:44.045730 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-4w6tj" event={"ID":"0ea59751-3ee4-4404-99c7-06d02cec4f25","Type":"ContainerDied","Data":"d26c25ef68a2035e0c52772c7d3c704b692d75fb89c945690a7cb19b87df2d23"} Oct 13 17:40:44 crc kubenswrapper[4720]: I1013 17:40:44.045785 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-4w6tj" event={"ID":"0ea59751-3ee4-4404-99c7-06d02cec4f25","Type":"ContainerDied","Data":"f78065522dec3a068b6763b3c9c05b8c966089facf5fceb1770ef511466e1918"} Oct 13 17:40:44 crc kubenswrapper[4720]: I1013 17:40:44.045802 4720 scope.go:117] "RemoveContainer" containerID="d26c25ef68a2035e0c52772c7d3c704b692d75fb89c945690a7cb19b87df2d23" Oct 13 17:40:44 crc kubenswrapper[4720]: I1013 17:40:44.051364 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"590fe630-a315-4fe6-b90d-298fbcc6619c","Type":"ContainerStarted","Data":"a22bc0af46682798758d9d24e400020af0d86d0f1007873c81dfdc47e389455a"} Oct 13 17:40:44 crc kubenswrapper[4720]: I1013 17:40:44.114658 4720 scope.go:117] "RemoveContainer" containerID="be0f0971613567094a3bb4aea84a21d6203c600885ecd9cd30e1418360503828" Oct 13 17:40:44 crc kubenswrapper[4720]: I1013 17:40:44.129245 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-4w6tj"] Oct 13 17:40:44 crc kubenswrapper[4720]: I1013 17:40:44.137953 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-4w6tj"] Oct 13 17:40:44 crc kubenswrapper[4720]: I1013 17:40:44.165010 4720 scope.go:117] "RemoveContainer" containerID="d26c25ef68a2035e0c52772c7d3c704b692d75fb89c945690a7cb19b87df2d23" Oct 13 17:40:44 crc kubenswrapper[4720]: E1013 17:40:44.168865 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d26c25ef68a2035e0c52772c7d3c704b692d75fb89c945690a7cb19b87df2d23\": container with ID starting with d26c25ef68a2035e0c52772c7d3c704b692d75fb89c945690a7cb19b87df2d23 not found: ID does not exist" containerID="d26c25ef68a2035e0c52772c7d3c704b692d75fb89c945690a7cb19b87df2d23" Oct 13 17:40:44 crc kubenswrapper[4720]: I1013 17:40:44.168915 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d26c25ef68a2035e0c52772c7d3c704b692d75fb89c945690a7cb19b87df2d23"} err="failed to get container status \"d26c25ef68a2035e0c52772c7d3c704b692d75fb89c945690a7cb19b87df2d23\": rpc error: code = NotFound desc = could not find container \"d26c25ef68a2035e0c52772c7d3c704b692d75fb89c945690a7cb19b87df2d23\": container with ID starting with d26c25ef68a2035e0c52772c7d3c704b692d75fb89c945690a7cb19b87df2d23 not found: ID does not exist" Oct 13 17:40:44 crc kubenswrapper[4720]: I1013 17:40:44.168940 4720 scope.go:117] "RemoveContainer" containerID="be0f0971613567094a3bb4aea84a21d6203c600885ecd9cd30e1418360503828" Oct 13 17:40:44 crc kubenswrapper[4720]: E1013 17:40:44.169206 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be0f0971613567094a3bb4aea84a21d6203c600885ecd9cd30e1418360503828\": container with ID starting with be0f0971613567094a3bb4aea84a21d6203c600885ecd9cd30e1418360503828 not found: ID does not exist" 
containerID="be0f0971613567094a3bb4aea84a21d6203c600885ecd9cd30e1418360503828" Oct 13 17:40:44 crc kubenswrapper[4720]: I1013 17:40:44.169231 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be0f0971613567094a3bb4aea84a21d6203c600885ecd9cd30e1418360503828"} err="failed to get container status \"be0f0971613567094a3bb4aea84a21d6203c600885ecd9cd30e1418360503828\": rpc error: code = NotFound desc = could not find container \"be0f0971613567094a3bb4aea84a21d6203c600885ecd9cd30e1418360503828\": container with ID starting with be0f0971613567094a3bb4aea84a21d6203c600885ecd9cd30e1418360503828 not found: ID does not exist" Oct 13 17:40:44 crc kubenswrapper[4720]: I1013 17:40:44.742359 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-77544fcf9d-jwg9p"] Oct 13 17:40:44 crc kubenswrapper[4720]: E1013 17:40:44.743726 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ea59751-3ee4-4404-99c7-06d02cec4f25" containerName="dnsmasq-dns" Oct 13 17:40:44 crc kubenswrapper[4720]: I1013 17:40:44.743747 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ea59751-3ee4-4404-99c7-06d02cec4f25" containerName="dnsmasq-dns" Oct 13 17:40:44 crc kubenswrapper[4720]: E1013 17:40:44.743771 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ea59751-3ee4-4404-99c7-06d02cec4f25" containerName="init" Oct 13 17:40:44 crc kubenswrapper[4720]: I1013 17:40:44.743779 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ea59751-3ee4-4404-99c7-06d02cec4f25" containerName="init" Oct 13 17:40:44 crc kubenswrapper[4720]: I1013 17:40:44.744107 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ea59751-3ee4-4404-99c7-06d02cec4f25" containerName="dnsmasq-dns" Oct 13 17:40:44 crc kubenswrapper[4720]: I1013 17:40:44.745980 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-77544fcf9d-jwg9p" Oct 13 17:40:44 crc kubenswrapper[4720]: I1013 17:40:44.752212 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Oct 13 17:40:44 crc kubenswrapper[4720]: I1013 17:40:44.752403 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Oct 13 17:40:44 crc kubenswrapper[4720]: I1013 17:40:44.773336 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-77544fcf9d-jwg9p"] Oct 13 17:40:44 crc kubenswrapper[4720]: I1013 17:40:44.781661 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84cf65bc-1603-4782-9b88-15937c9c7c6f-combined-ca-bundle\") pod \"barbican-api-77544fcf9d-jwg9p\" (UID: \"84cf65bc-1603-4782-9b88-15937c9c7c6f\") " pod="openstack/barbican-api-77544fcf9d-jwg9p" Oct 13 17:40:44 crc kubenswrapper[4720]: I1013 17:40:44.781916 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/84cf65bc-1603-4782-9b88-15937c9c7c6f-internal-tls-certs\") pod \"barbican-api-77544fcf9d-jwg9p\" (UID: \"84cf65bc-1603-4782-9b88-15937c9c7c6f\") " pod="openstack/barbican-api-77544fcf9d-jwg9p" Oct 13 17:40:44 crc kubenswrapper[4720]: I1013 17:40:44.782061 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgqwf\" (UniqueName: \"kubernetes.io/projected/84cf65bc-1603-4782-9b88-15937c9c7c6f-kube-api-access-jgqwf\") pod \"barbican-api-77544fcf9d-jwg9p\" (UID: \"84cf65bc-1603-4782-9b88-15937c9c7c6f\") " pod="openstack/barbican-api-77544fcf9d-jwg9p" Oct 13 17:40:44 crc kubenswrapper[4720]: I1013 17:40:44.782166 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/84cf65bc-1603-4782-9b88-15937c9c7c6f-config-data-custom\") pod \"barbican-api-77544fcf9d-jwg9p\" (UID: \"84cf65bc-1603-4782-9b88-15937c9c7c6f\") " pod="openstack/barbican-api-77544fcf9d-jwg9p" Oct 13 17:40:44 crc kubenswrapper[4720]: I1013 17:40:44.782283 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84cf65bc-1603-4782-9b88-15937c9c7c6f-logs\") pod \"barbican-api-77544fcf9d-jwg9p\" (UID: \"84cf65bc-1603-4782-9b88-15937c9c7c6f\") " pod="openstack/barbican-api-77544fcf9d-jwg9p" Oct 13 17:40:44 crc kubenswrapper[4720]: I1013 17:40:44.782321 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/84cf65bc-1603-4782-9b88-15937c9c7c6f-public-tls-certs\") pod \"barbican-api-77544fcf9d-jwg9p\" (UID: \"84cf65bc-1603-4782-9b88-15937c9c7c6f\") " pod="openstack/barbican-api-77544fcf9d-jwg9p" Oct 13 17:40:44 crc kubenswrapper[4720]: I1013 17:40:44.782393 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84cf65bc-1603-4782-9b88-15937c9c7c6f-config-data\") pod \"barbican-api-77544fcf9d-jwg9p\" (UID: \"84cf65bc-1603-4782-9b88-15937c9c7c6f\") " pod="openstack/barbican-api-77544fcf9d-jwg9p" Oct 13 17:40:44 crc kubenswrapper[4720]: I1013 17:40:44.884063 4720 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/84cf65bc-1603-4782-9b88-15937c9c7c6f-internal-tls-certs\") pod \"barbican-api-77544fcf9d-jwg9p\" (UID: \"84cf65bc-1603-4782-9b88-15937c9c7c6f\") " pod="openstack/barbican-api-77544fcf9d-jwg9p" Oct 13 17:40:44 crc kubenswrapper[4720]: I1013 17:40:44.884124 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgqwf\" (UniqueName: \"kubernetes.io/projected/84cf65bc-1603-4782-9b88-15937c9c7c6f-kube-api-access-jgqwf\") pod \"barbican-api-77544fcf9d-jwg9p\" (UID: \"84cf65bc-1603-4782-9b88-15937c9c7c6f\") " pod="openstack/barbican-api-77544fcf9d-jwg9p" Oct 13 17:40:44 crc kubenswrapper[4720]: I1013 17:40:44.884160 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/84cf65bc-1603-4782-9b88-15937c9c7c6f-config-data-custom\") pod \"barbican-api-77544fcf9d-jwg9p\" (UID: \"84cf65bc-1603-4782-9b88-15937c9c7c6f\") " pod="openstack/barbican-api-77544fcf9d-jwg9p" Oct 13 17:40:44 crc kubenswrapper[4720]: I1013 17:40:44.884204 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84cf65bc-1603-4782-9b88-15937c9c7c6f-logs\") pod \"barbican-api-77544fcf9d-jwg9p\" (UID: \"84cf65bc-1603-4782-9b88-15937c9c7c6f\") " pod="openstack/barbican-api-77544fcf9d-jwg9p" Oct 13 17:40:44 crc kubenswrapper[4720]: I1013 17:40:44.884224 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/84cf65bc-1603-4782-9b88-15937c9c7c6f-public-tls-certs\") pod \"barbican-api-77544fcf9d-jwg9p\" (UID: \"84cf65bc-1603-4782-9b88-15937c9c7c6f\") " pod="openstack/barbican-api-77544fcf9d-jwg9p" Oct 13 17:40:44 crc kubenswrapper[4720]: I1013 17:40:44.884251 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84cf65bc-1603-4782-9b88-15937c9c7c6f-config-data\") pod \"barbican-api-77544fcf9d-jwg9p\" (UID: \"84cf65bc-1603-4782-9b88-15937c9c7c6f\") " pod="openstack/barbican-api-77544fcf9d-jwg9p" Oct 13 17:40:44 crc kubenswrapper[4720]: I1013 17:40:44.884283 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84cf65bc-1603-4782-9b88-15937c9c7c6f-combined-ca-bundle\") pod \"barbican-api-77544fcf9d-jwg9p\" (UID: \"84cf65bc-1603-4782-9b88-15937c9c7c6f\") " pod="openstack/barbican-api-77544fcf9d-jwg9p" Oct 13 17:40:44 crc kubenswrapper[4720]: I1013 17:40:44.884730 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84cf65bc-1603-4782-9b88-15937c9c7c6f-logs\") pod \"barbican-api-77544fcf9d-jwg9p\" (UID: \"84cf65bc-1603-4782-9b88-15937c9c7c6f\") " pod="openstack/barbican-api-77544fcf9d-jwg9p" Oct 13 17:40:44 crc kubenswrapper[4720]: I1013 17:40:44.889953 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/84cf65bc-1603-4782-9b88-15937c9c7c6f-config-data-custom\") pod \"barbican-api-77544fcf9d-jwg9p\" (UID: \"84cf65bc-1603-4782-9b88-15937c9c7c6f\") " pod="openstack/barbican-api-77544fcf9d-jwg9p" Oct 13 17:40:44 crc kubenswrapper[4720]: I1013 17:40:44.890480 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/84cf65bc-1603-4782-9b88-15937c9c7c6f-public-tls-certs\") pod \"barbican-api-77544fcf9d-jwg9p\" (UID: \"84cf65bc-1603-4782-9b88-15937c9c7c6f\") " pod="openstack/barbican-api-77544fcf9d-jwg9p" Oct 13 17:40:44 crc kubenswrapper[4720]: I1013 17:40:44.891174 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84cf65bc-1603-4782-9b88-15937c9c7c6f-config-data\") pod \"barbican-api-77544fcf9d-jwg9p\" (UID: \"84cf65bc-1603-4782-9b88-15937c9c7c6f\") " pod="openstack/barbican-api-77544fcf9d-jwg9p" Oct 13 17:40:44 crc kubenswrapper[4720]: I1013 17:40:44.898308 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/84cf65bc-1603-4782-9b88-15937c9c7c6f-internal-tls-certs\") pod \"barbican-api-77544fcf9d-jwg9p\" (UID: \"84cf65bc-1603-4782-9b88-15937c9c7c6f\") " pod="openstack/barbican-api-77544fcf9d-jwg9p" Oct 13 17:40:44 crc kubenswrapper[4720]: I1013 17:40:44.902863 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgqwf\" (UniqueName: \"kubernetes.io/projected/84cf65bc-1603-4782-9b88-15937c9c7c6f-kube-api-access-jgqwf\") pod \"barbican-api-77544fcf9d-jwg9p\" (UID: \"84cf65bc-1603-4782-9b88-15937c9c7c6f\") " pod="openstack/barbican-api-77544fcf9d-jwg9p" Oct 13 17:40:44 crc kubenswrapper[4720]: I1013 17:40:44.903546 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84cf65bc-1603-4782-9b88-15937c9c7c6f-combined-ca-bundle\") pod \"barbican-api-77544fcf9d-jwg9p\" (UID: \"84cf65bc-1603-4782-9b88-15937c9c7c6f\") " pod="openstack/barbican-api-77544fcf9d-jwg9p" Oct 13 17:40:45 crc kubenswrapper[4720]: I1013 17:40:45.060486 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-hrltm" event={"ID":"aee71b90-ce44-4a05-ad3c-dc4f8ccf3d2b","Type":"ContainerStarted","Data":"f0ecab07f82314444b153181b1e3feb280a903152b88ad306f18918863152e52"} Oct 13 17:40:45 crc kubenswrapper[4720]: I1013 17:40:45.060841 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-75c8ddd69c-hrltm" Oct 13 17:40:45 crc kubenswrapper[4720]: I1013 17:40:45.064491 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7f8746d996-snqjj" event={"ID":"83d7bdbc-2466-4272-bc8f-54afd0d7a9de","Type":"ContainerStarted","Data":"5fd8435bb6ffbd686fa3db27072e910dd2107b054b171e9d0a0be01bae37acc0"} Oct 13 17:40:45 crc kubenswrapper[4720]: I1013 17:40:45.064664 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7f8746d996-snqjj" Oct 13 17:40:45 crc kubenswrapper[4720]: I1013 17:40:45.084990 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-75c8ddd69c-hrltm" podStartSLOduration=3.084971402 podStartE2EDuration="3.084971402s" podCreationTimestamp="2025-10-13 17:40:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:40:45.080449736 +0000 UTC m=+990.537699868" watchObservedRunningTime="2025-10-13 17:40:45.084971402 +0000 UTC m=+990.542221544" Oct 13 17:40:45 crc kubenswrapper[4720]: I1013 17:40:45.101789 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7f8746d996-snqjj" podStartSLOduration=3.101774134 
podStartE2EDuration="3.101774134s" podCreationTimestamp="2025-10-13 17:40:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:40:45.097497254 +0000 UTC m=+990.554747386" watchObservedRunningTime="2025-10-13 17:40:45.101774134 +0000 UTC m=+990.559024266" Oct 13 17:40:45 crc kubenswrapper[4720]: I1013 17:40:45.119851 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-77544fcf9d-jwg9p" Oct 13 17:40:45 crc kubenswrapper[4720]: I1013 17:40:45.193638 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ea59751-3ee4-4404-99c7-06d02cec4f25" path="/var/lib/kubelet/pods/0ea59751-3ee4-4404-99c7-06d02cec4f25/volumes" Oct 13 17:40:45 crc kubenswrapper[4720]: I1013 17:40:45.605271 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-77544fcf9d-jwg9p"] Oct 13 17:40:46 crc kubenswrapper[4720]: I1013 17:40:46.076763 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-77879c6d6f-fqm88" event={"ID":"f39030ec-2975-459a-972e-4928cb31e15a","Type":"ContainerStarted","Data":"583b27eac633f3ba3ee1a1726bc74f1953a9e74484734332462b264750aeac54"} Oct 13 17:40:46 crc kubenswrapper[4720]: I1013 17:40:46.077065 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-77879c6d6f-fqm88" event={"ID":"f39030ec-2975-459a-972e-4928cb31e15a","Type":"ContainerStarted","Data":"c1ff5e6d28b48b21d96bd07b2527a88c9113c175d73e4344c9a1d0f0fa053683"} Oct 13 17:40:46 crc kubenswrapper[4720]: I1013 17:40:46.079335 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-77544fcf9d-jwg9p" event={"ID":"84cf65bc-1603-4782-9b88-15937c9c7c6f","Type":"ContainerStarted","Data":"9849f8fdad0a809c78f1eca65460ef211a40b43eeda2d7cccf6473b2fc7d2199"} Oct 13 17:40:46 crc kubenswrapper[4720]: I1013 17:40:46.079370 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-77544fcf9d-jwg9p" event={"ID":"84cf65bc-1603-4782-9b88-15937c9c7c6f","Type":"ContainerStarted","Data":"ec4a03cd8714a19624321a0b8da379c07067e05bd1815619774ab8a1d3cccf1c"} Oct 13 17:40:46 crc kubenswrapper[4720]: I1013 17:40:46.081679 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"590fe630-a315-4fe6-b90d-298fbcc6619c","Type":"ContainerStarted","Data":"7b3bfde79f2808de4c0373d3f27221eaf79bf720f2a19f1fbb518930bb2b6084"} Oct 13 17:40:46 crc kubenswrapper[4720]: I1013 17:40:46.084385 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-846978fd94-gg45m" event={"ID":"73bee848-defc-4a29-b1a2-a359359e3c67","Type":"ContainerStarted","Data":"bfa7066234e25ed206e462369aff2fc3e9a6b2380fceb275fa75da34621ad6fc"} Oct 13 17:40:46 crc kubenswrapper[4720]: I1013 17:40:46.084413 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-846978fd94-gg45m" event={"ID":"73bee848-defc-4a29-b1a2-a359359e3c67","Type":"ContainerStarted","Data":"532d4c31c133edb2c2920ec8c126a1b55585ac7f23ad2452e919d7a3ee66e9ab"} Oct 13 17:40:46 crc kubenswrapper[4720]: I1013 17:40:46.084875 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7f8746d996-snqjj" Oct 13 17:40:46 crc kubenswrapper[4720]: I1013 17:40:46.099778 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-77879c6d6f-fqm88" 
podStartSLOduration=1.976489708 podStartE2EDuration="4.099757663s" podCreationTimestamp="2025-10-13 17:40:42 +0000 UTC" firstStartedPulling="2025-10-13 17:40:42.994880401 +0000 UTC m=+988.452130533" lastFinishedPulling="2025-10-13 17:40:45.118148356 +0000 UTC m=+990.575398488" observedRunningTime="2025-10-13 17:40:46.095486953 +0000 UTC m=+991.552737085" watchObservedRunningTime="2025-10-13 17:40:46.099757663 +0000 UTC m=+991.557007795" Oct 13 17:40:46 crc kubenswrapper[4720]: I1013 17:40:46.115543 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-846978fd94-gg45m" podStartSLOduration=2.296592155 podStartE2EDuration="4.115522658s" podCreationTimestamp="2025-10-13 17:40:42 +0000 UTC" firstStartedPulling="2025-10-13 17:40:43.300315531 +0000 UTC m=+988.757565663" lastFinishedPulling="2025-10-13 17:40:45.119246034 +0000 UTC m=+990.576496166" observedRunningTime="2025-10-13 17:40:46.114828211 +0000 UTC m=+991.572078343" watchObservedRunningTime="2025-10-13 17:40:46.115522658 +0000 UTC m=+991.572772790" Oct 13 17:40:47 crc kubenswrapper[4720]: I1013 17:40:47.094507 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-77544fcf9d-jwg9p" event={"ID":"84cf65bc-1603-4782-9b88-15937c9c7c6f","Type":"ContainerStarted","Data":"6ef7b5eab8d9bae4b4d0421ba7f3d26ceca28e7f11f08d641de8f3acb78f4301"} Oct 13 17:40:47 crc kubenswrapper[4720]: I1013 17:40:47.095486 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-77544fcf9d-jwg9p" Oct 13 17:40:47 crc kubenswrapper[4720]: I1013 17:40:47.095507 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-77544fcf9d-jwg9p" Oct 13 17:40:47 crc kubenswrapper[4720]: I1013 17:40:47.098355 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"590fe630-a315-4fe6-b90d-298fbcc6619c","Type":"ContainerStarted","Data":"e28570c9aab9f51b2eb41e6550e7cdfc518782eae4865b54b64c305b1c23e95c"} Oct 13 17:40:47 crc kubenswrapper[4720]: I1013 17:40:47.098407 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"590fe630-a315-4fe6-b90d-298fbcc6619c","Type":"ContainerStarted","Data":"6eb9fd564a8906abcb33a17c5fd02f869620b0bd5b6232ce86a3dbe00885e01a"} Oct 13 17:40:48 crc kubenswrapper[4720]: I1013 17:40:48.466012 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-5744dfc665-n6ts6" Oct 13 17:40:48 crc kubenswrapper[4720]: I1013 17:40:48.479743 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-77544fcf9d-jwg9p" podStartSLOduration=4.479723542 podStartE2EDuration="4.479723542s" podCreationTimestamp="2025-10-13 17:40:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:40:47.121070852 +0000 UTC m=+992.578320994" watchObservedRunningTime="2025-10-13 17:40:48.479723542 +0000 UTC m=+993.936973674" Oct 13 17:40:48 crc kubenswrapper[4720]: I1013 17:40:48.933816 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Oct 13 17:40:48 crc kubenswrapper[4720]: I1013 17:40:48.934981 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Oct 13 17:40:48 crc kubenswrapper[4720]: I1013 17:40:48.938120 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Oct 13 17:40:48 crc kubenswrapper[4720]: I1013 17:40:48.938478 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-wqj26" Oct 13 17:40:48 crc kubenswrapper[4720]: I1013 17:40:48.940287 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Oct 13 17:40:48 crc kubenswrapper[4720]: I1013 17:40:48.942172 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Oct 13 17:40:48 crc kubenswrapper[4720]: I1013 17:40:48.965719 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f87dfa54-2548-4cf1-ad02-c7663263650c-openstack-config-secret\") pod \"openstackclient\" (UID: \"f87dfa54-2548-4cf1-ad02-c7663263650c\") " pod="openstack/openstackclient" Oct 13 17:40:48 crc kubenswrapper[4720]: I1013 17:40:48.965795 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f87dfa54-2548-4cf1-ad02-c7663263650c-combined-ca-bundle\") pod \"openstackclient\" (UID: \"f87dfa54-2548-4cf1-ad02-c7663263650c\") " pod="openstack/openstackclient" Oct 13 17:40:48 crc kubenswrapper[4720]: I1013 17:40:48.965840 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f87dfa54-2548-4cf1-ad02-c7663263650c-openstack-config\") pod \"openstackclient\" (UID: \"f87dfa54-2548-4cf1-ad02-c7663263650c\") " pod="openstack/openstackclient" Oct 13 17:40:48 crc kubenswrapper[4720]: I1013 17:40:48.965871 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hwzm\" (UniqueName: \"kubernetes.io/projected/f87dfa54-2548-4cf1-ad02-c7663263650c-kube-api-access-4hwzm\") pod \"openstackclient\" (UID: \"f87dfa54-2548-4cf1-ad02-c7663263650c\") " pod="openstack/openstackclient" Oct 13 17:40:49 crc kubenswrapper[4720]: I1013 17:40:49.067801 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f87dfa54-2548-4cf1-ad02-c7663263650c-openstack-config-secret\") pod \"openstackclient\" (UID: \"f87dfa54-2548-4cf1-ad02-c7663263650c\") " pod="openstack/openstackclient" Oct 13 17:40:49 crc kubenswrapper[4720]: I1013 17:40:49.067890 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f87dfa54-2548-4cf1-ad02-c7663263650c-combined-ca-bundle\") pod \"openstackclient\" (UID: \"f87dfa54-2548-4cf1-ad02-c7663263650c\") " pod="openstack/openstackclient" Oct 13 17:40:49 crc kubenswrapper[4720]: I1013 17:40:49.067964 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f87dfa54-2548-4cf1-ad02-c7663263650c-openstack-config\") pod \"openstackclient\" (UID: \"f87dfa54-2548-4cf1-ad02-c7663263650c\") " pod="openstack/openstackclient" Oct 13 17:40:49 crc kubenswrapper[4720]: I1013 17:40:49.067999 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-4hwzm\" (UniqueName: \"kubernetes.io/projected/f87dfa54-2548-4cf1-ad02-c7663263650c-kube-api-access-4hwzm\") pod \"openstackclient\" (UID: \"f87dfa54-2548-4cf1-ad02-c7663263650c\") " pod="openstack/openstackclient" Oct 13 17:40:49 crc kubenswrapper[4720]: I1013 17:40:49.068989 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f87dfa54-2548-4cf1-ad02-c7663263650c-openstack-config\") pod \"openstackclient\" (UID: \"f87dfa54-2548-4cf1-ad02-c7663263650c\") " pod="openstack/openstackclient" Oct 13 17:40:49 crc kubenswrapper[4720]: I1013 17:40:49.074750 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f87dfa54-2548-4cf1-ad02-c7663263650c-combined-ca-bundle\") pod \"openstackclient\" (UID: \"f87dfa54-2548-4cf1-ad02-c7663263650c\") " pod="openstack/openstackclient" Oct 13 17:40:49 crc kubenswrapper[4720]: I1013 17:40:49.076523 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f87dfa54-2548-4cf1-ad02-c7663263650c-openstack-config-secret\") pod \"openstackclient\" (UID: \"f87dfa54-2548-4cf1-ad02-c7663263650c\") " pod="openstack/openstackclient" Oct 13 17:40:49 crc kubenswrapper[4720]: I1013 17:40:49.094862 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hwzm\" (UniqueName: \"kubernetes.io/projected/f87dfa54-2548-4cf1-ad02-c7663263650c-kube-api-access-4hwzm\") pod \"openstackclient\" (UID: \"f87dfa54-2548-4cf1-ad02-c7663263650c\") " pod="openstack/openstackclient" Oct 13 17:40:49 crc kubenswrapper[4720]: I1013 17:40:49.125569 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"590fe630-a315-4fe6-b90d-298fbcc6619c","Type":"ContainerStarted","Data":"c047a3ae953a1fbf866b60ed26c57ce0ffe509cd034ae62ff29d56506356fdb2"} Oct 13 17:40:49 crc kubenswrapper[4720]: I1013 17:40:49.174347 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.942708772 podStartE2EDuration="7.174326825s" podCreationTimestamp="2025-10-13 17:40:42 +0000 UTC" firstStartedPulling="2025-10-13 17:40:43.709967622 +0000 UTC m=+989.167217754" lastFinishedPulling="2025-10-13 17:40:47.941585675 +0000 UTC m=+993.398835807" observedRunningTime="2025-10-13 17:40:49.162217014 +0000 UTC m=+994.619467166" watchObservedRunningTime="2025-10-13 17:40:49.174326825 +0000 UTC m=+994.631576957" Oct 13 17:40:49 crc kubenswrapper[4720]: E1013 17:40:49.178508 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-2htwh" podUID="444e35b8-1d2a-4d83-be6c-2184ae0e3110" Oct 13 17:40:49 crc kubenswrapper[4720]: I1013 17:40:49.255592 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Oct 13 17:40:49 crc kubenswrapper[4720]: I1013 17:40:49.751848 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Oct 13 17:40:49 crc kubenswrapper[4720]: I1013 17:40:49.793933 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7f8746d996-snqjj" Oct 13 17:40:50 crc kubenswrapper[4720]: I1013 17:40:50.134085 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"f87dfa54-2548-4cf1-ad02-c7663263650c","Type":"ContainerStarted","Data":"d846f63b1ecfb229fe45ac57b75a247c39296bdffa5bb21e5f09870cc6468709"} Oct 13 17:40:50 crc kubenswrapper[4720]: I1013 17:40:50.134417 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 13 17:40:51 crc kubenswrapper[4720]: I1013 17:40:51.389822 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7f8746d996-snqjj" Oct 13 17:40:51 crc kubenswrapper[4720]: I1013 17:40:51.693801 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-77544fcf9d-jwg9p" Oct 13 17:40:52 crc kubenswrapper[4720]: I1013 17:40:52.088455 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7df8489788-ntn24" podUID="139c2e02-2c20-4a21-a5c0-753c6003473b" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.147:8443: connect: connection refused" Oct 13 17:40:52 crc kubenswrapper[4720]: I1013 17:40:52.630333 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-694bd85589-jdgbb"] Oct 13 17:40:52 crc kubenswrapper[4720]: I1013 17:40:52.632453 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-694bd85589-jdgbb" Oct 13 17:40:52 crc kubenswrapper[4720]: I1013 17:40:52.635705 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Oct 13 17:40:52 crc kubenswrapper[4720]: I1013 17:40:52.635880 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Oct 13 17:40:52 crc kubenswrapper[4720]: I1013 17:40:52.635989 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Oct 13 17:40:52 crc kubenswrapper[4720]: I1013 17:40:52.637558 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-694bd85589-jdgbb"] Oct 13 17:40:52 crc kubenswrapper[4720]: I1013 17:40:52.740334 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d51a0725-9566-428f-a34b-3b0345774d1f-log-httpd\") pod \"swift-proxy-694bd85589-jdgbb\" (UID: \"d51a0725-9566-428f-a34b-3b0345774d1f\") " pod="openstack/swift-proxy-694bd85589-jdgbb" Oct 13 17:40:52 crc kubenswrapper[4720]: I1013 17:40:52.740402 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d51a0725-9566-428f-a34b-3b0345774d1f-run-httpd\") pod \"swift-proxy-694bd85589-jdgbb\" (UID: \"d51a0725-9566-428f-a34b-3b0345774d1f\") " pod="openstack/swift-proxy-694bd85589-jdgbb" Oct 13 17:40:52 crc kubenswrapper[4720]: I1013 17:40:52.740420 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d51a0725-9566-428f-a34b-3b0345774d1f-internal-tls-certs\") pod \"swift-proxy-694bd85589-jdgbb\" (UID: \"d51a0725-9566-428f-a34b-3b0345774d1f\") " pod="openstack/swift-proxy-694bd85589-jdgbb" Oct 13 17:40:52 crc kubenswrapper[4720]: I1013 17:40:52.740489 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d51a0725-9566-428f-a34b-3b0345774d1f-combined-ca-bundle\") pod \"swift-proxy-694bd85589-jdgbb\" (UID: \"d51a0725-9566-428f-a34b-3b0345774d1f\") " pod="openstack/swift-proxy-694bd85589-jdgbb" Oct 13 17:40:52 crc kubenswrapper[4720]: I1013 17:40:52.740511 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvrv8\" (UniqueName: \"kubernetes.io/projected/d51a0725-9566-428f-a34b-3b0345774d1f-kube-api-access-hvrv8\") pod \"swift-proxy-694bd85589-jdgbb\" (UID: \"d51a0725-9566-428f-a34b-3b0345774d1f\") " pod="openstack/swift-proxy-694bd85589-jdgbb" Oct 13 17:40:52 crc kubenswrapper[4720]: I1013 17:40:52.740567 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d51a0725-9566-428f-a34b-3b0345774d1f-public-tls-certs\") pod \"swift-proxy-694bd85589-jdgbb\" (UID: \"d51a0725-9566-428f-a34b-3b0345774d1f\") " pod="openstack/swift-proxy-694bd85589-jdgbb" Oct 13 17:40:52 crc kubenswrapper[4720]: I1013 17:40:52.740591 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d51a0725-9566-428f-a34b-3b0345774d1f-etc-swift\") pod \"swift-proxy-694bd85589-jdgbb\" (UID: \"d51a0725-9566-428f-a34b-3b0345774d1f\") " 
pod="openstack/swift-proxy-694bd85589-jdgbb" Oct 13 17:40:52 crc kubenswrapper[4720]: I1013 17:40:52.740609 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d51a0725-9566-428f-a34b-3b0345774d1f-config-data\") pod \"swift-proxy-694bd85589-jdgbb\" (UID: \"d51a0725-9566-428f-a34b-3b0345774d1f\") " pod="openstack/swift-proxy-694bd85589-jdgbb" Oct 13 17:40:52 crc kubenswrapper[4720]: I1013 17:40:52.841804 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvrv8\" (UniqueName: \"kubernetes.io/projected/d51a0725-9566-428f-a34b-3b0345774d1f-kube-api-access-hvrv8\") pod \"swift-proxy-694bd85589-jdgbb\" (UID: \"d51a0725-9566-428f-a34b-3b0345774d1f\") " pod="openstack/swift-proxy-694bd85589-jdgbb" Oct 13 17:40:52 crc kubenswrapper[4720]: I1013 17:40:52.841875 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d51a0725-9566-428f-a34b-3b0345774d1f-public-tls-certs\") pod \"swift-proxy-694bd85589-jdgbb\" (UID: \"d51a0725-9566-428f-a34b-3b0345774d1f\") " pod="openstack/swift-proxy-694bd85589-jdgbb" Oct 13 17:40:52 crc kubenswrapper[4720]: I1013 17:40:52.841907 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d51a0725-9566-428f-a34b-3b0345774d1f-etc-swift\") pod \"swift-proxy-694bd85589-jdgbb\" (UID: \"d51a0725-9566-428f-a34b-3b0345774d1f\") " pod="openstack/swift-proxy-694bd85589-jdgbb" Oct 13 17:40:52 crc kubenswrapper[4720]: I1013 17:40:52.841924 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d51a0725-9566-428f-a34b-3b0345774d1f-config-data\") pod \"swift-proxy-694bd85589-jdgbb\" (UID: \"d51a0725-9566-428f-a34b-3b0345774d1f\") " pod="openstack/swift-proxy-694bd85589-jdgbb" Oct 13 17:40:52 crc kubenswrapper[4720]: I1013 17:40:52.841959 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d51a0725-9566-428f-a34b-3b0345774d1f-log-httpd\") pod \"swift-proxy-694bd85589-jdgbb\" (UID: \"d51a0725-9566-428f-a34b-3b0345774d1f\") " pod="openstack/swift-proxy-694bd85589-jdgbb" Oct 13 17:40:52 crc kubenswrapper[4720]: I1013 17:40:52.841990 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d51a0725-9566-428f-a34b-3b0345774d1f-run-httpd\") pod \"swift-proxy-694bd85589-jdgbb\" (UID: \"d51a0725-9566-428f-a34b-3b0345774d1f\") " pod="openstack/swift-proxy-694bd85589-jdgbb" Oct 13 17:40:52 crc kubenswrapper[4720]: I1013 17:40:52.842008 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d51a0725-9566-428f-a34b-3b0345774d1f-internal-tls-certs\") pod \"swift-proxy-694bd85589-jdgbb\" (UID: \"d51a0725-9566-428f-a34b-3b0345774d1f\") " pod="openstack/swift-proxy-694bd85589-jdgbb" Oct 13 17:40:52 crc kubenswrapper[4720]: I1013 17:40:52.842062 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d51a0725-9566-428f-a34b-3b0345774d1f-combined-ca-bundle\") pod \"swift-proxy-694bd85589-jdgbb\" (UID: \"d51a0725-9566-428f-a34b-3b0345774d1f\") " pod="openstack/swift-proxy-694bd85589-jdgbb" Oct 13 
17:40:52 crc kubenswrapper[4720]: I1013 17:40:52.842513 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d51a0725-9566-428f-a34b-3b0345774d1f-log-httpd\") pod \"swift-proxy-694bd85589-jdgbb\" (UID: \"d51a0725-9566-428f-a34b-3b0345774d1f\") " pod="openstack/swift-proxy-694bd85589-jdgbb" Oct 13 17:40:52 crc kubenswrapper[4720]: I1013 17:40:52.842677 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d51a0725-9566-428f-a34b-3b0345774d1f-run-httpd\") pod \"swift-proxy-694bd85589-jdgbb\" (UID: \"d51a0725-9566-428f-a34b-3b0345774d1f\") " pod="openstack/swift-proxy-694bd85589-jdgbb" Oct 13 17:40:52 crc kubenswrapper[4720]: I1013 17:40:52.851925 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d51a0725-9566-428f-a34b-3b0345774d1f-public-tls-certs\") pod \"swift-proxy-694bd85589-jdgbb\" (UID: \"d51a0725-9566-428f-a34b-3b0345774d1f\") " pod="openstack/swift-proxy-694bd85589-jdgbb" Oct 13 17:40:52 crc kubenswrapper[4720]: I1013 17:40:52.852124 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d51a0725-9566-428f-a34b-3b0345774d1f-config-data\") pod \"swift-proxy-694bd85589-jdgbb\" (UID: \"d51a0725-9566-428f-a34b-3b0345774d1f\") " pod="openstack/swift-proxy-694bd85589-jdgbb" Oct 13 17:40:52 crc kubenswrapper[4720]: I1013 17:40:52.858933 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d51a0725-9566-428f-a34b-3b0345774d1f-combined-ca-bundle\") pod \"swift-proxy-694bd85589-jdgbb\" (UID: \"d51a0725-9566-428f-a34b-3b0345774d1f\") " pod="openstack/swift-proxy-694bd85589-jdgbb" Oct 13 17:40:52 crc kubenswrapper[4720]: I1013 17:40:52.860305 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d51a0725-9566-428f-a34b-3b0345774d1f-etc-swift\") pod \"swift-proxy-694bd85589-jdgbb\" (UID: \"d51a0725-9566-428f-a34b-3b0345774d1f\") " pod="openstack/swift-proxy-694bd85589-jdgbb" Oct 13 17:40:52 crc kubenswrapper[4720]: I1013 17:40:52.864218 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvrv8\" (UniqueName: \"kubernetes.io/projected/d51a0725-9566-428f-a34b-3b0345774d1f-kube-api-access-hvrv8\") pod \"swift-proxy-694bd85589-jdgbb\" (UID: \"d51a0725-9566-428f-a34b-3b0345774d1f\") " pod="openstack/swift-proxy-694bd85589-jdgbb" Oct 13 17:40:52 crc kubenswrapper[4720]: I1013 17:40:52.865646 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d51a0725-9566-428f-a34b-3b0345774d1f-internal-tls-certs\") pod \"swift-proxy-694bd85589-jdgbb\" (UID: \"d51a0725-9566-428f-a34b-3b0345774d1f\") " pod="openstack/swift-proxy-694bd85589-jdgbb" Oct 13 17:40:52 crc kubenswrapper[4720]: I1013 17:40:52.957316 4720 util.go:30] "No sandbox for pod can be found. 
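Need to start a new one" pod="openstack/swift-proxy-694bd85589-jdgbb"

The swift-proxy-694bd85589-jdgbb entries above show the kubelet's volume manager walking each of the pod's eight volumes through the same three steps: VerifyControllerAttachedVolume, then MountVolume, then MountVolume.SetUp succeeded. Below is a toy Go sketch of that desired-state/actual-state reconcile loop; the volume type and reconcile function are invented for illustration and stand in for kubelet's much larger volumemanager/reconciler.

    // Toy reconcile loop in the spirit of the reconciler_common.go entries above:
    // walk the desired volumes, verify attachment, mount whatever is not mounted.
    package main

    import "fmt"

    type volume struct{ name, plugin string }

    func reconcile(desired []volume, mounted map[string]bool) {
        for _, v := range desired {
            fmt.Printf("VerifyControllerAttachedVolume started for volume %q (%s)\n", v.name, v.plugin)
            if mounted[v.name] {
                continue // actual state already matches desired state
            }
            fmt.Printf("MountVolume started for volume %q\n", v.name)
            // ... the plugin's SetUp (secret/configmap/projected/empty-dir) runs here ...
            mounted[v.name] = true
            fmt.Printf("MountVolume.SetUp succeeded for volume %q\n", v.name)
        }
    }

    func main() {
        desired := []volume{
            {"log-httpd", "kubernetes.io/empty-dir"},
            {"config-data", "kubernetes.io/secret"},
            {"etc-swift", "kubernetes.io/projected"},
        }
        reconcile(desired, map[string]bool{})
    }
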
Oct 13 17:40:53 crc kubenswrapper[4720]: I1013 17:40:53.095337 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-75c8ddd69c-hrltm" Oct 13 17:40:53 crc kubenswrapper[4720]: I1013 17:40:53.214003 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-jdqmt"] Oct 13 17:40:53 crc kubenswrapper[4720]: I1013 17:40:53.214050 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-9dcbx" event={"ID":"887aa549-67e8-4d03-acba-dede202496db","Type":"ContainerStarted","Data":"78769fb9f2065139b88842f2335a23af9000cff6a3514aad858132138aa5cbf2"} Oct 13 17:40:53 crc kubenswrapper[4720]: I1013 17:40:53.218695 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8b5c85b87-jdqmt" podUID="35fb0ae5-1450-4f85-90f5-16a18667a582" containerName="dnsmasq-dns" containerID="cri-o://a050cdb202e55c8bf384a2da4137a41a8dc8220e8658429b0ec88fb3768b3fe1" gracePeriod=10 Oct 13 17:40:53 crc kubenswrapper[4720]: I1013 17:40:53.232617 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-77544fcf9d-jwg9p" Oct 13 17:40:53 crc kubenswrapper[4720]: I1013 17:40:53.256282 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-9dcbx" podStartSLOduration=3.48197014 podStartE2EDuration="39.256259636s" podCreationTimestamp="2025-10-13 17:40:14 +0000 UTC" firstStartedPulling="2025-10-13 17:40:15.804884257 +0000 UTC m=+961.262134390" lastFinishedPulling="2025-10-13 17:40:51.579173754 +0000 UTC m=+997.036423886" observedRunningTime="2025-10-13 17:40:53.239930796 +0000 UTC m=+998.697180928" watchObservedRunningTime="2025-10-13 17:40:53.256259636 +0000 UTC m=+998.713509768" Oct 13 17:40:53 crc kubenswrapper[4720]: I1013 17:40:53.328962 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7f8746d996-snqjj"] Oct 13 17:40:53 crc kubenswrapper[4720]: I1013 17:40:53.329167 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7f8746d996-snqjj" podUID="83d7bdbc-2466-4272-bc8f-54afd0d7a9de" containerName="barbican-api-log" containerID="cri-o://7f8ea84272aafa74d27bad45c8ff76c2c1de12973ce50d45b6ec53e35650ab6e" gracePeriod=30 Oct 13 17:40:53 crc kubenswrapper[4720]: I1013 17:40:53.329651 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7f8746d996-snqjj" podUID="83d7bdbc-2466-4272-bc8f-54afd0d7a9de" containerName="barbican-api" containerID="cri-o://5fd8435bb6ffbd686fa3db27072e910dd2107b054b171e9d0a0be01bae37acc0" gracePeriod=30 Oct 13 17:40:53 crc kubenswrapper[4720]: I1013 17:40:53.345353 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-7f8746d996-snqjj" podUID="83d7bdbc-2466-4272-bc8f-54afd0d7a9de" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.162:9311/healthcheck\": EOF" Oct 13 17:40:53 crc kubenswrapper[4720]: I1013 17:40:53.345789 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-7f8746d996-snqjj" podUID="83d7bdbc-2466-4272-bc8f-54afd0d7a9de" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.162:9311/healthcheck\": EOF" Oct 13 17:40:53 crc kubenswrapper[4720]: I1013 17:40:53.348384 4720 prober.go:107] "Probe failed" probeType="Readiness" 
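pod="openstack/barbican-api-7f8746d996-snqjj" podUID="83d7bdbc-2466-4272-bc8f-54afd0d7a9de" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.162:9311/healthcheck\": EOF"

The "Observed pod startup duration" entry for cinder-db-sync-9dcbx above encodes two derived values: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (17:40:53.256259636 - 17:40:14 = 39.256259636s), and podStartSLOduration additionally subtracts the image-pull window (39.256259636s - (17:40:51.579173754 - 17:40:15.804884257) ≈ 3.48197014s, as logged). A minimal Go sketch reproducing that arithmetic from the logged timestamps, with the monotonic "m=+..." suffixes stripped; this is just the same subtraction, not kubelet's pod_startup_latency_tracker code.

    // Reproduce the cinder-db-sync-9dcbx startup numbers logged above.
    // Timestamps are copied from the log, minus the monotonic "m=+..." suffix.
    package main

    import (
        "fmt"
        "time"
    )

    const layout = "2006-01-02 15:04:05.999999999 -0700 MST" // matches Go's time.Time.String()

    func mustParse(s string) time.Time {
        t, err := time.Parse(layout, s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        created := mustParse("2025-10-13 17:40:14 +0000 UTC")             // podCreationTimestamp
        firstPull := mustParse("2025-10-13 17:40:15.804884257 +0000 UTC") // firstStartedPulling
        lastPull := mustParse("2025-10-13 17:40:51.579173754 +0000 UTC")  // lastFinishedPulling
        observed := mustParse("2025-10-13 17:40:53.256259636 +0000 UTC")  // watchObservedRunningTime

        e2e := observed.Sub(created)         // podStartE2EDuration: 39.256259636s
        slo := e2e - lastPull.Sub(firstPull) // podStartSLOduration: ~3.481970139s
        fmt.Println(e2e, slo)
    }
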
Oct 13 17:40:53 crc kubenswrapper[4720]: I1013 17:40:53.348809 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7f8746d996-snqjj" podUID="83d7bdbc-2466-4272-bc8f-54afd0d7a9de" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.162:9311/healthcheck\": EOF" Oct 13 17:40:53 crc kubenswrapper[4720]: I1013 17:40:53.751616 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-694bd85589-jdgbb"] Oct 13 17:40:53 crc kubenswrapper[4720]: I1013 17:40:53.766818 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-jdqmt" Oct 13 17:40:53 crc kubenswrapper[4720]: I1013 17:40:53.892273 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35fb0ae5-1450-4f85-90f5-16a18667a582-config\") pod \"35fb0ae5-1450-4f85-90f5-16a18667a582\" (UID: \"35fb0ae5-1450-4f85-90f5-16a18667a582\") " Oct 13 17:40:53 crc kubenswrapper[4720]: I1013 17:40:53.892558 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/35fb0ae5-1450-4f85-90f5-16a18667a582-ovsdbserver-sb\") pod \"35fb0ae5-1450-4f85-90f5-16a18667a582\" (UID: \"35fb0ae5-1450-4f85-90f5-16a18667a582\") " Oct 13 17:40:53 crc kubenswrapper[4720]: I1013 17:40:53.892611 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/35fb0ae5-1450-4f85-90f5-16a18667a582-ovsdbserver-nb\") pod \"35fb0ae5-1450-4f85-90f5-16a18667a582\" (UID: \"35fb0ae5-1450-4f85-90f5-16a18667a582\") " Oct 13 17:40:53 crc kubenswrapper[4720]: I1013 17:40:53.892693 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8lnqx\" (UniqueName: \"kubernetes.io/projected/35fb0ae5-1450-4f85-90f5-16a18667a582-kube-api-access-8lnqx\") pod \"35fb0ae5-1450-4f85-90f5-16a18667a582\" (UID: \"35fb0ae5-1450-4f85-90f5-16a18667a582\") " Oct 13 17:40:53 crc kubenswrapper[4720]: I1013 17:40:53.892715 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/35fb0ae5-1450-4f85-90f5-16a18667a582-dns-svc\") pod \"35fb0ae5-1450-4f85-90f5-16a18667a582\" (UID: \"35fb0ae5-1450-4f85-90f5-16a18667a582\") " Oct 13 17:40:53 crc kubenswrapper[4720]: I1013 17:40:53.892732 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/35fb0ae5-1450-4f85-90f5-16a18667a582-dns-swift-storage-0\") pod \"35fb0ae5-1450-4f85-90f5-16a18667a582\" (UID: \"35fb0ae5-1450-4f85-90f5-16a18667a582\") " Oct 13 17:40:53 crc kubenswrapper[4720]: I1013 17:40:53.903563 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35fb0ae5-1450-4f85-90f5-16a18667a582-kube-api-access-8lnqx" (OuterVolumeSpecName: "kube-api-access-8lnqx") pod "35fb0ae5-1450-4f85-90f5-16a18667a582" (UID: "35fb0ae5-1450-4f85-90f5-16a18667a582"). InnerVolumeSpecName "kube-api-access-8lnqx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:40:53 crc kubenswrapper[4720]: I1013 17:40:53.950159 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35fb0ae5-1450-4f85-90f5-16a18667a582-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "35fb0ae5-1450-4f85-90f5-16a18667a582" (UID: "35fb0ae5-1450-4f85-90f5-16a18667a582"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:40:53 crc kubenswrapper[4720]: I1013 17:40:53.975497 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35fb0ae5-1450-4f85-90f5-16a18667a582-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "35fb0ae5-1450-4f85-90f5-16a18667a582" (UID: "35fb0ae5-1450-4f85-90f5-16a18667a582"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:40:53 crc kubenswrapper[4720]: I1013 17:40:53.980456 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35fb0ae5-1450-4f85-90f5-16a18667a582-config" (OuterVolumeSpecName: "config") pod "35fb0ae5-1450-4f85-90f5-16a18667a582" (UID: "35fb0ae5-1450-4f85-90f5-16a18667a582"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:40:53 crc kubenswrapper[4720]: I1013 17:40:53.994621 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8lnqx\" (UniqueName: \"kubernetes.io/projected/35fb0ae5-1450-4f85-90f5-16a18667a582-kube-api-access-8lnqx\") on node \"crc\" DevicePath \"\"" Oct 13 17:40:53 crc kubenswrapper[4720]: I1013 17:40:53.994652 4720 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/35fb0ae5-1450-4f85-90f5-16a18667a582-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 13 17:40:53 crc kubenswrapper[4720]: I1013 17:40:53.994663 4720 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/35fb0ae5-1450-4f85-90f5-16a18667a582-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 13 17:40:53 crc kubenswrapper[4720]: I1013 17:40:53.994672 4720 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35fb0ae5-1450-4f85-90f5-16a18667a582-config\") on node \"crc\" DevicePath \"\"" Oct 13 17:40:54 crc kubenswrapper[4720]: I1013 17:40:54.014531 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35fb0ae5-1450-4f85-90f5-16a18667a582-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "35fb0ae5-1450-4f85-90f5-16a18667a582" (UID: "35fb0ae5-1450-4f85-90f5-16a18667a582"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:40:54 crc kubenswrapper[4720]: I1013 17:40:54.022751 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35fb0ae5-1450-4f85-90f5-16a18667a582-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "35fb0ae5-1450-4f85-90f5-16a18667a582" (UID: "35fb0ae5-1450-4f85-90f5-16a18667a582"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:40:54 crc kubenswrapper[4720]: I1013 17:40:54.096016 4720 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/35fb0ae5-1450-4f85-90f5-16a18667a582-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 13 17:40:54 crc kubenswrapper[4720]: I1013 17:40:54.096055 4720 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/35fb0ae5-1450-4f85-90f5-16a18667a582-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 13 17:40:54 crc kubenswrapper[4720]: I1013 17:40:54.212382 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-694bd85589-jdgbb" event={"ID":"d51a0725-9566-428f-a34b-3b0345774d1f","Type":"ContainerStarted","Data":"9a04e4cab87bee48530ecc015ee3940284062345ff514646a356d1565fba16c2"} Oct 13 17:40:54 crc kubenswrapper[4720]: I1013 17:40:54.212783 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-694bd85589-jdgbb" event={"ID":"d51a0725-9566-428f-a34b-3b0345774d1f","Type":"ContainerStarted","Data":"6f61596d994943abbe7de6c0a68f0c43dd39f7999ea711e44bfd9811a8a3d3f1"} Oct 13 17:40:54 crc kubenswrapper[4720]: I1013 17:40:54.213997 4720 generic.go:334] "Generic (PLEG): container finished" podID="35fb0ae5-1450-4f85-90f5-16a18667a582" containerID="a050cdb202e55c8bf384a2da4137a41a8dc8220e8658429b0ec88fb3768b3fe1" exitCode=0 Oct 13 17:40:54 crc kubenswrapper[4720]: I1013 17:40:54.214044 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-jdqmt" event={"ID":"35fb0ae5-1450-4f85-90f5-16a18667a582","Type":"ContainerDied","Data":"a050cdb202e55c8bf384a2da4137a41a8dc8220e8658429b0ec88fb3768b3fe1"} Oct 13 17:40:54 crc kubenswrapper[4720]: I1013 17:40:54.214069 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-jdqmt" event={"ID":"35fb0ae5-1450-4f85-90f5-16a18667a582","Type":"ContainerDied","Data":"948cbf1221f3de5f661fa23b20847678cfcf01771e6811bda8bf16f75e252d85"} Oct 13 17:40:54 crc kubenswrapper[4720]: I1013 17:40:54.214085 4720 scope.go:117] "RemoveContainer" containerID="a050cdb202e55c8bf384a2da4137a41a8dc8220e8658429b0ec88fb3768b3fe1" Oct 13 17:40:54 crc kubenswrapper[4720]: I1013 17:40:54.214147 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-jdqmt" Oct 13 17:40:54 crc kubenswrapper[4720]: I1013 17:40:54.217159 4720 generic.go:334] "Generic (PLEG): container finished" podID="83d7bdbc-2466-4272-bc8f-54afd0d7a9de" containerID="7f8ea84272aafa74d27bad45c8ff76c2c1de12973ce50d45b6ec53e35650ab6e" exitCode=143 Oct 13 17:40:54 crc kubenswrapper[4720]: I1013 17:40:54.217185 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7f8746d996-snqjj" event={"ID":"83d7bdbc-2466-4272-bc8f-54afd0d7a9de","Type":"ContainerDied","Data":"7f8ea84272aafa74d27bad45c8ff76c2c1de12973ce50d45b6ec53e35650ab6e"} Oct 13 17:40:54 crc kubenswrapper[4720]: I1013 17:40:54.264436 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-jdqmt"] Oct 13 17:40:54 crc kubenswrapper[4720]: I1013 17:40:54.268982 4720 scope.go:117] "RemoveContainer" containerID="30f6d59962ba89ea59abf1c7b6702b5153c7c63349c90d93aaad1fbc1bdf46a6" Oct 13 17:40:54 crc kubenswrapper[4720]: I1013 17:40:54.270315 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-jdqmt"] Oct 13 17:40:54 crc kubenswrapper[4720]: I1013 17:40:54.290703 4720 scope.go:117] "RemoveContainer" containerID="a050cdb202e55c8bf384a2da4137a41a8dc8220e8658429b0ec88fb3768b3fe1" Oct 13 17:40:54 crc kubenswrapper[4720]: E1013 17:40:54.291112 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a050cdb202e55c8bf384a2da4137a41a8dc8220e8658429b0ec88fb3768b3fe1\": container with ID starting with a050cdb202e55c8bf384a2da4137a41a8dc8220e8658429b0ec88fb3768b3fe1 not found: ID does not exist" containerID="a050cdb202e55c8bf384a2da4137a41a8dc8220e8658429b0ec88fb3768b3fe1" Oct 13 17:40:54 crc kubenswrapper[4720]: I1013 17:40:54.291146 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a050cdb202e55c8bf384a2da4137a41a8dc8220e8658429b0ec88fb3768b3fe1"} err="failed to get container status \"a050cdb202e55c8bf384a2da4137a41a8dc8220e8658429b0ec88fb3768b3fe1\": rpc error: code = NotFound desc = could not find container \"a050cdb202e55c8bf384a2da4137a41a8dc8220e8658429b0ec88fb3768b3fe1\": container with ID starting with a050cdb202e55c8bf384a2da4137a41a8dc8220e8658429b0ec88fb3768b3fe1 not found: ID does not exist" Oct 13 17:40:54 crc kubenswrapper[4720]: I1013 17:40:54.291168 4720 scope.go:117] "RemoveContainer" containerID="30f6d59962ba89ea59abf1c7b6702b5153c7c63349c90d93aaad1fbc1bdf46a6" Oct 13 17:40:54 crc kubenswrapper[4720]: E1013 17:40:54.291404 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30f6d59962ba89ea59abf1c7b6702b5153c7c63349c90d93aaad1fbc1bdf46a6\": container with ID starting with 30f6d59962ba89ea59abf1c7b6702b5153c7c63349c90d93aaad1fbc1bdf46a6 not found: ID does not exist" containerID="30f6d59962ba89ea59abf1c7b6702b5153c7c63349c90d93aaad1fbc1bdf46a6" Oct 13 17:40:54 crc kubenswrapper[4720]: I1013 17:40:54.291426 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30f6d59962ba89ea59abf1c7b6702b5153c7c63349c90d93aaad1fbc1bdf46a6"} err="failed to get container status \"30f6d59962ba89ea59abf1c7b6702b5153c7c63349c90d93aaad1fbc1bdf46a6\": rpc error: code = NotFound desc = could not find container \"30f6d59962ba89ea59abf1c7b6702b5153c7c63349c90d93aaad1fbc1bdf46a6\": container with ID starting with 
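30f6d59962ba89ea59abf1c7b6702b5153c7c63349c90d93aaad1fbc1bdf46a6 not found: ID does not exist"

The RemoveContainer / "ContainerStatus from runtime service failed" pair above is harmless: the dnsmasq-dns pod's containers were already gone, so CRI-O answers gRPC NotFound and the kubelet treats the deletion as already complete. A sketch of that idempotent-cleanup pattern follows; removeContainer is a stand-in that always reports NotFound, not a real CRI client.

    // Idempotent container cleanup: a NotFound from the runtime means the
    // container is already gone, so deletion is considered successful.
    package main

    import (
        "fmt"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // Stand-in for a CRI RemoveContainer call against a container that was
    // already deleted, as happened to a050cdb2... above.
    func removeContainer(id string) error {
        return status.Errorf(codes.NotFound, "could not find container %q", id)
    }

    func cleanup(id string) error {
        if err := removeContainer(id); err != nil {
            if status.Code(err) == codes.NotFound {
                return nil // already removed: nothing left to do
            }
            return fmt.Errorf("failed to remove container %s: %w", id, err)
        }
        return nil
    }

    func main() {
        fmt.Println(cleanup("a050cdb2") == nil) // prints: true
    }
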
Oct 13 17:40:54 crc kubenswrapper[4720]: I1013 17:40:54.911843 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-9xpdq"] Oct 13 17:40:54 crc kubenswrapper[4720]: E1013 17:40:54.912181 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35fb0ae5-1450-4f85-90f5-16a18667a582" containerName="dnsmasq-dns" Oct 13 17:40:54 crc kubenswrapper[4720]: I1013 17:40:54.912305 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="35fb0ae5-1450-4f85-90f5-16a18667a582" containerName="dnsmasq-dns" Oct 13 17:40:54 crc kubenswrapper[4720]: E1013 17:40:54.912331 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35fb0ae5-1450-4f85-90f5-16a18667a582" containerName="init" Oct 13 17:40:54 crc kubenswrapper[4720]: I1013 17:40:54.912337 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="35fb0ae5-1450-4f85-90f5-16a18667a582" containerName="init" Oct 13 17:40:54 crc kubenswrapper[4720]: I1013 17:40:54.912540 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="35fb0ae5-1450-4f85-90f5-16a18667a582" containerName="dnsmasq-dns" Oct 13 17:40:54 crc kubenswrapper[4720]: I1013 17:40:54.913115 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-9xpdq" Oct 13 17:40:54 crc kubenswrapper[4720]: I1013 17:40:54.931729 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-9xpdq"] Oct 13 17:40:55 crc kubenswrapper[4720]: I1013 17:40:55.006248 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-9rbjx"] Oct 13 17:40:55 crc kubenswrapper[4720]: I1013 17:40:55.007483 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-9rbjx" Oct 13 17:40:55 crc kubenswrapper[4720]: I1013 17:40:55.011909 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwczl\" (UniqueName: \"kubernetes.io/projected/9259aed9-a7a7-4b45-baff-419fef6c83a3-kube-api-access-mwczl\") pod \"nova-api-db-create-9xpdq\" (UID: \"9259aed9-a7a7-4b45-baff-419fef6c83a3\") " pod="openstack/nova-api-db-create-9xpdq" Oct 13 17:40:55 crc kubenswrapper[4720]: I1013 17:40:55.012957 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-9rbjx"] Oct 13 17:40:55 crc kubenswrapper[4720]: I1013 17:40:55.099570 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-7qdd5"] Oct 13 17:40:55 crc kubenswrapper[4720]: I1013 17:40:55.101018 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-7qdd5" Oct 13 17:40:55 crc kubenswrapper[4720]: I1013 17:40:55.111083 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-7qdd5"] Oct 13 17:40:55 crc kubenswrapper[4720]: I1013 17:40:55.113893 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwczl\" (UniqueName: \"kubernetes.io/projected/9259aed9-a7a7-4b45-baff-419fef6c83a3-kube-api-access-mwczl\") pod \"nova-api-db-create-9xpdq\" (UID: \"9259aed9-a7a7-4b45-baff-419fef6c83a3\") " pod="openstack/nova-api-db-create-9xpdq" Oct 13 17:40:55 crc kubenswrapper[4720]: I1013 17:40:55.113981 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wn7x6\" (UniqueName: \"kubernetes.io/projected/d2353f5d-08a1-4365-b77b-321e29ade356-kube-api-access-wn7x6\") pod \"nova-cell0-db-create-9rbjx\" (UID: \"d2353f5d-08a1-4365-b77b-321e29ade356\") " pod="openstack/nova-cell0-db-create-9rbjx" Oct 13 17:40:55 crc kubenswrapper[4720]: I1013 17:40:55.137967 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwczl\" (UniqueName: \"kubernetes.io/projected/9259aed9-a7a7-4b45-baff-419fef6c83a3-kube-api-access-mwczl\") pod \"nova-api-db-create-9xpdq\" (UID: \"9259aed9-a7a7-4b45-baff-419fef6c83a3\") " pod="openstack/nova-api-db-create-9xpdq" Oct 13 17:40:55 crc kubenswrapper[4720]: I1013 17:40:55.185905 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35fb0ae5-1450-4f85-90f5-16a18667a582" path="/var/lib/kubelet/pods/35fb0ae5-1450-4f85-90f5-16a18667a582/volumes" Oct 13 17:40:55 crc kubenswrapper[4720]: I1013 17:40:55.215991 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gd4mq\" (UniqueName: \"kubernetes.io/projected/4164e6d6-57c4-45f9-b835-5f8cdfbb5044-kube-api-access-gd4mq\") pod \"nova-cell1-db-create-7qdd5\" (UID: \"4164e6d6-57c4-45f9-b835-5f8cdfbb5044\") " pod="openstack/nova-cell1-db-create-7qdd5" Oct 13 17:40:55 crc kubenswrapper[4720]: I1013 17:40:55.216059 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wn7x6\" (UniqueName: \"kubernetes.io/projected/d2353f5d-08a1-4365-b77b-321e29ade356-kube-api-access-wn7x6\") pod \"nova-cell0-db-create-9rbjx\" (UID: \"d2353f5d-08a1-4365-b77b-321e29ade356\") " pod="openstack/nova-cell0-db-create-9rbjx" Oct 13 17:40:55 crc kubenswrapper[4720]: I1013 17:40:55.232417 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-694bd85589-jdgbb" event={"ID":"d51a0725-9566-428f-a34b-3b0345774d1f","Type":"ContainerStarted","Data":"6439316351ad2286285a92a73d1044dc9501714c95b48eeac663e4962ef94c22"} Oct 13 17:40:55 crc kubenswrapper[4720]: I1013 17:40:55.233388 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-694bd85589-jdgbb" Oct 13 17:40:55 crc kubenswrapper[4720]: I1013 17:40:55.233416 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-694bd85589-jdgbb" Oct 13 17:40:55 crc kubenswrapper[4720]: I1013 17:40:55.245116 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wn7x6\" (UniqueName: \"kubernetes.io/projected/d2353f5d-08a1-4365-b77b-321e29ade356-kube-api-access-wn7x6\") pod \"nova-cell0-db-create-9rbjx\" (UID: \"d2353f5d-08a1-4365-b77b-321e29ade356\") " 
pod="openstack/nova-cell0-db-create-9rbjx" Oct 13 17:40:55 crc kubenswrapper[4720]: I1013 17:40:55.251137 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-9xpdq" Oct 13 17:40:55 crc kubenswrapper[4720]: I1013 17:40:55.264135 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-694bd85589-jdgbb" podStartSLOduration=3.264114051 podStartE2EDuration="3.264114051s" podCreationTimestamp="2025-10-13 17:40:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:40:55.258787094 +0000 UTC m=+1000.716037226" watchObservedRunningTime="2025-10-13 17:40:55.264114051 +0000 UTC m=+1000.721364183" Oct 13 17:40:55 crc kubenswrapper[4720]: I1013 17:40:55.318535 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gd4mq\" (UniqueName: \"kubernetes.io/projected/4164e6d6-57c4-45f9-b835-5f8cdfbb5044-kube-api-access-gd4mq\") pod \"nova-cell1-db-create-7qdd5\" (UID: \"4164e6d6-57c4-45f9-b835-5f8cdfbb5044\") " pod="openstack/nova-cell1-db-create-7qdd5" Oct 13 17:40:55 crc kubenswrapper[4720]: I1013 17:40:55.338657 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-9rbjx" Oct 13 17:40:55 crc kubenswrapper[4720]: I1013 17:40:55.339440 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gd4mq\" (UniqueName: \"kubernetes.io/projected/4164e6d6-57c4-45f9-b835-5f8cdfbb5044-kube-api-access-gd4mq\") pod \"nova-cell1-db-create-7qdd5\" (UID: \"4164e6d6-57c4-45f9-b835-5f8cdfbb5044\") " pod="openstack/nova-cell1-db-create-7qdd5" Oct 13 17:40:55 crc kubenswrapper[4720]: I1013 17:40:55.423561 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-7qdd5" Oct 13 17:40:55 crc kubenswrapper[4720]: I1013 17:40:55.773822 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-9xpdq"] Oct 13 17:40:56 crc kubenswrapper[4720]: I1013 17:40:56.058942 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-9rbjx"] Oct 13 17:40:56 crc kubenswrapper[4720]: I1013 17:40:56.234715 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 13 17:40:56 crc kubenswrapper[4720]: I1013 17:40:56.235037 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="590fe630-a315-4fe6-b90d-298fbcc6619c" containerName="ceilometer-central-agent" containerID="cri-o://7b3bfde79f2808de4c0373d3f27221eaf79bf720f2a19f1fbb518930bb2b6084" gracePeriod=30 Oct 13 17:40:56 crc kubenswrapper[4720]: I1013 17:40:56.235072 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="590fe630-a315-4fe6-b90d-298fbcc6619c" containerName="proxy-httpd" containerID="cri-o://c047a3ae953a1fbf866b60ed26c57ce0ffe509cd034ae62ff29d56506356fdb2" gracePeriod=30 Oct 13 17:40:56 crc kubenswrapper[4720]: I1013 17:40:56.235149 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="590fe630-a315-4fe6-b90d-298fbcc6619c" containerName="sg-core" containerID="cri-o://e28570c9aab9f51b2eb41e6550e7cdfc518782eae4865b54b64c305b1c23e95c" gracePeriod=30 Oct 13 17:40:56 crc kubenswrapper[4720]: I1013 17:40:56.235182 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="590fe630-a315-4fe6-b90d-298fbcc6619c" containerName="ceilometer-notification-agent" containerID="cri-o://6eb9fd564a8906abcb33a17c5fd02f869620b0bd5b6232ce86a3dbe00885e01a" gracePeriod=30 Oct 13 17:40:56 crc kubenswrapper[4720]: I1013 17:40:56.264830 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-9rbjx" event={"ID":"d2353f5d-08a1-4365-b77b-321e29ade356","Type":"ContainerStarted","Data":"8302c547e96f7db25ca2a96036cfbff907fff13c25fcfd8a5cdf776be6776fcb"} Oct 13 17:40:56 crc kubenswrapper[4720]: I1013 17:40:56.267170 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-7qdd5"] Oct 13 17:40:56 crc kubenswrapper[4720]: I1013 17:40:56.271652 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-9xpdq" event={"ID":"9259aed9-a7a7-4b45-baff-419fef6c83a3","Type":"ContainerStarted","Data":"32ab64b85d3c54428b46ecbbbba11664fa8982c4d4cf2a5c971eee79b23cf8ac"} Oct 13 17:40:56 crc kubenswrapper[4720]: I1013 17:40:56.271909 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-9xpdq" event={"ID":"9259aed9-a7a7-4b45-baff-419fef6c83a3","Type":"ContainerStarted","Data":"10b6620c4369686e9085ae3d8c5febb24e0f5af96f9a7b6f7f5cdf1ea45c471e"} Oct 13 17:40:57 crc kubenswrapper[4720]: I1013 17:40:57.281646 4720 generic.go:334] "Generic (PLEG): container finished" podID="4164e6d6-57c4-45f9-b835-5f8cdfbb5044" containerID="3defcbd6df54ceee0fd0568c4a93d350e5180a10ac4fbe28d11fde627ac1da9f" exitCode=0 Oct 13 17:40:57 crc kubenswrapper[4720]: I1013 17:40:57.281769 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-7qdd5" 
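event={"ID":"4164e6d6-57c4-45f9-b835-5f8cdfbb5044","Type":"ContainerDied","Data":"3defcbd6df54ceee0fd0568c4a93d350e5180a10ac4fbe28d11fde627ac1da9f"}

The "Killing container with a grace period" entries for ceilometer-0 above (gracePeriod=30) follow the usual termination contract: SIGTERM first, then SIGKILL once the grace period runs out. A sketch of that pattern against an ordinary local process; the real signalling happens inside CRI-O, and sleep is only used here as a stand-in workload.

    // SIGTERM, wait up to the grace period, then SIGKILL.
    package main

    import (
        "fmt"
        "os/exec"
        "syscall"
        "time"
    )

    func killWithGrace(cmd *exec.Cmd, grace time.Duration) {
        _ = cmd.Process.Signal(syscall.SIGTERM) // polite request first
        done := make(chan error, 1)
        go func() { done <- cmd.Wait() }()
        select {
        case <-done:
            fmt.Println("exited within grace period")
        case <-time.After(grace):
            _ = cmd.Process.Kill() // grace period exhausted: force kill
            <-done
            fmt.Println("force-killed after grace period")
        }
    }

    func main() {
        cmd := exec.Command("sleep", "60")
        if err := cmd.Start(); err != nil {
            panic(err)
        }
        // sleep honors SIGTERM, so this demo takes the fast path; a process
        // that ignores SIGTERM would be force-killed after the timeout.
        killWithGrace(cmd, 10*time.Second)
    }
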
Oct 13 17:40:57 crc kubenswrapper[4720]: I1013 17:40:57.282000 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-7qdd5" event={"ID":"4164e6d6-57c4-45f9-b835-5f8cdfbb5044","Type":"ContainerStarted","Data":"66c164f8e4900625da847f4212622601f23dec44285db96fe6f456656a4443d1"} Oct 13 17:40:57 crc kubenswrapper[4720]: I1013 17:40:57.285744 4720 generic.go:334] "Generic (PLEG): container finished" podID="9259aed9-a7a7-4b45-baff-419fef6c83a3" containerID="32ab64b85d3c54428b46ecbbbba11664fa8982c4d4cf2a5c971eee79b23cf8ac" exitCode=0 Oct 13 17:40:57 crc kubenswrapper[4720]: I1013 17:40:57.285809 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-9xpdq" event={"ID":"9259aed9-a7a7-4b45-baff-419fef6c83a3","Type":"ContainerDied","Data":"32ab64b85d3c54428b46ecbbbba11664fa8982c4d4cf2a5c971eee79b23cf8ac"} Oct 13 17:40:57 crc kubenswrapper[4720]: I1013 17:40:57.288267 4720 generic.go:334] "Generic (PLEG): container finished" podID="590fe630-a315-4fe6-b90d-298fbcc6619c" containerID="c047a3ae953a1fbf866b60ed26c57ce0ffe509cd034ae62ff29d56506356fdb2" exitCode=0 Oct 13 17:40:57 crc kubenswrapper[4720]: I1013 17:40:57.288285 4720 generic.go:334] "Generic (PLEG): container finished" podID="590fe630-a315-4fe6-b90d-298fbcc6619c" containerID="e28570c9aab9f51b2eb41e6550e7cdfc518782eae4865b54b64c305b1c23e95c" exitCode=2 Oct 13 17:40:57 crc kubenswrapper[4720]: I1013 17:40:57.288294 4720 generic.go:334] "Generic (PLEG): container finished" podID="590fe630-a315-4fe6-b90d-298fbcc6619c" containerID="7b3bfde79f2808de4c0373d3f27221eaf79bf720f2a19f1fbb518930bb2b6084" exitCode=0 Oct 13 17:40:57 crc kubenswrapper[4720]: I1013 17:40:57.288321 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"590fe630-a315-4fe6-b90d-298fbcc6619c","Type":"ContainerDied","Data":"c047a3ae953a1fbf866b60ed26c57ce0ffe509cd034ae62ff29d56506356fdb2"} Oct 13 17:40:57 crc kubenswrapper[4720]: I1013 17:40:57.288335 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"590fe630-a315-4fe6-b90d-298fbcc6619c","Type":"ContainerDied","Data":"e28570c9aab9f51b2eb41e6550e7cdfc518782eae4865b54b64c305b1c23e95c"} Oct 13 17:40:57 crc kubenswrapper[4720]: I1013 17:40:57.288344 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"590fe630-a315-4fe6-b90d-298fbcc6619c","Type":"ContainerDied","Data":"7b3bfde79f2808de4c0373d3f27221eaf79bf720f2a19f1fbb518930bb2b6084"} Oct 13 17:40:57 crc kubenswrapper[4720]: I1013 17:40:57.289443 4720 generic.go:334] "Generic (PLEG): container finished" podID="d2353f5d-08a1-4365-b77b-321e29ade356" containerID="938f13af43552e262b225bb2e1a67c6189c6054eca9ba9bcae1659d5faa0085e" exitCode=0 Oct 13 17:40:57 crc kubenswrapper[4720]: I1013 17:40:57.289528 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-9rbjx" event={"ID":"d2353f5d-08a1-4365-b77b-321e29ade356","Type":"ContainerDied","Data":"938f13af43552e262b225bb2e1a67c6189c6054eca9ba9bcae1659d5faa0085e"} Oct 13 17:40:57 crc kubenswrapper[4720]: I1013 17:40:57.299868 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-9xpdq" podStartSLOduration=3.299854332 podStartE2EDuration="3.299854332s" podCreationTimestamp="2025-10-13 17:40:54 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:40:56.291978759 +0000 UTC m=+1001.749228891" watchObservedRunningTime="2025-10-13 17:40:57.299854332 +0000 UTC m=+1002.757104464" Oct 13 17:40:58 crc kubenswrapper[4720]: I1013 17:40:58.299989 4720 generic.go:334] "Generic (PLEG): container finished" podID="887aa549-67e8-4d03-acba-dede202496db" containerID="78769fb9f2065139b88842f2335a23af9000cff6a3514aad858132138aa5cbf2" exitCode=0 Oct 13 17:40:58 crc kubenswrapper[4720]: I1013 17:40:58.300048 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-9dcbx" event={"ID":"887aa549-67e8-4d03-acba-dede202496db","Type":"ContainerDied","Data":"78769fb9f2065139b88842f2335a23af9000cff6a3514aad858132138aa5cbf2"} Oct 13 17:40:58 crc kubenswrapper[4720]: I1013 17:40:58.823368 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7f8746d996-snqjj" podUID="83d7bdbc-2466-4272-bc8f-54afd0d7a9de" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.162:9311/healthcheck\": read tcp 10.217.0.2:57380->10.217.0.162:9311: read: connection reset by peer" Oct 13 17:40:58 crc kubenswrapper[4720]: I1013 17:40:58.823338 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7f8746d996-snqjj" podUID="83d7bdbc-2466-4272-bc8f-54afd0d7a9de" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.162:9311/healthcheck\": read tcp 10.217.0.2:57378->10.217.0.162:9311: read: connection reset by peer" Oct 13 17:40:59 crc kubenswrapper[4720]: I1013 17:40:59.315943 4720 generic.go:334] "Generic (PLEG): container finished" podID="590fe630-a315-4fe6-b90d-298fbcc6619c" containerID="6eb9fd564a8906abcb33a17c5fd02f869620b0bd5b6232ce86a3dbe00885e01a" exitCode=0 Oct 13 17:40:59 crc kubenswrapper[4720]: I1013 17:40:59.316038 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"590fe630-a315-4fe6-b90d-298fbcc6619c","Type":"ContainerDied","Data":"6eb9fd564a8906abcb33a17c5fd02f869620b0bd5b6232ce86a3dbe00885e01a"} Oct 13 17:40:59 crc kubenswrapper[4720]: I1013 17:40:59.320467 4720 generic.go:334] "Generic (PLEG): container finished" podID="83d7bdbc-2466-4272-bc8f-54afd0d7a9de" containerID="5fd8435bb6ffbd686fa3db27072e910dd2107b054b171e9d0a0be01bae37acc0" exitCode=0 Oct 13 17:40:59 crc kubenswrapper[4720]: I1013 17:40:59.320656 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7f8746d996-snqjj" event={"ID":"83d7bdbc-2466-4272-bc8f-54afd0d7a9de","Type":"ContainerDied","Data":"5fd8435bb6ffbd686fa3db27072e910dd2107b054b171e9d0a0be01bae37acc0"} Oct 13 17:41:01 crc kubenswrapper[4720]: I1013 17:41:01.889590 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-7qdd5" Oct 13 17:41:01 crc kubenswrapper[4720]: I1013 17:41:01.962962 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gd4mq\" (UniqueName: \"kubernetes.io/projected/4164e6d6-57c4-45f9-b835-5f8cdfbb5044-kube-api-access-gd4mq\") pod \"4164e6d6-57c4-45f9-b835-5f8cdfbb5044\" (UID: \"4164e6d6-57c4-45f9-b835-5f8cdfbb5044\") " Oct 13 17:41:01 crc kubenswrapper[4720]: I1013 17:41:01.977141 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4164e6d6-57c4-45f9-b835-5f8cdfbb5044-kube-api-access-gd4mq" (OuterVolumeSpecName: "kube-api-access-gd4mq") pod "4164e6d6-57c4-45f9-b835-5f8cdfbb5044" (UID: "4164e6d6-57c4-45f9-b835-5f8cdfbb5044"). InnerVolumeSpecName "kube-api-access-gd4mq". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.035170 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-9dcbx" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.049326 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-9xpdq" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.067638 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4kgh\" (UniqueName: \"kubernetes.io/projected/887aa549-67e8-4d03-acba-dede202496db-kube-api-access-k4kgh\") pod \"887aa549-67e8-4d03-acba-dede202496db\" (UID: \"887aa549-67e8-4d03-acba-dede202496db\") " Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.067705 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/887aa549-67e8-4d03-acba-dede202496db-etc-machine-id\") pod \"887aa549-67e8-4d03-acba-dede202496db\" (UID: \"887aa549-67e8-4d03-acba-dede202496db\") " Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.067784 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/887aa549-67e8-4d03-acba-dede202496db-config-data\") pod \"887aa549-67e8-4d03-acba-dede202496db\" (UID: \"887aa549-67e8-4d03-acba-dede202496db\") " Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.067923 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/887aa549-67e8-4d03-acba-dede202496db-db-sync-config-data\") pod \"887aa549-67e8-4d03-acba-dede202496db\" (UID: \"887aa549-67e8-4d03-acba-dede202496db\") " Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.067952 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/887aa549-67e8-4d03-acba-dede202496db-scripts\") pod \"887aa549-67e8-4d03-acba-dede202496db\" (UID: \"887aa549-67e8-4d03-acba-dede202496db\") " Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.068050 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/887aa549-67e8-4d03-acba-dede202496db-combined-ca-bundle\") pod \"887aa549-67e8-4d03-acba-dede202496db\" (UID: \"887aa549-67e8-4d03-acba-dede202496db\") " Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.068151 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/887aa549-67e8-4d03-acba-dede202496db-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "887aa549-67e8-4d03-acba-dede202496db" (UID: "887aa549-67e8-4d03-acba-dede202496db"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.068484 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gd4mq\" (UniqueName: \"kubernetes.io/projected/4164e6d6-57c4-45f9-b835-5f8cdfbb5044-kube-api-access-gd4mq\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.068500 4720 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/887aa549-67e8-4d03-acba-dede202496db-etc-machine-id\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.073554 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/887aa549-67e8-4d03-acba-dede202496db-kube-api-access-k4kgh" (OuterVolumeSpecName: "kube-api-access-k4kgh") pod "887aa549-67e8-4d03-acba-dede202496db" (UID: "887aa549-67e8-4d03-acba-dede202496db"). InnerVolumeSpecName "kube-api-access-k4kgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.078657 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-9rbjx" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.084023 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/887aa549-67e8-4d03-acba-dede202496db-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "887aa549-67e8-4d03-acba-dede202496db" (UID: "887aa549-67e8-4d03-acba-dede202496db"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.088872 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7df8489788-ntn24" podUID="139c2e02-2c20-4a21-a5c0-753c6003473b" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.147:8443: connect: connection refused" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.088974 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7df8489788-ntn24" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.098414 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/887aa549-67e8-4d03-acba-dede202496db-scripts" (OuterVolumeSpecName: "scripts") pod "887aa549-67e8-4d03-acba-dede202496db" (UID: "887aa549-67e8-4d03-acba-dede202496db"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.158650 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7f8746d996-snqjj" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.171691 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wn7x6\" (UniqueName: \"kubernetes.io/projected/d2353f5d-08a1-4365-b77b-321e29ade356-kube-api-access-wn7x6\") pod \"d2353f5d-08a1-4365-b77b-321e29ade356\" (UID: \"d2353f5d-08a1-4365-b77b-321e29ade356\") " Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.171868 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwczl\" (UniqueName: \"kubernetes.io/projected/9259aed9-a7a7-4b45-baff-419fef6c83a3-kube-api-access-mwczl\") pod \"9259aed9-a7a7-4b45-baff-419fef6c83a3\" (UID: \"9259aed9-a7a7-4b45-baff-419fef6c83a3\") " Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.172715 4720 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/887aa549-67e8-4d03-acba-dede202496db-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.172841 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/887aa549-67e8-4d03-acba-dede202496db-scripts\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.172855 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k4kgh\" (UniqueName: \"kubernetes.io/projected/887aa549-67e8-4d03-acba-dede202496db-kube-api-access-k4kgh\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.178099 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9259aed9-a7a7-4b45-baff-419fef6c83a3-kube-api-access-mwczl" (OuterVolumeSpecName: "kube-api-access-mwczl") pod "9259aed9-a7a7-4b45-baff-419fef6c83a3" (UID: "9259aed9-a7a7-4b45-baff-419fef6c83a3"). InnerVolumeSpecName "kube-api-access-mwczl". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.182210 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2353f5d-08a1-4365-b77b-321e29ade356-kube-api-access-wn7x6" (OuterVolumeSpecName: "kube-api-access-wn7x6") pod "d2353f5d-08a1-4365-b77b-321e29ade356" (UID: "d2353f5d-08a1-4365-b77b-321e29ade356"). InnerVolumeSpecName "kube-api-access-wn7x6". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.198182 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.202968 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/887aa549-67e8-4d03-acba-dede202496db-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "887aa549-67e8-4d03-acba-dede202496db" (UID: "887aa549-67e8-4d03-acba-dede202496db"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.237590 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/887aa549-67e8-4d03-acba-dede202496db-config-data" (OuterVolumeSpecName: "config-data") pod "887aa549-67e8-4d03-acba-dede202496db" (UID: "887aa549-67e8-4d03-acba-dede202496db"). InnerVolumeSpecName "config-data". 
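PluginName "kubernetes.io/secret", VolumeGidValue ""

The horizon readiness failures logged above are plain HTTP GETs against the pod IP whose transport error string ("connect: connection refused") becomes the probe output verbatim. A minimal sketch of such a check follows; this is not kubelet's prober, and the one-second timeout and InsecureSkipVerify setting are assumptions made for the example (probes hit the pod IP directly, so the serving certificate would not match).

    // HTTP readiness check: a reachable endpoint is inspected for its status;
    // transport errors become the probe's failure output.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func probe(url string) (ok bool, output string) {
        client := &http.Client{
            Timeout:   1 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get(url)
        if err != nil {
            return false, err.Error() // e.g. "dial tcp 10.217.0.147:8443: connect: connection refused"
        }
        defer resp.Body.Close()
        return resp.StatusCode >= 200 && resp.StatusCode < 400, resp.Status
    }

    func main() {
        ok, out := probe("https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/")
        fmt.Println(ok, out)
    }
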
Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.274457 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/590fe630-a315-4fe6-b90d-298fbcc6619c-config-data\") pod \"590fe630-a315-4fe6-b90d-298fbcc6619c\" (UID: \"590fe630-a315-4fe6-b90d-298fbcc6619c\") " Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.274535 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83d7bdbc-2466-4272-bc8f-54afd0d7a9de-combined-ca-bundle\") pod \"83d7bdbc-2466-4272-bc8f-54afd0d7a9de\" (UID: \"83d7bdbc-2466-4272-bc8f-54afd0d7a9de\") " Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.274651 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmjm5\" (UniqueName: \"kubernetes.io/projected/83d7bdbc-2466-4272-bc8f-54afd0d7a9de-kube-api-access-tmjm5\") pod \"83d7bdbc-2466-4272-bc8f-54afd0d7a9de\" (UID: \"83d7bdbc-2466-4272-bc8f-54afd0d7a9de\") " Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.274673 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/83d7bdbc-2466-4272-bc8f-54afd0d7a9de-config-data-custom\") pod \"83d7bdbc-2466-4272-bc8f-54afd0d7a9de\" (UID: \"83d7bdbc-2466-4272-bc8f-54afd0d7a9de\") " Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.274716 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/590fe630-a315-4fe6-b90d-298fbcc6619c-combined-ca-bundle\") pod \"590fe630-a315-4fe6-b90d-298fbcc6619c\" (UID: \"590fe630-a315-4fe6-b90d-298fbcc6619c\") " Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.274756 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/590fe630-a315-4fe6-b90d-298fbcc6619c-run-httpd\") pod \"590fe630-a315-4fe6-b90d-298fbcc6619c\" (UID: \"590fe630-a315-4fe6-b90d-298fbcc6619c\") " Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.274789 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/590fe630-a315-4fe6-b90d-298fbcc6619c-sg-core-conf-yaml\") pod \"590fe630-a315-4fe6-b90d-298fbcc6619c\" (UID: \"590fe630-a315-4fe6-b90d-298fbcc6619c\") " Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.274835 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vf5c5\" (UniqueName: \"kubernetes.io/projected/590fe630-a315-4fe6-b90d-298fbcc6619c-kube-api-access-vf5c5\") pod \"590fe630-a315-4fe6-b90d-298fbcc6619c\" (UID: \"590fe630-a315-4fe6-b90d-298fbcc6619c\") " Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.274889 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/590fe630-a315-4fe6-b90d-298fbcc6619c-scripts\") pod \"590fe630-a315-4fe6-b90d-298fbcc6619c\" (UID: \"590fe630-a315-4fe6-b90d-298fbcc6619c\") " Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.274969 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83d7bdbc-2466-4272-bc8f-54afd0d7a9de-logs\") pod \"83d7bdbc-2466-4272-bc8f-54afd0d7a9de\" (UID: 
\"83d7bdbc-2466-4272-bc8f-54afd0d7a9de\") " Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.274996 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/590fe630-a315-4fe6-b90d-298fbcc6619c-log-httpd\") pod \"590fe630-a315-4fe6-b90d-298fbcc6619c\" (UID: \"590fe630-a315-4fe6-b90d-298fbcc6619c\") " Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.275017 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83d7bdbc-2466-4272-bc8f-54afd0d7a9de-config-data\") pod \"83d7bdbc-2466-4272-bc8f-54afd0d7a9de\" (UID: \"83d7bdbc-2466-4272-bc8f-54afd0d7a9de\") " Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.275509 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/590fe630-a315-4fe6-b90d-298fbcc6619c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "590fe630-a315-4fe6-b90d-298fbcc6619c" (UID: "590fe630-a315-4fe6-b90d-298fbcc6619c"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.275617 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mwczl\" (UniqueName: \"kubernetes.io/projected/9259aed9-a7a7-4b45-baff-419fef6c83a3-kube-api-access-mwczl\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.275636 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/887aa549-67e8-4d03-acba-dede202496db-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.275646 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wn7x6\" (UniqueName: \"kubernetes.io/projected/d2353f5d-08a1-4365-b77b-321e29ade356-kube-api-access-wn7x6\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.275725 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/887aa549-67e8-4d03-acba-dede202496db-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.278227 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/590fe630-a315-4fe6-b90d-298fbcc6619c-scripts" (OuterVolumeSpecName: "scripts") pod "590fe630-a315-4fe6-b90d-298fbcc6619c" (UID: "590fe630-a315-4fe6-b90d-298fbcc6619c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.278261 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/590fe630-a315-4fe6-b90d-298fbcc6619c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "590fe630-a315-4fe6-b90d-298fbcc6619c" (UID: "590fe630-a315-4fe6-b90d-298fbcc6619c"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.279927 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83d7bdbc-2466-4272-bc8f-54afd0d7a9de-logs" (OuterVolumeSpecName: "logs") pod "83d7bdbc-2466-4272-bc8f-54afd0d7a9de" (UID: "83d7bdbc-2466-4272-bc8f-54afd0d7a9de"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.281575 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/590fe630-a315-4fe6-b90d-298fbcc6619c-kube-api-access-vf5c5" (OuterVolumeSpecName: "kube-api-access-vf5c5") pod "590fe630-a315-4fe6-b90d-298fbcc6619c" (UID: "590fe630-a315-4fe6-b90d-298fbcc6619c"). InnerVolumeSpecName "kube-api-access-vf5c5". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.281855 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83d7bdbc-2466-4272-bc8f-54afd0d7a9de-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "83d7bdbc-2466-4272-bc8f-54afd0d7a9de" (UID: "83d7bdbc-2466-4272-bc8f-54afd0d7a9de"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.282738 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83d7bdbc-2466-4272-bc8f-54afd0d7a9de-kube-api-access-tmjm5" (OuterVolumeSpecName: "kube-api-access-tmjm5") pod "83d7bdbc-2466-4272-bc8f-54afd0d7a9de" (UID: "83d7bdbc-2466-4272-bc8f-54afd0d7a9de"). InnerVolumeSpecName "kube-api-access-tmjm5". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.301966 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83d7bdbc-2466-4272-bc8f-54afd0d7a9de-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "83d7bdbc-2466-4272-bc8f-54afd0d7a9de" (UID: "83d7bdbc-2466-4272-bc8f-54afd0d7a9de"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.308550 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/590fe630-a315-4fe6-b90d-298fbcc6619c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "590fe630-a315-4fe6-b90d-298fbcc6619c" (UID: "590fe630-a315-4fe6-b90d-298fbcc6619c"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.326340 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83d7bdbc-2466-4272-bc8f-54afd0d7a9de-config-data" (OuterVolumeSpecName: "config-data") pod "83d7bdbc-2466-4272-bc8f-54afd0d7a9de" (UID: "83d7bdbc-2466-4272-bc8f-54afd0d7a9de"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.347666 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7f8746d996-snqjj" event={"ID":"83d7bdbc-2466-4272-bc8f-54afd0d7a9de","Type":"ContainerDied","Data":"9a9d954478fc3b362dfbdfdf5f64d7a752eb0c6d8976ac894da74bbb9e73987e"} Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.347718 4720 scope.go:117] "RemoveContainer" containerID="5fd8435bb6ffbd686fa3db27072e910dd2107b054b171e9d0a0be01bae37acc0" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.347824 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7f8746d996-snqjj" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.352091 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-9dcbx" event={"ID":"887aa549-67e8-4d03-acba-dede202496db","Type":"ContainerDied","Data":"753803f2ccf2091aec2cc7376c5358acdb20bf32868ec9ab15ca0d54e7d6b818"} Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.352154 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="753803f2ccf2091aec2cc7376c5358acdb20bf32868ec9ab15ca0d54e7d6b818" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.352356 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-9dcbx" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.354954 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-9rbjx" event={"ID":"d2353f5d-08a1-4365-b77b-321e29ade356","Type":"ContainerDied","Data":"8302c547e96f7db25ca2a96036cfbff907fff13c25fcfd8a5cdf776be6776fcb"} Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.354986 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8302c547e96f7db25ca2a96036cfbff907fff13c25fcfd8a5cdf776be6776fcb" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.355030 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-9rbjx" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.357175 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-7qdd5" event={"ID":"4164e6d6-57c4-45f9-b835-5f8cdfbb5044","Type":"ContainerDied","Data":"66c164f8e4900625da847f4212622601f23dec44285db96fe6f456656a4443d1"} Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.357248 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66c164f8e4900625da847f4212622601f23dec44285db96fe6f456656a4443d1" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.357202 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-7qdd5" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.373793 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-2htwh" event={"ID":"444e35b8-1d2a-4d83-be6c-2184ae0e3110","Type":"ContainerStarted","Data":"ba6eb3aa77b3fdc8ec12a4d7e473fe166dc435b469d0dd0061cd6b618f7f5dfa"} Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.375582 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/590fe630-a315-4fe6-b90d-298fbcc6619c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "590fe630-a315-4fe6-b90d-298fbcc6619c" (UID: "590fe630-a315-4fe6-b90d-298fbcc6619c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.376900 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-9xpdq" event={"ID":"9259aed9-a7a7-4b45-baff-419fef6c83a3","Type":"ContainerDied","Data":"10b6620c4369686e9085ae3d8c5febb24e0f5af96f9a7b6f7f5cdf1ea45c471e"} Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.376930 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10b6620c4369686e9085ae3d8c5febb24e0f5af96f9a7b6f7f5cdf1ea45c471e" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.376941 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83d7bdbc-2466-4272-bc8f-54afd0d7a9de-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.376960 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tmjm5\" (UniqueName: \"kubernetes.io/projected/83d7bdbc-2466-4272-bc8f-54afd0d7a9de-kube-api-access-tmjm5\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.376970 4720 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/83d7bdbc-2466-4272-bc8f-54afd0d7a9de-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.376978 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/590fe630-a315-4fe6-b90d-298fbcc6619c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.376987 4720 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/590fe630-a315-4fe6-b90d-298fbcc6619c-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.376996 4720 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/590fe630-a315-4fe6-b90d-298fbcc6619c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.377004 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vf5c5\" (UniqueName: \"kubernetes.io/projected/590fe630-a315-4fe6-b90d-298fbcc6619c-kube-api-access-vf5c5\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.377012 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/590fe630-a315-4fe6-b90d-298fbcc6619c-scripts\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.377022 4720 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83d7bdbc-2466-4272-bc8f-54afd0d7a9de-logs\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.377029 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83d7bdbc-2466-4272-bc8f-54afd0d7a9de-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.377037 4720 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/590fe630-a315-4fe6-b90d-298fbcc6619c-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 
17:41:02.377319 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-9xpdq" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.383960 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/590fe630-a315-4fe6-b90d-298fbcc6619c-config-data" (OuterVolumeSpecName: "config-data") pod "590fe630-a315-4fe6-b90d-298fbcc6619c" (UID: "590fe630-a315-4fe6-b90d-298fbcc6619c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.393106 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-2htwh" podStartSLOduration=2.10829691 podStartE2EDuration="1m13.393090386s" podCreationTimestamp="2025-10-13 17:39:49 +0000 UTC" firstStartedPulling="2025-10-13 17:39:50.625626631 +0000 UTC m=+936.082876763" lastFinishedPulling="2025-10-13 17:41:01.910420107 +0000 UTC m=+1007.367670239" observedRunningTime="2025-10-13 17:41:02.392796319 +0000 UTC m=+1007.850046451" watchObservedRunningTime="2025-10-13 17:41:02.393090386 +0000 UTC m=+1007.850340518"
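
The "Observed pod startup duration" entry above is internally consistent: podStartE2EDuration is the gap from podCreationTimestamp to watchObservedRunningTime, and podStartSLOduration appears to be that same gap minus the image-pull window (lastFinishedPulling - firstStartedPulling), i.e. 1m13.393090386s - 71.284793476s = 2.10829691s. A minimal Go sketch with the timestamps copied from the entry (the m=+... monotonic-clock suffixes dropped, since time.Parse does not accept them) reproduces both figures; the pull-window subtraction is inferred from the logged values rather than quoted from kubelet source:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Layout matching time.Time's default String() output (minus the m=+ suffix).
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        parse := func(s string) time.Time {
            t, err := time.Parse(layout, s)
            if err != nil {
                panic(err)
            }
            return t
        }

        created := parse("2025-10-13 17:39:49 +0000 UTC")             // podCreationTimestamp
        firstPull := parse("2025-10-13 17:39:50.625626631 +0000 UTC") // firstStartedPulling
        lastPull := parse("2025-10-13 17:41:01.910420107 +0000 UTC")  // lastFinishedPulling
        observed := parse("2025-10-13 17:41:02.393090386 +0000 UTC")  // watchObservedRunningTime

        e2e := observed.Sub(created)         // 1m13.393090386s, the logged podStartE2EDuration
        // Inferred relationship: SLO duration excludes time spent pulling images.
        slo := e2e - lastPull.Sub(firstPull) // 2.10829691s, the logged podStartSLOduration
        fmt.Println(e2e, slo)
    }

The openstackclient entry a few lines below satisfies the same identity: 14.469081522s minus its 12.116198301s pull window gives the logged podStartSLOduration=2.352883221.
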
Need to start a new one" pod="openstack/ceilometer-0" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.397974 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"f87dfa54-2548-4cf1-ad02-c7663263650c","Type":"ContainerStarted","Data":"d2b50a4646eb9aad7794be806b9211f201987d313d9c881662ef6724ddbcea37"} Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.415565 4720 scope.go:117] "RemoveContainer" containerID="c047a3ae953a1fbf866b60ed26c57ce0ffe509cd034ae62ff29d56506356fdb2" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.446109 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7f8746d996-snqjj"] Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.452086 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-7f8746d996-snqjj"] Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.457334 4720 scope.go:117] "RemoveContainer" containerID="e28570c9aab9f51b2eb41e6550e7cdfc518782eae4865b54b64c305b1c23e95c" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.469098 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.352883221 podStartE2EDuration="14.469081522s" podCreationTimestamp="2025-10-13 17:40:48 +0000 UTC" firstStartedPulling="2025-10-13 17:40:49.760320883 +0000 UTC m=+995.217571015" lastFinishedPulling="2025-10-13 17:41:01.876519184 +0000 UTC m=+1007.333769316" observedRunningTime="2025-10-13 17:41:02.459733421 +0000 UTC m=+1007.916983553" watchObservedRunningTime="2025-10-13 17:41:02.469081522 +0000 UTC m=+1007.926331654" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.487919 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/590fe630-a315-4fe6-b90d-298fbcc6619c-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.501320 4720 scope.go:117] "RemoveContainer" containerID="6eb9fd564a8906abcb33a17c5fd02f869620b0bd5b6232ce86a3dbe00885e01a" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.504684 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.527521 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.534864 4720 scope.go:117] "RemoveContainer" containerID="7b3bfde79f2808de4c0373d3f27221eaf79bf720f2a19f1fbb518930bb2b6084" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.539035 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 13 17:41:02 crc kubenswrapper[4720]: E1013 17:41:02.539526 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="590fe630-a315-4fe6-b90d-298fbcc6619c" containerName="sg-core" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.539551 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="590fe630-a315-4fe6-b90d-298fbcc6619c" containerName="sg-core" Oct 13 17:41:02 crc kubenswrapper[4720]: E1013 17:41:02.539571 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83d7bdbc-2466-4272-bc8f-54afd0d7a9de" containerName="barbican-api" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.539579 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="83d7bdbc-2466-4272-bc8f-54afd0d7a9de" containerName="barbican-api" Oct 13 17:41:02 crc kubenswrapper[4720]: E1013 17:41:02.539597 4720 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83d7bdbc-2466-4272-bc8f-54afd0d7a9de" containerName="barbican-api-log" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.539606 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="83d7bdbc-2466-4272-bc8f-54afd0d7a9de" containerName="barbican-api-log" Oct 13 17:41:02 crc kubenswrapper[4720]: E1013 17:41:02.539626 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="590fe630-a315-4fe6-b90d-298fbcc6619c" containerName="ceilometer-central-agent" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.539634 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="590fe630-a315-4fe6-b90d-298fbcc6619c" containerName="ceilometer-central-agent" Oct 13 17:41:02 crc kubenswrapper[4720]: E1013 17:41:02.539648 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="887aa549-67e8-4d03-acba-dede202496db" containerName="cinder-db-sync" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.539657 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="887aa549-67e8-4d03-acba-dede202496db" containerName="cinder-db-sync" Oct 13 17:41:02 crc kubenswrapper[4720]: E1013 17:41:02.539668 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="590fe630-a315-4fe6-b90d-298fbcc6619c" containerName="proxy-httpd" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.539675 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="590fe630-a315-4fe6-b90d-298fbcc6619c" containerName="proxy-httpd" Oct 13 17:41:02 crc kubenswrapper[4720]: E1013 17:41:02.539690 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4164e6d6-57c4-45f9-b835-5f8cdfbb5044" containerName="mariadb-database-create" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.539697 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="4164e6d6-57c4-45f9-b835-5f8cdfbb5044" containerName="mariadb-database-create" Oct 13 17:41:02 crc kubenswrapper[4720]: E1013 17:41:02.539712 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2353f5d-08a1-4365-b77b-321e29ade356" containerName="mariadb-database-create" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.539719 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2353f5d-08a1-4365-b77b-321e29ade356" containerName="mariadb-database-create" Oct 13 17:41:02 crc kubenswrapper[4720]: E1013 17:41:02.539733 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9259aed9-a7a7-4b45-baff-419fef6c83a3" containerName="mariadb-database-create" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.539740 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="9259aed9-a7a7-4b45-baff-419fef6c83a3" containerName="mariadb-database-create" Oct 13 17:41:02 crc kubenswrapper[4720]: E1013 17:41:02.539758 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="590fe630-a315-4fe6-b90d-298fbcc6619c" containerName="ceilometer-notification-agent" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.539765 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="590fe630-a315-4fe6-b90d-298fbcc6619c" containerName="ceilometer-notification-agent" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.540087 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="9259aed9-a7a7-4b45-baff-419fef6c83a3" containerName="mariadb-database-create" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.540104 4720 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="83d7bdbc-2466-4272-bc8f-54afd0d7a9de" containerName="barbican-api" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.540115 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2353f5d-08a1-4365-b77b-321e29ade356" containerName="mariadb-database-create" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.540134 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="887aa549-67e8-4d03-acba-dede202496db" containerName="cinder-db-sync" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.540147 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="590fe630-a315-4fe6-b90d-298fbcc6619c" containerName="proxy-httpd" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.540160 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="4164e6d6-57c4-45f9-b835-5f8cdfbb5044" containerName="mariadb-database-create" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.540174 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="83d7bdbc-2466-4272-bc8f-54afd0d7a9de" containerName="barbican-api-log" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.540201 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="590fe630-a315-4fe6-b90d-298fbcc6619c" containerName="ceilometer-notification-agent" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.540218 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="590fe630-a315-4fe6-b90d-298fbcc6619c" containerName="ceilometer-central-agent" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.540228 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="590fe630-a315-4fe6-b90d-298fbcc6619c" containerName="sg-core" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.542208 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.544320 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.544545 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.556850 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.589048 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/458b7aa1-4ce6-4ecf-9e25-207256ca2a57-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"458b7aa1-4ce6-4ecf-9e25-207256ca2a57\") " pod="openstack/ceilometer-0" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.589111 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/458b7aa1-4ce6-4ecf-9e25-207256ca2a57-log-httpd\") pod \"ceilometer-0\" (UID: \"458b7aa1-4ce6-4ecf-9e25-207256ca2a57\") " pod="openstack/ceilometer-0" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.589139 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/458b7aa1-4ce6-4ecf-9e25-207256ca2a57-config-data\") pod \"ceilometer-0\" (UID: \"458b7aa1-4ce6-4ecf-9e25-207256ca2a57\") " pod="openstack/ceilometer-0" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.589170 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/458b7aa1-4ce6-4ecf-9e25-207256ca2a57-run-httpd\") pod \"ceilometer-0\" (UID: \"458b7aa1-4ce6-4ecf-9e25-207256ca2a57\") " pod="openstack/ceilometer-0" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.589204 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/458b7aa1-4ce6-4ecf-9e25-207256ca2a57-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"458b7aa1-4ce6-4ecf-9e25-207256ca2a57\") " pod="openstack/ceilometer-0" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.589250 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xr8b9\" (UniqueName: \"kubernetes.io/projected/458b7aa1-4ce6-4ecf-9e25-207256ca2a57-kube-api-access-xr8b9\") pod \"ceilometer-0\" (UID: \"458b7aa1-4ce6-4ecf-9e25-207256ca2a57\") " pod="openstack/ceilometer-0" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.589298 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/458b7aa1-4ce6-4ecf-9e25-207256ca2a57-scripts\") pod \"ceilometer-0\" (UID: \"458b7aa1-4ce6-4ecf-9e25-207256ca2a57\") " pod="openstack/ceilometer-0" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.690566 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/458b7aa1-4ce6-4ecf-9e25-207256ca2a57-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"458b7aa1-4ce6-4ecf-9e25-207256ca2a57\") " pod="openstack/ceilometer-0" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 
17:41:02.690654 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/458b7aa1-4ce6-4ecf-9e25-207256ca2a57-log-httpd\") pod \"ceilometer-0\" (UID: \"458b7aa1-4ce6-4ecf-9e25-207256ca2a57\") " pod="openstack/ceilometer-0" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.690690 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/458b7aa1-4ce6-4ecf-9e25-207256ca2a57-config-data\") pod \"ceilometer-0\" (UID: \"458b7aa1-4ce6-4ecf-9e25-207256ca2a57\") " pod="openstack/ceilometer-0" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.690735 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/458b7aa1-4ce6-4ecf-9e25-207256ca2a57-run-httpd\") pod \"ceilometer-0\" (UID: \"458b7aa1-4ce6-4ecf-9e25-207256ca2a57\") " pod="openstack/ceilometer-0" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.690772 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/458b7aa1-4ce6-4ecf-9e25-207256ca2a57-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"458b7aa1-4ce6-4ecf-9e25-207256ca2a57\") " pod="openstack/ceilometer-0" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.690816 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xr8b9\" (UniqueName: \"kubernetes.io/projected/458b7aa1-4ce6-4ecf-9e25-207256ca2a57-kube-api-access-xr8b9\") pod \"ceilometer-0\" (UID: \"458b7aa1-4ce6-4ecf-9e25-207256ca2a57\") " pod="openstack/ceilometer-0" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.690888 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/458b7aa1-4ce6-4ecf-9e25-207256ca2a57-scripts\") pod \"ceilometer-0\" (UID: \"458b7aa1-4ce6-4ecf-9e25-207256ca2a57\") " pod="openstack/ceilometer-0" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.691265 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/458b7aa1-4ce6-4ecf-9e25-207256ca2a57-run-httpd\") pod \"ceilometer-0\" (UID: \"458b7aa1-4ce6-4ecf-9e25-207256ca2a57\") " pod="openstack/ceilometer-0" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.691319 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/458b7aa1-4ce6-4ecf-9e25-207256ca2a57-log-httpd\") pod \"ceilometer-0\" (UID: \"458b7aa1-4ce6-4ecf-9e25-207256ca2a57\") " pod="openstack/ceilometer-0" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.695118 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/458b7aa1-4ce6-4ecf-9e25-207256ca2a57-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"458b7aa1-4ce6-4ecf-9e25-207256ca2a57\") " pod="openstack/ceilometer-0" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.695292 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/458b7aa1-4ce6-4ecf-9e25-207256ca2a57-config-data\") pod \"ceilometer-0\" (UID: \"458b7aa1-4ce6-4ecf-9e25-207256ca2a57\") " pod="openstack/ceilometer-0" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.697662 4720 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/458b7aa1-4ce6-4ecf-9e25-207256ca2a57-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"458b7aa1-4ce6-4ecf-9e25-207256ca2a57\") " pod="openstack/ceilometer-0" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.699798 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/458b7aa1-4ce6-4ecf-9e25-207256ca2a57-scripts\") pod \"ceilometer-0\" (UID: \"458b7aa1-4ce6-4ecf-9e25-207256ca2a57\") " pod="openstack/ceilometer-0" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.707672 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xr8b9\" (UniqueName: \"kubernetes.io/projected/458b7aa1-4ce6-4ecf-9e25-207256ca2a57-kube-api-access-xr8b9\") pod \"ceilometer-0\" (UID: \"458b7aa1-4ce6-4ecf-9e25-207256ca2a57\") " pod="openstack/ceilometer-0" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.870228 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.969515 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-694bd85589-jdgbb" Oct 13 17:41:02 crc kubenswrapper[4720]: I1013 17:41:02.972320 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-694bd85589-jdgbb" Oct 13 17:41:03 crc kubenswrapper[4720]: I1013 17:41:03.189080 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="590fe630-a315-4fe6-b90d-298fbcc6619c" path="/var/lib/kubelet/pods/590fe630-a315-4fe6-b90d-298fbcc6619c/volumes" Oct 13 17:41:03 crc kubenswrapper[4720]: I1013 17:41:03.190180 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83d7bdbc-2466-4272-bc8f-54afd0d7a9de" path="/var/lib/kubelet/pods/83d7bdbc-2466-4272-bc8f-54afd0d7a9de/volumes" Oct 13 17:41:03 crc kubenswrapper[4720]: I1013 17:41:03.279123 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Oct 13 17:41:03 crc kubenswrapper[4720]: I1013 17:41:03.281886 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 13 17:41:03 crc kubenswrapper[4720]: I1013 17:41:03.286606 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-rv5gz" Oct 13 17:41:03 crc kubenswrapper[4720]: I1013 17:41:03.286983 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Oct 13 17:41:03 crc kubenswrapper[4720]: I1013 17:41:03.287146 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Oct 13 17:41:03 crc kubenswrapper[4720]: I1013 17:41:03.287264 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Oct 13 17:41:03 crc kubenswrapper[4720]: I1013 17:41:03.310172 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 13 17:41:03 crc kubenswrapper[4720]: I1013 17:41:03.336832 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-bm4z9"] Oct 13 17:41:03 crc kubenswrapper[4720]: I1013 17:41:03.338344 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-bm4z9" Oct 13 17:41:03 crc kubenswrapper[4720]: I1013 17:41:03.367973 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-bm4z9"] Oct 13 17:41:03 crc kubenswrapper[4720]: I1013 17:41:03.383885 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 13 17:41:03 crc kubenswrapper[4720]: W1013 17:41:03.390532 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod458b7aa1_4ce6_4ecf_9e25_207256ca2a57.slice/crio-711a2755b373354f7b7bf2707f7190f1666fda3277faac2e59eb7e0a1e17924a WatchSource:0}: Error finding container 711a2755b373354f7b7bf2707f7190f1666fda3277faac2e59eb7e0a1e17924a: Status 404 returned error can't find the container with id 711a2755b373354f7b7bf2707f7190f1666fda3277faac2e59eb7e0a1e17924a Oct 13 17:41:03 crc kubenswrapper[4720]: I1013 17:41:03.411954 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/75f08079-c609-4da3-81c4-7c761992396a-ovsdbserver-nb\") pod \"dnsmasq-dns-5784cf869f-bm4z9\" (UID: \"75f08079-c609-4da3-81c4-7c761992396a\") " pod="openstack/dnsmasq-dns-5784cf869f-bm4z9" Oct 13 17:41:03 crc kubenswrapper[4720]: I1013 17:41:03.412057 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/580a1451-b144-42d0-b400-1bf271b17c7b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"580a1451-b144-42d0-b400-1bf271b17c7b\") " pod="openstack/cinder-scheduler-0" Oct 13 17:41:03 crc kubenswrapper[4720]: I1013 17:41:03.412114 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/580a1451-b144-42d0-b400-1bf271b17c7b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"580a1451-b144-42d0-b400-1bf271b17c7b\") " pod="openstack/cinder-scheduler-0" Oct 13 17:41:03 crc kubenswrapper[4720]: I1013 17:41:03.412143 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/75f08079-c609-4da3-81c4-7c761992396a-dns-svc\") pod \"dnsmasq-dns-5784cf869f-bm4z9\" (UID: \"75f08079-c609-4da3-81c4-7c761992396a\") " pod="openstack/dnsmasq-dns-5784cf869f-bm4z9" Oct 13 17:41:03 crc kubenswrapper[4720]: I1013 17:41:03.412212 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/580a1451-b144-42d0-b400-1bf271b17c7b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"580a1451-b144-42d0-b400-1bf271b17c7b\") " pod="openstack/cinder-scheduler-0" Oct 13 17:41:03 crc kubenswrapper[4720]: I1013 17:41:03.412248 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/75f08079-c609-4da3-81c4-7c761992396a-ovsdbserver-sb\") pod \"dnsmasq-dns-5784cf869f-bm4z9\" (UID: \"75f08079-c609-4da3-81c4-7c761992396a\") " pod="openstack/dnsmasq-dns-5784cf869f-bm4z9" Oct 13 17:41:03 crc kubenswrapper[4720]: I1013 17:41:03.412341 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsg4x\" (UniqueName: 
\"kubernetes.io/projected/580a1451-b144-42d0-b400-1bf271b17c7b-kube-api-access-gsg4x\") pod \"cinder-scheduler-0\" (UID: \"580a1451-b144-42d0-b400-1bf271b17c7b\") " pod="openstack/cinder-scheduler-0" Oct 13 17:41:03 crc kubenswrapper[4720]: I1013 17:41:03.412443 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6s4b\" (UniqueName: \"kubernetes.io/projected/75f08079-c609-4da3-81c4-7c761992396a-kube-api-access-g6s4b\") pod \"dnsmasq-dns-5784cf869f-bm4z9\" (UID: \"75f08079-c609-4da3-81c4-7c761992396a\") " pod="openstack/dnsmasq-dns-5784cf869f-bm4z9" Oct 13 17:41:03 crc kubenswrapper[4720]: I1013 17:41:03.412481 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/580a1451-b144-42d0-b400-1bf271b17c7b-config-data\") pod \"cinder-scheduler-0\" (UID: \"580a1451-b144-42d0-b400-1bf271b17c7b\") " pod="openstack/cinder-scheduler-0" Oct 13 17:41:03 crc kubenswrapper[4720]: I1013 17:41:03.412541 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/75f08079-c609-4da3-81c4-7c761992396a-dns-swift-storage-0\") pod \"dnsmasq-dns-5784cf869f-bm4z9\" (UID: \"75f08079-c609-4da3-81c4-7c761992396a\") " pod="openstack/dnsmasq-dns-5784cf869f-bm4z9" Oct 13 17:41:03 crc kubenswrapper[4720]: I1013 17:41:03.412565 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/580a1451-b144-42d0-b400-1bf271b17c7b-scripts\") pod \"cinder-scheduler-0\" (UID: \"580a1451-b144-42d0-b400-1bf271b17c7b\") " pod="openstack/cinder-scheduler-0" Oct 13 17:41:03 crc kubenswrapper[4720]: I1013 17:41:03.412659 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75f08079-c609-4da3-81c4-7c761992396a-config\") pod \"dnsmasq-dns-5784cf869f-bm4z9\" (UID: \"75f08079-c609-4da3-81c4-7c761992396a\") " pod="openstack/dnsmasq-dns-5784cf869f-bm4z9" Oct 13 17:41:03 crc kubenswrapper[4720]: I1013 17:41:03.415442 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"458b7aa1-4ce6-4ecf-9e25-207256ca2a57","Type":"ContainerStarted","Data":"711a2755b373354f7b7bf2707f7190f1666fda3277faac2e59eb7e0a1e17924a"} Oct 13 17:41:03 crc kubenswrapper[4720]: I1013 17:41:03.469616 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Oct 13 17:41:03 crc kubenswrapper[4720]: I1013 17:41:03.471138 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Oct 13 17:41:03 crc kubenswrapper[4720]: I1013 17:41:03.476260 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Oct 13 17:41:03 crc kubenswrapper[4720]: I1013 17:41:03.483467 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Oct 13 17:41:03 crc kubenswrapper[4720]: I1013 17:41:03.514028 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/75f08079-c609-4da3-81c4-7c761992396a-ovsdbserver-nb\") pod \"dnsmasq-dns-5784cf869f-bm4z9\" (UID: \"75f08079-c609-4da3-81c4-7c761992396a\") " pod="openstack/dnsmasq-dns-5784cf869f-bm4z9" Oct 13 17:41:03 crc kubenswrapper[4720]: I1013 17:41:03.514088 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/580a1451-b144-42d0-b400-1bf271b17c7b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"580a1451-b144-42d0-b400-1bf271b17c7b\") " pod="openstack/cinder-scheduler-0" Oct 13 17:41:03 crc kubenswrapper[4720]: I1013 17:41:03.514121 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/580a1451-b144-42d0-b400-1bf271b17c7b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"580a1451-b144-42d0-b400-1bf271b17c7b\") " pod="openstack/cinder-scheduler-0" Oct 13 17:41:03 crc kubenswrapper[4720]: I1013 17:41:03.514152 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/75f08079-c609-4da3-81c4-7c761992396a-dns-svc\") pod \"dnsmasq-dns-5784cf869f-bm4z9\" (UID: \"75f08079-c609-4da3-81c4-7c761992396a\") " pod="openstack/dnsmasq-dns-5784cf869f-bm4z9" Oct 13 17:41:03 crc kubenswrapper[4720]: I1013 17:41:03.514177 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/580a1451-b144-42d0-b400-1bf271b17c7b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"580a1451-b144-42d0-b400-1bf271b17c7b\") " pod="openstack/cinder-scheduler-0" Oct 13 17:41:03 crc kubenswrapper[4720]: I1013 17:41:03.514220 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/75f08079-c609-4da3-81c4-7c761992396a-ovsdbserver-sb\") pod \"dnsmasq-dns-5784cf869f-bm4z9\" (UID: \"75f08079-c609-4da3-81c4-7c761992396a\") " pod="openstack/dnsmasq-dns-5784cf869f-bm4z9" Oct 13 17:41:03 crc kubenswrapper[4720]: I1013 17:41:03.514243 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/580a1451-b144-42d0-b400-1bf271b17c7b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"580a1451-b144-42d0-b400-1bf271b17c7b\") " pod="openstack/cinder-scheduler-0" Oct 13 17:41:03 crc kubenswrapper[4720]: I1013 17:41:03.514272 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsg4x\" (UniqueName: \"kubernetes.io/projected/580a1451-b144-42d0-b400-1bf271b17c7b-kube-api-access-gsg4x\") pod \"cinder-scheduler-0\" (UID: \"580a1451-b144-42d0-b400-1bf271b17c7b\") " pod="openstack/cinder-scheduler-0" Oct 13 17:41:03 crc kubenswrapper[4720]: I1013 17:41:03.514398 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6s4b\" 
(UniqueName: \"kubernetes.io/projected/75f08079-c609-4da3-81c4-7c761992396a-kube-api-access-g6s4b\") pod \"dnsmasq-dns-5784cf869f-bm4z9\" (UID: \"75f08079-c609-4da3-81c4-7c761992396a\") " pod="openstack/dnsmasq-dns-5784cf869f-bm4z9" Oct 13 17:41:03 crc kubenswrapper[4720]: I1013 17:41:03.514425 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/580a1451-b144-42d0-b400-1bf271b17c7b-config-data\") pod \"cinder-scheduler-0\" (UID: \"580a1451-b144-42d0-b400-1bf271b17c7b\") " pod="openstack/cinder-scheduler-0" Oct 13 17:41:03 crc kubenswrapper[4720]: I1013 17:41:03.514465 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/580a1451-b144-42d0-b400-1bf271b17c7b-scripts\") pod \"cinder-scheduler-0\" (UID: \"580a1451-b144-42d0-b400-1bf271b17c7b\") " pod="openstack/cinder-scheduler-0" Oct 13 17:41:03 crc kubenswrapper[4720]: I1013 17:41:03.514483 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/75f08079-c609-4da3-81c4-7c761992396a-dns-swift-storage-0\") pod \"dnsmasq-dns-5784cf869f-bm4z9\" (UID: \"75f08079-c609-4da3-81c4-7c761992396a\") " pod="openstack/dnsmasq-dns-5784cf869f-bm4z9" Oct 13 17:41:03 crc kubenswrapper[4720]: I1013 17:41:03.514990 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/75f08079-c609-4da3-81c4-7c761992396a-ovsdbserver-nb\") pod \"dnsmasq-dns-5784cf869f-bm4z9\" (UID: \"75f08079-c609-4da3-81c4-7c761992396a\") " pod="openstack/dnsmasq-dns-5784cf869f-bm4z9" Oct 13 17:41:03 crc kubenswrapper[4720]: I1013 17:41:03.514998 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75f08079-c609-4da3-81c4-7c761992396a-config\") pod \"dnsmasq-dns-5784cf869f-bm4z9\" (UID: \"75f08079-c609-4da3-81c4-7c761992396a\") " pod="openstack/dnsmasq-dns-5784cf869f-bm4z9" Oct 13 17:41:03 crc kubenswrapper[4720]: I1013 17:41:03.515141 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/75f08079-c609-4da3-81c4-7c761992396a-dns-svc\") pod \"dnsmasq-dns-5784cf869f-bm4z9\" (UID: \"75f08079-c609-4da3-81c4-7c761992396a\") " pod="openstack/dnsmasq-dns-5784cf869f-bm4z9" Oct 13 17:41:03 crc kubenswrapper[4720]: I1013 17:41:03.515159 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/75f08079-c609-4da3-81c4-7c761992396a-ovsdbserver-sb\") pod \"dnsmasq-dns-5784cf869f-bm4z9\" (UID: \"75f08079-c609-4da3-81c4-7c761992396a\") " pod="openstack/dnsmasq-dns-5784cf869f-bm4z9" Oct 13 17:41:03 crc kubenswrapper[4720]: I1013 17:41:03.515785 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75f08079-c609-4da3-81c4-7c761992396a-config\") pod \"dnsmasq-dns-5784cf869f-bm4z9\" (UID: \"75f08079-c609-4da3-81c4-7c761992396a\") " pod="openstack/dnsmasq-dns-5784cf869f-bm4z9" Oct 13 17:41:03 crc kubenswrapper[4720]: I1013 17:41:03.516358 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/75f08079-c609-4da3-81c4-7c761992396a-dns-swift-storage-0\") pod \"dnsmasq-dns-5784cf869f-bm4z9\" (UID: \"75f08079-c609-4da3-81c4-7c761992396a\") 
" pod="openstack/dnsmasq-dns-5784cf869f-bm4z9" Oct 13 17:41:03 crc kubenswrapper[4720]: I1013 17:41:03.523267 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/580a1451-b144-42d0-b400-1bf271b17c7b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"580a1451-b144-42d0-b400-1bf271b17c7b\") " pod="openstack/cinder-scheduler-0" Oct 13 17:41:03 crc kubenswrapper[4720]: I1013 17:41:03.524922 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/580a1451-b144-42d0-b400-1bf271b17c7b-scripts\") pod \"cinder-scheduler-0\" (UID: \"580a1451-b144-42d0-b400-1bf271b17c7b\") " pod="openstack/cinder-scheduler-0" Oct 13 17:41:03 crc kubenswrapper[4720]: I1013 17:41:03.529122 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/580a1451-b144-42d0-b400-1bf271b17c7b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"580a1451-b144-42d0-b400-1bf271b17c7b\") " pod="openstack/cinder-scheduler-0" Oct 13 17:41:03 crc kubenswrapper[4720]: I1013 17:41:03.532856 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/580a1451-b144-42d0-b400-1bf271b17c7b-config-data\") pod \"cinder-scheduler-0\" (UID: \"580a1451-b144-42d0-b400-1bf271b17c7b\") " pod="openstack/cinder-scheduler-0" Oct 13 17:41:03 crc kubenswrapper[4720]: I1013 17:41:03.535773 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsg4x\" (UniqueName: \"kubernetes.io/projected/580a1451-b144-42d0-b400-1bf271b17c7b-kube-api-access-gsg4x\") pod \"cinder-scheduler-0\" (UID: \"580a1451-b144-42d0-b400-1bf271b17c7b\") " pod="openstack/cinder-scheduler-0" Oct 13 17:41:03 crc kubenswrapper[4720]: I1013 17:41:03.535962 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6s4b\" (UniqueName: \"kubernetes.io/projected/75f08079-c609-4da3-81c4-7c761992396a-kube-api-access-g6s4b\") pod \"dnsmasq-dns-5784cf869f-bm4z9\" (UID: \"75f08079-c609-4da3-81c4-7c761992396a\") " pod="openstack/dnsmasq-dns-5784cf869f-bm4z9" Oct 13 17:41:03 crc kubenswrapper[4720]: I1013 17:41:03.616712 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 13 17:41:03 crc kubenswrapper[4720]: I1013 17:41:03.621599 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf41e1e8-36d7-4805-b824-44322d841e38-logs\") pod \"cinder-api-0\" (UID: \"bf41e1e8-36d7-4805-b824-44322d841e38\") " pod="openstack/cinder-api-0" Oct 13 17:41:03 crc kubenswrapper[4720]: I1013 17:41:03.621743 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bf41e1e8-36d7-4805-b824-44322d841e38-config-data-custom\") pod \"cinder-api-0\" (UID: \"bf41e1e8-36d7-4805-b824-44322d841e38\") " pod="openstack/cinder-api-0" Oct 13 17:41:03 crc kubenswrapper[4720]: I1013 17:41:03.621775 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf41e1e8-36d7-4805-b824-44322d841e38-scripts\") pod \"cinder-api-0\" (UID: \"bf41e1e8-36d7-4805-b824-44322d841e38\") " pod="openstack/cinder-api-0" Oct 13 17:41:03 crc kubenswrapper[4720]: I1013 17:41:03.621873 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhvtk\" (UniqueName: \"kubernetes.io/projected/bf41e1e8-36d7-4805-b824-44322d841e38-kube-api-access-mhvtk\") pod \"cinder-api-0\" (UID: \"bf41e1e8-36d7-4805-b824-44322d841e38\") " pod="openstack/cinder-api-0" Oct 13 17:41:03 crc kubenswrapper[4720]: I1013 17:41:03.622145 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bf41e1e8-36d7-4805-b824-44322d841e38-etc-machine-id\") pod \"cinder-api-0\" (UID: \"bf41e1e8-36d7-4805-b824-44322d841e38\") " pod="openstack/cinder-api-0" Oct 13 17:41:03 crc kubenswrapper[4720]: I1013 17:41:03.622460 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf41e1e8-36d7-4805-b824-44322d841e38-config-data\") pod \"cinder-api-0\" (UID: \"bf41e1e8-36d7-4805-b824-44322d841e38\") " pod="openstack/cinder-api-0" Oct 13 17:41:03 crc kubenswrapper[4720]: I1013 17:41:03.622518 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf41e1e8-36d7-4805-b824-44322d841e38-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"bf41e1e8-36d7-4805-b824-44322d841e38\") " pod="openstack/cinder-api-0" Oct 13 17:41:03 crc kubenswrapper[4720]: I1013 17:41:03.664786 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-bm4z9" Oct 13 17:41:03 crc kubenswrapper[4720]: I1013 17:41:03.725017 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bf41e1e8-36d7-4805-b824-44322d841e38-config-data-custom\") pod \"cinder-api-0\" (UID: \"bf41e1e8-36d7-4805-b824-44322d841e38\") " pod="openstack/cinder-api-0" Oct 13 17:41:03 crc kubenswrapper[4720]: I1013 17:41:03.725339 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf41e1e8-36d7-4805-b824-44322d841e38-scripts\") pod \"cinder-api-0\" (UID: \"bf41e1e8-36d7-4805-b824-44322d841e38\") " pod="openstack/cinder-api-0" Oct 13 17:41:03 crc kubenswrapper[4720]: I1013 17:41:03.725390 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhvtk\" (UniqueName: \"kubernetes.io/projected/bf41e1e8-36d7-4805-b824-44322d841e38-kube-api-access-mhvtk\") pod \"cinder-api-0\" (UID: \"bf41e1e8-36d7-4805-b824-44322d841e38\") " pod="openstack/cinder-api-0" Oct 13 17:41:03 crc kubenswrapper[4720]: I1013 17:41:03.725450 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bf41e1e8-36d7-4805-b824-44322d841e38-etc-machine-id\") pod \"cinder-api-0\" (UID: \"bf41e1e8-36d7-4805-b824-44322d841e38\") " pod="openstack/cinder-api-0" Oct 13 17:41:03 crc kubenswrapper[4720]: I1013 17:41:03.725491 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf41e1e8-36d7-4805-b824-44322d841e38-config-data\") pod \"cinder-api-0\" (UID: \"bf41e1e8-36d7-4805-b824-44322d841e38\") " pod="openstack/cinder-api-0" Oct 13 17:41:03 crc kubenswrapper[4720]: I1013 17:41:03.725981 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf41e1e8-36d7-4805-b824-44322d841e38-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"bf41e1e8-36d7-4805-b824-44322d841e38\") " pod="openstack/cinder-api-0" Oct 13 17:41:03 crc kubenswrapper[4720]: I1013 17:41:03.726182 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf41e1e8-36d7-4805-b824-44322d841e38-logs\") pod \"cinder-api-0\" (UID: \"bf41e1e8-36d7-4805-b824-44322d841e38\") " pod="openstack/cinder-api-0" Oct 13 17:41:03 crc kubenswrapper[4720]: I1013 17:41:03.726642 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf41e1e8-36d7-4805-b824-44322d841e38-logs\") pod \"cinder-api-0\" (UID: \"bf41e1e8-36d7-4805-b824-44322d841e38\") " pod="openstack/cinder-api-0" Oct 13 17:41:03 crc kubenswrapper[4720]: I1013 17:41:03.728849 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bf41e1e8-36d7-4805-b824-44322d841e38-etc-machine-id\") pod \"cinder-api-0\" (UID: \"bf41e1e8-36d7-4805-b824-44322d841e38\") " pod="openstack/cinder-api-0" Oct 13 17:41:03 crc kubenswrapper[4720]: I1013 17:41:03.731796 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bf41e1e8-36d7-4805-b824-44322d841e38-config-data-custom\") pod \"cinder-api-0\" (UID: \"bf41e1e8-36d7-4805-b824-44322d841e38\") " 
pod="openstack/cinder-api-0" Oct 13 17:41:03 crc kubenswrapper[4720]: I1013 17:41:03.732513 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf41e1e8-36d7-4805-b824-44322d841e38-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"bf41e1e8-36d7-4805-b824-44322d841e38\") " pod="openstack/cinder-api-0" Oct 13 17:41:03 crc kubenswrapper[4720]: I1013 17:41:03.733325 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf41e1e8-36d7-4805-b824-44322d841e38-config-data\") pod \"cinder-api-0\" (UID: \"bf41e1e8-36d7-4805-b824-44322d841e38\") " pod="openstack/cinder-api-0" Oct 13 17:41:03 crc kubenswrapper[4720]: I1013 17:41:03.733965 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf41e1e8-36d7-4805-b824-44322d841e38-scripts\") pod \"cinder-api-0\" (UID: \"bf41e1e8-36d7-4805-b824-44322d841e38\") " pod="openstack/cinder-api-0" Oct 13 17:41:03 crc kubenswrapper[4720]: I1013 17:41:03.749577 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhvtk\" (UniqueName: \"kubernetes.io/projected/bf41e1e8-36d7-4805-b824-44322d841e38-kube-api-access-mhvtk\") pod \"cinder-api-0\" (UID: \"bf41e1e8-36d7-4805-b824-44322d841e38\") " pod="openstack/cinder-api-0" Oct 13 17:41:03 crc kubenswrapper[4720]: I1013 17:41:03.790414 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Oct 13 17:41:04 crc kubenswrapper[4720]: I1013 17:41:04.106951 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 13 17:41:04 crc kubenswrapper[4720]: W1013 17:41:04.107846 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod580a1451_b144_42d0_b400_1bf271b17c7b.slice/crio-6b5c05b4a9a4fa71f2882d58ae3c17813093fe2f0f470e100ae8037fa3361854 WatchSource:0}: Error finding container 6b5c05b4a9a4fa71f2882d58ae3c17813093fe2f0f470e100ae8037fa3361854: Status 404 returned error can't find the container with id 6b5c05b4a9a4fa71f2882d58ae3c17813093fe2f0f470e100ae8037fa3361854 Oct 13 17:41:04 crc kubenswrapper[4720]: I1013 17:41:04.258691 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-bm4z9"] Oct 13 17:41:04 crc kubenswrapper[4720]: I1013 17:41:04.349904 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Oct 13 17:41:04 crc kubenswrapper[4720]: I1013 17:41:04.389455 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 13 17:41:04 crc kubenswrapper[4720]: I1013 17:41:04.431940 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"bf41e1e8-36d7-4805-b824-44322d841e38","Type":"ContainerStarted","Data":"23359f6ed524e81b386487f5cc7901fd277f8dc8d500e43f75e91f6555dfc6fe"} Oct 13 17:41:04 crc kubenswrapper[4720]: I1013 17:41:04.433723 4720 generic.go:334] "Generic (PLEG): container finished" podID="444e35b8-1d2a-4d83-be6c-2184ae0e3110" containerID="ba6eb3aa77b3fdc8ec12a4d7e473fe166dc435b469d0dd0061cd6b618f7f5dfa" exitCode=0 Oct 13 17:41:04 crc kubenswrapper[4720]: I1013 17:41:04.433808 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-2htwh" 
event={"ID":"444e35b8-1d2a-4d83-be6c-2184ae0e3110","Type":"ContainerDied","Data":"ba6eb3aa77b3fdc8ec12a4d7e473fe166dc435b469d0dd0061cd6b618f7f5dfa"} Oct 13 17:41:04 crc kubenswrapper[4720]: I1013 17:41:04.436786 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"458b7aa1-4ce6-4ecf-9e25-207256ca2a57","Type":"ContainerStarted","Data":"a7a89ea377cc862471422c319b2d1a3b04c6f29f4990e9e61bdd5342689be54b"} Oct 13 17:41:04 crc kubenswrapper[4720]: I1013 17:41:04.438880 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-bm4z9" event={"ID":"75f08079-c609-4da3-81c4-7c761992396a","Type":"ContainerStarted","Data":"dca69628a2fe1a524262d9cbaf96351dd208a8851b9c26e9e1b172eb46a40a09"} Oct 13 17:41:04 crc kubenswrapper[4720]: I1013 17:41:04.439936 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"580a1451-b144-42d0-b400-1bf271b17c7b","Type":"ContainerStarted","Data":"6b5c05b4a9a4fa71f2882d58ae3c17813093fe2f0f470e100ae8037fa3361854"} Oct 13 17:41:05 crc kubenswrapper[4720]: I1013 17:41:05.456075 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"458b7aa1-4ce6-4ecf-9e25-207256ca2a57","Type":"ContainerStarted","Data":"e87371c13426935a0d120e2ac95d8e565de5c8f6dbdb51202edb676187eee163"} Oct 13 17:41:05 crc kubenswrapper[4720]: I1013 17:41:05.462120 4720 generic.go:334] "Generic (PLEG): container finished" podID="75f08079-c609-4da3-81c4-7c761992396a" containerID="a386a7d73c6e81551b63d0bd8b5919279512925e8d91b58685e26b42e405b745" exitCode=0 Oct 13 17:41:05 crc kubenswrapper[4720]: I1013 17:41:05.462175 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-bm4z9" event={"ID":"75f08079-c609-4da3-81c4-7c761992396a","Type":"ContainerDied","Data":"a386a7d73c6e81551b63d0bd8b5919279512925e8d91b58685e26b42e405b745"} Oct 13 17:41:05 crc kubenswrapper[4720]: I1013 17:41:05.471774 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"bf41e1e8-36d7-4805-b824-44322d841e38","Type":"ContainerStarted","Data":"d17932eb3f1249cfefbdc41667ce1235565bba8a0c287910b108807f8759941b"} Oct 13 17:41:05 crc kubenswrapper[4720]: I1013 17:41:05.948038 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Oct 13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.042841 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-2htwh" Oct 13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.203734 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/444e35b8-1d2a-4d83-be6c-2184ae0e3110-logs\") pod \"444e35b8-1d2a-4d83-be6c-2184ae0e3110\" (UID: \"444e35b8-1d2a-4d83-be6c-2184ae0e3110\") " Oct 13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.204019 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/444e35b8-1d2a-4d83-be6c-2184ae0e3110-scripts\") pod \"444e35b8-1d2a-4d83-be6c-2184ae0e3110\" (UID: \"444e35b8-1d2a-4d83-be6c-2184ae0e3110\") " Oct 13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.204094 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vnndp\" (UniqueName: \"kubernetes.io/projected/444e35b8-1d2a-4d83-be6c-2184ae0e3110-kube-api-access-vnndp\") pod \"444e35b8-1d2a-4d83-be6c-2184ae0e3110\" (UID: \"444e35b8-1d2a-4d83-be6c-2184ae0e3110\") " Oct 13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.204215 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/444e35b8-1d2a-4d83-be6c-2184ae0e3110-combined-ca-bundle\") pod \"444e35b8-1d2a-4d83-be6c-2184ae0e3110\" (UID: \"444e35b8-1d2a-4d83-be6c-2184ae0e3110\") " Oct 13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.204296 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/444e35b8-1d2a-4d83-be6c-2184ae0e3110-config-data\") pod \"444e35b8-1d2a-4d83-be6c-2184ae0e3110\" (UID: \"444e35b8-1d2a-4d83-be6c-2184ae0e3110\") " Oct 13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.205895 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/444e35b8-1d2a-4d83-be6c-2184ae0e3110-logs" (OuterVolumeSpecName: "logs") pod "444e35b8-1d2a-4d83-be6c-2184ae0e3110" (UID: "444e35b8-1d2a-4d83-be6c-2184ae0e3110"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.213664 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/444e35b8-1d2a-4d83-be6c-2184ae0e3110-kube-api-access-vnndp" (OuterVolumeSpecName: "kube-api-access-vnndp") pod "444e35b8-1d2a-4d83-be6c-2184ae0e3110" (UID: "444e35b8-1d2a-4d83-be6c-2184ae0e3110"). InnerVolumeSpecName "kube-api-access-vnndp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.214321 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/444e35b8-1d2a-4d83-be6c-2184ae0e3110-scripts" (OuterVolumeSpecName: "scripts") pod "444e35b8-1d2a-4d83-be6c-2184ae0e3110" (UID: "444e35b8-1d2a-4d83-be6c-2184ae0e3110"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.271091 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/444e35b8-1d2a-4d83-be6c-2184ae0e3110-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "444e35b8-1d2a-4d83-be6c-2184ae0e3110" (UID: "444e35b8-1d2a-4d83-be6c-2184ae0e3110"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.271342 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/444e35b8-1d2a-4d83-be6c-2184ae0e3110-config-data" (OuterVolumeSpecName: "config-data") pod "444e35b8-1d2a-4d83-be6c-2184ae0e3110" (UID: "444e35b8-1d2a-4d83-be6c-2184ae0e3110"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.305910 4720 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/444e35b8-1d2a-4d83-be6c-2184ae0e3110-logs\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.305937 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/444e35b8-1d2a-4d83-be6c-2184ae0e3110-scripts\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.305946 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vnndp\" (UniqueName: \"kubernetes.io/projected/444e35b8-1d2a-4d83-be6c-2184ae0e3110-kube-api-access-vnndp\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.305955 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/444e35b8-1d2a-4d83-be6c-2184ae0e3110-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.305964 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/444e35b8-1d2a-4d83-be6c-2184ae0e3110-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.330322 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7df8489788-ntn24" Oct 13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.407542 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/139c2e02-2c20-4a21-a5c0-753c6003473b-horizon-secret-key\") pod \"139c2e02-2c20-4a21-a5c0-753c6003473b\" (UID: \"139c2e02-2c20-4a21-a5c0-753c6003473b\") " Oct 13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.407615 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/139c2e02-2c20-4a21-a5c0-753c6003473b-scripts\") pod \"139c2e02-2c20-4a21-a5c0-753c6003473b\" (UID: \"139c2e02-2c20-4a21-a5c0-753c6003473b\") " Oct 13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.407650 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/139c2e02-2c20-4a21-a5c0-753c6003473b-horizon-tls-certs\") pod \"139c2e02-2c20-4a21-a5c0-753c6003473b\" (UID: \"139c2e02-2c20-4a21-a5c0-753c6003473b\") " Oct 13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.407689 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/139c2e02-2c20-4a21-a5c0-753c6003473b-combined-ca-bundle\") pod \"139c2e02-2c20-4a21-a5c0-753c6003473b\" (UID: \"139c2e02-2c20-4a21-a5c0-753c6003473b\") " Oct 13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.407744 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/139c2e02-2c20-4a21-a5c0-753c6003473b-logs\") pod \"139c2e02-2c20-4a21-a5c0-753c6003473b\" (UID: \"139c2e02-2c20-4a21-a5c0-753c6003473b\") " Oct 13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.407782 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrd2q\" (UniqueName: \"kubernetes.io/projected/139c2e02-2c20-4a21-a5c0-753c6003473b-kube-api-access-lrd2q\") pod \"139c2e02-2c20-4a21-a5c0-753c6003473b\" (UID: \"139c2e02-2c20-4a21-a5c0-753c6003473b\") " Oct 13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.407818 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/139c2e02-2c20-4a21-a5c0-753c6003473b-config-data\") pod \"139c2e02-2c20-4a21-a5c0-753c6003473b\" (UID: \"139c2e02-2c20-4a21-a5c0-753c6003473b\") " Oct 13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.408363 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/139c2e02-2c20-4a21-a5c0-753c6003473b-logs" (OuterVolumeSpecName: "logs") pod "139c2e02-2c20-4a21-a5c0-753c6003473b" (UID: "139c2e02-2c20-4a21-a5c0-753c6003473b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.413096 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/139c2e02-2c20-4a21-a5c0-753c6003473b-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "139c2e02-2c20-4a21-a5c0-753c6003473b" (UID: "139c2e02-2c20-4a21-a5c0-753c6003473b"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.419752 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/139c2e02-2c20-4a21-a5c0-753c6003473b-kube-api-access-lrd2q" (OuterVolumeSpecName: "kube-api-access-lrd2q") pod "139c2e02-2c20-4a21-a5c0-753c6003473b" (UID: "139c2e02-2c20-4a21-a5c0-753c6003473b"). InnerVolumeSpecName "kube-api-access-lrd2q". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.442556 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/139c2e02-2c20-4a21-a5c0-753c6003473b-scripts" (OuterVolumeSpecName: "scripts") pod "139c2e02-2c20-4a21-a5c0-753c6003473b" (UID: "139c2e02-2c20-4a21-a5c0-753c6003473b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.444620 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/139c2e02-2c20-4a21-a5c0-753c6003473b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "139c2e02-2c20-4a21-a5c0-753c6003473b" (UID: "139c2e02-2c20-4a21-a5c0-753c6003473b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.455925 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/139c2e02-2c20-4a21-a5c0-753c6003473b-config-data" (OuterVolumeSpecName: "config-data") pod "139c2e02-2c20-4a21-a5c0-753c6003473b" (UID: "139c2e02-2c20-4a21-a5c0-753c6003473b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.469484 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/139c2e02-2c20-4a21-a5c0-753c6003473b-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "139c2e02-2c20-4a21-a5c0-753c6003473b" (UID: "139c2e02-2c20-4a21-a5c0-753c6003473b"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.488658 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"458b7aa1-4ce6-4ecf-9e25-207256ca2a57","Type":"ContainerStarted","Data":"0b16974eb9d178f9ccd519d56427ad64d9073f3d54ebafa4df938014a2de4006"} Oct 13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.492613 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-bm4z9" event={"ID":"75f08079-c609-4da3-81c4-7c761992396a","Type":"ContainerStarted","Data":"653ae5b4745a9760568b74b57428ba90af224ba8510179fb2709cf67bae317dc"} Oct 13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.493243 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5784cf869f-bm4z9" Oct 13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.497503 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"580a1451-b144-42d0-b400-1bf271b17c7b","Type":"ContainerStarted","Data":"25b97bc04fd8cade7f4ee8ae6c79a7f030b03ca73cbe965f2d2fbef2d199284a"} Oct 13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.499980 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"bf41e1e8-36d7-4805-b824-44322d841e38","Type":"ContainerStarted","Data":"7e54e5cf97d638eb9c08897a460c40d94602f2a74b52e91535e4ead95ab08e31"} Oct 13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.500088 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="bf41e1e8-36d7-4805-b824-44322d841e38" containerName="cinder-api-log" containerID="cri-o://d17932eb3f1249cfefbdc41667ce1235565bba8a0c287910b108807f8759941b" gracePeriod=30 Oct 13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.500280 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Oct 13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.500309 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="bf41e1e8-36d7-4805-b824-44322d841e38" containerName="cinder-api" containerID="cri-o://7e54e5cf97d638eb9c08897a460c40d94602f2a74b52e91535e4ead95ab08e31" gracePeriod=30 Oct 13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.509303 4720 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/139c2e02-2c20-4a21-a5c0-753c6003473b-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.509488 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/139c2e02-2c20-4a21-a5c0-753c6003473b-scripts\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.509498 4720 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/139c2e02-2c20-4a21-a5c0-753c6003473b-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.509505 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/139c2e02-2c20-4a21-a5c0-753c6003473b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.509513 4720 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/139c2e02-2c20-4a21-a5c0-753c6003473b-logs\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.509522 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lrd2q\" (UniqueName: \"kubernetes.io/projected/139c2e02-2c20-4a21-a5c0-753c6003473b-kube-api-access-lrd2q\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.509531 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/139c2e02-2c20-4a21-a5c0-753c6003473b-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.511491 4720 generic.go:334] "Generic (PLEG): container finished" podID="139c2e02-2c20-4a21-a5c0-753c6003473b" containerID="10c21702a32bc872627030d27e43d3321339dec03c70d87460aa37a2d88b5a48" exitCode=137 Oct 13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.511573 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7df8489788-ntn24" event={"ID":"139c2e02-2c20-4a21-a5c0-753c6003473b","Type":"ContainerDied","Data":"10c21702a32bc872627030d27e43d3321339dec03c70d87460aa37a2d88b5a48"} Oct 13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.511610 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7df8489788-ntn24" event={"ID":"139c2e02-2c20-4a21-a5c0-753c6003473b","Type":"ContainerDied","Data":"f706f305954c24412cc62e6c0afba1dd5060e9a161d6edfb94a7e2ba190d826c"} Oct 13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.511628 4720 scope.go:117] "RemoveContainer" containerID="b16a8f5b551805315d0632eef1b1948aa4930fa2db29ab6d2fe042db2c832eb1" Oct 13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.511745 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7df8489788-ntn24" Oct 13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.577172 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-2htwh" Oct 13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.580854 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5784cf869f-bm4z9" podStartSLOduration=3.580835811 podStartE2EDuration="3.580835811s" podCreationTimestamp="2025-10-13 17:41:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:41:06.513376685 +0000 UTC m=+1011.970626837" watchObservedRunningTime="2025-10-13 17:41:06.580835811 +0000 UTC m=+1012.038085943" Oct 13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.584619 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-2htwh" event={"ID":"444e35b8-1d2a-4d83-be6c-2184ae0e3110","Type":"ContainerDied","Data":"e39116e555360f96fc4444f8b547f1e09c12a0ff17254ce1332d6ffcad2e942c"} Oct 13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.584654 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e39116e555360f96fc4444f8b547f1e09c12a0ff17254ce1332d6ffcad2e942c" Oct 13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.612684 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-544f6df47b-z9rm6"] Oct 13 17:41:06 crc kubenswrapper[4720]: E1013 17:41:06.614442 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="139c2e02-2c20-4a21-a5c0-753c6003473b" containerName="horizon-log" Oct 13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.614464 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="139c2e02-2c20-4a21-a5c0-753c6003473b" containerName="horizon-log" Oct 13 17:41:06 crc kubenswrapper[4720]: E1013 17:41:06.614490 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="139c2e02-2c20-4a21-a5c0-753c6003473b" containerName="horizon" Oct 13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.614497 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="139c2e02-2c20-4a21-a5c0-753c6003473b" containerName="horizon" Oct 13 17:41:06 crc kubenswrapper[4720]: E1013 17:41:06.614507 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="444e35b8-1d2a-4d83-be6c-2184ae0e3110" containerName="placement-db-sync" Oct 13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.614524 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="444e35b8-1d2a-4d83-be6c-2184ae0e3110" containerName="placement-db-sync" Oct 13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.614691 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="139c2e02-2c20-4a21-a5c0-753c6003473b" containerName="horizon" Oct 13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.614700 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="444e35b8-1d2a-4d83-be6c-2184ae0e3110" containerName="placement-db-sync" Oct 13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.614711 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="139c2e02-2c20-4a21-a5c0-753c6003473b" containerName="horizon-log" Oct 13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.615600 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-544f6df47b-z9rm6" Oct 13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.620997 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-jdjtc" Oct 13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.621229 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Oct 13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.621390 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Oct 13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.624281 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Oct 13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.630547 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Oct 13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.649254 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-544f6df47b-z9rm6"] Oct 13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.664955 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.664923095 podStartE2EDuration="3.664923095s" podCreationTimestamp="2025-10-13 17:41:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:41:06.597776017 +0000 UTC m=+1012.055026149" watchObservedRunningTime="2025-10-13 17:41:06.664923095 +0000 UTC m=+1012.122173237" Oct 13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.690256 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7df8489788-ntn24"] Oct 13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.695816 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7df8489788-ntn24"] Oct 13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.713805 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acf3c288-2800-445a-9d67-134e0a7faac9-config-data\") pod \"placement-544f6df47b-z9rm6\" (UID: \"acf3c288-2800-445a-9d67-134e0a7faac9\") " pod="openstack/placement-544f6df47b-z9rm6" Oct 13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.713844 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/acf3c288-2800-445a-9d67-134e0a7faac9-internal-tls-certs\") pod \"placement-544f6df47b-z9rm6\" (UID: \"acf3c288-2800-445a-9d67-134e0a7faac9\") " pod="openstack/placement-544f6df47b-z9rm6" Oct 13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.713866 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/acf3c288-2800-445a-9d67-134e0a7faac9-public-tls-certs\") pod \"placement-544f6df47b-z9rm6\" (UID: \"acf3c288-2800-445a-9d67-134e0a7faac9\") " pod="openstack/placement-544f6df47b-z9rm6" Oct 13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.713909 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/acf3c288-2800-445a-9d67-134e0a7faac9-scripts\") pod \"placement-544f6df47b-z9rm6\" (UID: \"acf3c288-2800-445a-9d67-134e0a7faac9\") " pod="openstack/placement-544f6df47b-z9rm6" Oct 
13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.713941 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqrbq\" (UniqueName: \"kubernetes.io/projected/acf3c288-2800-445a-9d67-134e0a7faac9-kube-api-access-tqrbq\") pod \"placement-544f6df47b-z9rm6\" (UID: \"acf3c288-2800-445a-9d67-134e0a7faac9\") " pod="openstack/placement-544f6df47b-z9rm6" Oct 13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.713956 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acf3c288-2800-445a-9d67-134e0a7faac9-combined-ca-bundle\") pod \"placement-544f6df47b-z9rm6\" (UID: \"acf3c288-2800-445a-9d67-134e0a7faac9\") " pod="openstack/placement-544f6df47b-z9rm6" Oct 13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.713989 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/acf3c288-2800-445a-9d67-134e0a7faac9-logs\") pod \"placement-544f6df47b-z9rm6\" (UID: \"acf3c288-2800-445a-9d67-134e0a7faac9\") " pod="openstack/placement-544f6df47b-z9rm6" Oct 13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.741117 4720 scope.go:117] "RemoveContainer" containerID="10c21702a32bc872627030d27e43d3321339dec03c70d87460aa37a2d88b5a48" Oct 13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.775878 4720 scope.go:117] "RemoveContainer" containerID="b16a8f5b551805315d0632eef1b1948aa4930fa2db29ab6d2fe042db2c832eb1" Oct 13 17:41:06 crc kubenswrapper[4720]: E1013 17:41:06.777337 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b16a8f5b551805315d0632eef1b1948aa4930fa2db29ab6d2fe042db2c832eb1\": container with ID starting with b16a8f5b551805315d0632eef1b1948aa4930fa2db29ab6d2fe042db2c832eb1 not found: ID does not exist" containerID="b16a8f5b551805315d0632eef1b1948aa4930fa2db29ab6d2fe042db2c832eb1" Oct 13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.777365 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b16a8f5b551805315d0632eef1b1948aa4930fa2db29ab6d2fe042db2c832eb1"} err="failed to get container status \"b16a8f5b551805315d0632eef1b1948aa4930fa2db29ab6d2fe042db2c832eb1\": rpc error: code = NotFound desc = could not find container \"b16a8f5b551805315d0632eef1b1948aa4930fa2db29ab6d2fe042db2c832eb1\": container with ID starting with b16a8f5b551805315d0632eef1b1948aa4930fa2db29ab6d2fe042db2c832eb1 not found: ID does not exist" Oct 13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.777387 4720 scope.go:117] "RemoveContainer" containerID="10c21702a32bc872627030d27e43d3321339dec03c70d87460aa37a2d88b5a48" Oct 13 17:41:06 crc kubenswrapper[4720]: E1013 17:41:06.777654 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10c21702a32bc872627030d27e43d3321339dec03c70d87460aa37a2d88b5a48\": container with ID starting with 10c21702a32bc872627030d27e43d3321339dec03c70d87460aa37a2d88b5a48 not found: ID does not exist" containerID="10c21702a32bc872627030d27e43d3321339dec03c70d87460aa37a2d88b5a48" Oct 13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.777670 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10c21702a32bc872627030d27e43d3321339dec03c70d87460aa37a2d88b5a48"} err="failed to get container status 
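\"10c21702a32bc872627030d27e43d3321339dec03c70d87460aa37a2d88b5a48\": rpc error: code = NotFound desc = could not find container \"10c21702a32bc872627030d27e43d3321339dec03c70d87460aa37a2d88b5a48\": container with ID starting with 10c21702a32bc872627030d27e43d3321339dec03c70d87460aa37a2d88b5a48 not found: ID does not exist"

A "DeleteContainer returned error" with rpc code = NotFound right after "RemoveContainer" is the benign race visible here: the container was already gone by the time kubelet asked the runtime for its status, so removal is treated as idempotent and the error is only logged. A sketch of that check, assuming the google.golang.org/grpc module (removeContainer is a hypothetical stand-in for the CRI call):

```go
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeContainer simulates the runtime's reply seen in the log above.
func removeContainer(id string) error {
	return status.Errorf(codes.NotFound, "could not find container %q", id)
}

func main() {
	id := "10c21702a32bc872627030d27e43d3321339dec03c70d87460aa37a2d88b5a48"
	if err := removeContainer(id); status.Code(err) == codes.NotFound {
		// Already gone: deletion is idempotent, so log and continue
		// rather than failing the cleanup.
		fmt.Println("DeleteContainer returned error:", err)
	}
}
```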
\"10c21702a32bc872627030d27e43d3321339dec03c70d87460aa37a2d88b5a48\": rpc error: code = NotFound desc = could not find container \"10c21702a32bc872627030d27e43d3321339dec03c70d87460aa37a2d88b5a48\": container with ID starting with 10c21702a32bc872627030d27e43d3321339dec03c70d87460aa37a2d88b5a48 not found: ID does not exist" Oct 13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.815857 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acf3c288-2800-445a-9d67-134e0a7faac9-config-data\") pod \"placement-544f6df47b-z9rm6\" (UID: \"acf3c288-2800-445a-9d67-134e0a7faac9\") " pod="openstack/placement-544f6df47b-z9rm6" Oct 13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.815911 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/acf3c288-2800-445a-9d67-134e0a7faac9-internal-tls-certs\") pod \"placement-544f6df47b-z9rm6\" (UID: \"acf3c288-2800-445a-9d67-134e0a7faac9\") " pod="openstack/placement-544f6df47b-z9rm6" Oct 13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.815929 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/acf3c288-2800-445a-9d67-134e0a7faac9-public-tls-certs\") pod \"placement-544f6df47b-z9rm6\" (UID: \"acf3c288-2800-445a-9d67-134e0a7faac9\") " pod="openstack/placement-544f6df47b-z9rm6" Oct 13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.815971 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/acf3c288-2800-445a-9d67-134e0a7faac9-scripts\") pod \"placement-544f6df47b-z9rm6\" (UID: \"acf3c288-2800-445a-9d67-134e0a7faac9\") " pod="openstack/placement-544f6df47b-z9rm6" Oct 13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.816006 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqrbq\" (UniqueName: \"kubernetes.io/projected/acf3c288-2800-445a-9d67-134e0a7faac9-kube-api-access-tqrbq\") pod \"placement-544f6df47b-z9rm6\" (UID: \"acf3c288-2800-445a-9d67-134e0a7faac9\") " pod="openstack/placement-544f6df47b-z9rm6" Oct 13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.816021 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acf3c288-2800-445a-9d67-134e0a7faac9-combined-ca-bundle\") pod \"placement-544f6df47b-z9rm6\" (UID: \"acf3c288-2800-445a-9d67-134e0a7faac9\") " pod="openstack/placement-544f6df47b-z9rm6" Oct 13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.816055 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/acf3c288-2800-445a-9d67-134e0a7faac9-logs\") pod \"placement-544f6df47b-z9rm6\" (UID: \"acf3c288-2800-445a-9d67-134e0a7faac9\") " pod="openstack/placement-544f6df47b-z9rm6" Oct 13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.816508 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/acf3c288-2800-445a-9d67-134e0a7faac9-logs\") pod \"placement-544f6df47b-z9rm6\" (UID: \"acf3c288-2800-445a-9d67-134e0a7faac9\") " pod="openstack/placement-544f6df47b-z9rm6" Oct 13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.822787 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/acf3c288-2800-445a-9d67-134e0a7faac9-config-data\") pod \"placement-544f6df47b-z9rm6\" (UID: \"acf3c288-2800-445a-9d67-134e0a7faac9\") " pod="openstack/placement-544f6df47b-z9rm6" Oct 13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.823275 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/acf3c288-2800-445a-9d67-134e0a7faac9-public-tls-certs\") pod \"placement-544f6df47b-z9rm6\" (UID: \"acf3c288-2800-445a-9d67-134e0a7faac9\") " pod="openstack/placement-544f6df47b-z9rm6" Oct 13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.823962 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/acf3c288-2800-445a-9d67-134e0a7faac9-internal-tls-certs\") pod \"placement-544f6df47b-z9rm6\" (UID: \"acf3c288-2800-445a-9d67-134e0a7faac9\") " pod="openstack/placement-544f6df47b-z9rm6" Oct 13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.829850 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acf3c288-2800-445a-9d67-134e0a7faac9-combined-ca-bundle\") pod \"placement-544f6df47b-z9rm6\" (UID: \"acf3c288-2800-445a-9d67-134e0a7faac9\") " pod="openstack/placement-544f6df47b-z9rm6" Oct 13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.830070 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/acf3c288-2800-445a-9d67-134e0a7faac9-scripts\") pod \"placement-544f6df47b-z9rm6\" (UID: \"acf3c288-2800-445a-9d67-134e0a7faac9\") " pod="openstack/placement-544f6df47b-z9rm6" Oct 13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.841787 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqrbq\" (UniqueName: \"kubernetes.io/projected/acf3c288-2800-445a-9d67-134e0a7faac9-kube-api-access-tqrbq\") pod \"placement-544f6df47b-z9rm6\" (UID: \"acf3c288-2800-445a-9d67-134e0a7faac9\") " pod="openstack/placement-544f6df47b-z9rm6" Oct 13 17:41:06 crc kubenswrapper[4720]: I1013 17:41:06.980539 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-544f6df47b-z9rm6" Oct 13 17:41:07 crc kubenswrapper[4720]: I1013 17:41:07.190688 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="139c2e02-2c20-4a21-a5c0-753c6003473b" path="/var/lib/kubelet/pods/139c2e02-2c20-4a21-a5c0-753c6003473b/volumes" Oct 13 17:41:07 crc kubenswrapper[4720]: I1013 17:41:07.461846 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-544f6df47b-z9rm6"] Oct 13 17:41:07 crc kubenswrapper[4720]: I1013 17:41:07.595955 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"580a1451-b144-42d0-b400-1bf271b17c7b","Type":"ContainerStarted","Data":"4e9e66062e838ba1e4cb13399586ee872541740d0e9087948b6ca26f086d3b4c"} Oct 13 17:41:07 crc kubenswrapper[4720]: I1013 17:41:07.606536 4720 generic.go:334] "Generic (PLEG): container finished" podID="bf41e1e8-36d7-4805-b824-44322d841e38" containerID="d17932eb3f1249cfefbdc41667ce1235565bba8a0c287910b108807f8759941b" exitCode=143 Oct 13 17:41:07 crc kubenswrapper[4720]: I1013 17:41:07.606601 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"bf41e1e8-36d7-4805-b824-44322d841e38","Type":"ContainerDied","Data":"d17932eb3f1249cfefbdc41667ce1235565bba8a0c287910b108807f8759941b"} Oct 13 17:41:07 crc kubenswrapper[4720]: I1013 17:41:07.609670 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-544f6df47b-z9rm6" event={"ID":"acf3c288-2800-445a-9d67-134e0a7faac9","Type":"ContainerStarted","Data":"15f63af7e5c517bc6aa38f358e0951791d487ae3f337c3fcc4505eac05fa9920"} Oct 13 17:41:07 crc kubenswrapper[4720]: I1013 17:41:07.620513 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.791865941 podStartE2EDuration="4.620495292s" podCreationTimestamp="2025-10-13 17:41:03 +0000 UTC" firstStartedPulling="2025-10-13 17:41:04.113298779 +0000 UTC m=+1009.570548911" lastFinishedPulling="2025-10-13 17:41:04.94192812 +0000 UTC m=+1010.399178262" observedRunningTime="2025-10-13 17:41:07.619876616 +0000 UTC m=+1013.077126748" watchObservedRunningTime="2025-10-13 17:41:07.620495292 +0000 UTC m=+1013.077745424" Oct 13 17:41:08 crc kubenswrapper[4720]: I1013 17:41:08.125815 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6579788cd4-czbf2" Oct 13 17:41:08 crc kubenswrapper[4720]: I1013 17:41:08.618380 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Oct 13 17:41:08 crc kubenswrapper[4720]: I1013 17:41:08.621259 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"458b7aa1-4ce6-4ecf-9e25-207256ca2a57","Type":"ContainerStarted","Data":"deb6936aac3ae4b9788b871844e21bc29eca4a7e87fdfbb4a41196a8decabb6e"} Oct 13 17:41:08 crc kubenswrapper[4720]: I1013 17:41:08.621377 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="458b7aa1-4ce6-4ecf-9e25-207256ca2a57" containerName="ceilometer-central-agent" containerID="cri-o://a7a89ea377cc862471422c319b2d1a3b04c6f29f4990e9e61bdd5342689be54b" gracePeriod=30 Oct 13 17:41:08 crc kubenswrapper[4720]: I1013 17:41:08.621626 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="458b7aa1-4ce6-4ecf-9e25-207256ca2a57" containerName="proxy-httpd" 
containerID="cri-o://deb6936aac3ae4b9788b871844e21bc29eca4a7e87fdfbb4a41196a8decabb6e" gracePeriod=30 Oct 13 17:41:08 crc kubenswrapper[4720]: I1013 17:41:08.621667 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="458b7aa1-4ce6-4ecf-9e25-207256ca2a57" containerName="sg-core" containerID="cri-o://0b16974eb9d178f9ccd519d56427ad64d9073f3d54ebafa4df938014a2de4006" gracePeriod=30 Oct 13 17:41:08 crc kubenswrapper[4720]: I1013 17:41:08.621690 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 13 17:41:08 crc kubenswrapper[4720]: I1013 17:41:08.621712 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="458b7aa1-4ce6-4ecf-9e25-207256ca2a57" containerName="ceilometer-notification-agent" containerID="cri-o://e87371c13426935a0d120e2ac95d8e565de5c8f6dbdb51202edb676187eee163" gracePeriod=30 Oct 13 17:41:08 crc kubenswrapper[4720]: I1013 17:41:08.632690 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-544f6df47b-z9rm6" event={"ID":"acf3c288-2800-445a-9d67-134e0a7faac9","Type":"ContainerStarted","Data":"eda57c6ca01d000938499a59d821b585d4b3116c8e3192990cf2146e06f1bfb1"} Oct 13 17:41:08 crc kubenswrapper[4720]: I1013 17:41:08.633001 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-544f6df47b-z9rm6" Oct 13 17:41:08 crc kubenswrapper[4720]: I1013 17:41:08.633093 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-544f6df47b-z9rm6" event={"ID":"acf3c288-2800-445a-9d67-134e0a7faac9","Type":"ContainerStarted","Data":"0c69f66ba20818c60f81b14493083040a0dc3ac5f5c186ada1d562db64601c13"} Oct 13 17:41:08 crc kubenswrapper[4720]: I1013 17:41:08.633182 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-544f6df47b-z9rm6" Oct 13 17:41:08 crc kubenswrapper[4720]: I1013 17:41:08.649794 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.433734135 podStartE2EDuration="6.649774667s" podCreationTimestamp="2025-10-13 17:41:02 +0000 UTC" firstStartedPulling="2025-10-13 17:41:03.394405291 +0000 UTC m=+1008.851655423" lastFinishedPulling="2025-10-13 17:41:07.610445823 +0000 UTC m=+1013.067695955" observedRunningTime="2025-10-13 17:41:08.643614258 +0000 UTC m=+1014.100864390" watchObservedRunningTime="2025-10-13 17:41:08.649774667 +0000 UTC m=+1014.107024799" Oct 13 17:41:08 crc kubenswrapper[4720]: I1013 17:41:08.665370 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-544f6df47b-z9rm6" podStartSLOduration=2.665350847 podStartE2EDuration="2.665350847s" podCreationTimestamp="2025-10-13 17:41:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:41:08.661048417 +0000 UTC m=+1014.118298549" watchObservedRunningTime="2025-10-13 17:41:08.665350847 +0000 UTC m=+1014.122600979" Oct 13 17:41:09 crc kubenswrapper[4720]: I1013 17:41:09.643729 4720 generic.go:334] "Generic (PLEG): container finished" podID="458b7aa1-4ce6-4ecf-9e25-207256ca2a57" containerID="deb6936aac3ae4b9788b871844e21bc29eca4a7e87fdfbb4a41196a8decabb6e" exitCode=0 Oct 13 17:41:09 crc kubenswrapper[4720]: I1013 17:41:09.644045 4720 generic.go:334] "Generic (PLEG): container finished" podID="458b7aa1-4ce6-4ecf-9e25-207256ca2a57" 
containerID="0b16974eb9d178f9ccd519d56427ad64d9073f3d54ebafa4df938014a2de4006" exitCode=2 Oct 13 17:41:09 crc kubenswrapper[4720]: I1013 17:41:09.644056 4720 generic.go:334] "Generic (PLEG): container finished" podID="458b7aa1-4ce6-4ecf-9e25-207256ca2a57" containerID="e87371c13426935a0d120e2ac95d8e565de5c8f6dbdb51202edb676187eee163" exitCode=0 Oct 13 17:41:09 crc kubenswrapper[4720]: I1013 17:41:09.643816 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"458b7aa1-4ce6-4ecf-9e25-207256ca2a57","Type":"ContainerDied","Data":"deb6936aac3ae4b9788b871844e21bc29eca4a7e87fdfbb4a41196a8decabb6e"} Oct 13 17:41:09 crc kubenswrapper[4720]: I1013 17:41:09.644948 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"458b7aa1-4ce6-4ecf-9e25-207256ca2a57","Type":"ContainerDied","Data":"0b16974eb9d178f9ccd519d56427ad64d9073f3d54ebafa4df938014a2de4006"} Oct 13 17:41:09 crc kubenswrapper[4720]: I1013 17:41:09.644962 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"458b7aa1-4ce6-4ecf-9e25-207256ca2a57","Type":"ContainerDied","Data":"e87371c13426935a0d120e2ac95d8e565de5c8f6dbdb51202edb676187eee163"} Oct 13 17:41:10 crc kubenswrapper[4720]: I1013 17:41:10.566212 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7c7ddff655-r8ln9" Oct 13 17:41:10 crc kubenswrapper[4720]: I1013 17:41:10.646956 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6579788cd4-czbf2"] Oct 13 17:41:10 crc kubenswrapper[4720]: I1013 17:41:10.647215 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6579788cd4-czbf2" podUID="9a12c67e-6055-4594-850a-61ac731d7a8d" containerName="neutron-api" containerID="cri-o://929b941fbabd15a3a11f8e9db999ac85bec2aac99787b7835b38aecf06b96d9b" gracePeriod=30 Oct 13 17:41:10 crc kubenswrapper[4720]: I1013 17:41:10.647646 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6579788cd4-czbf2" podUID="9a12c67e-6055-4594-850a-61ac731d7a8d" containerName="neutron-httpd" containerID="cri-o://d6124ef920638a00efcc516961d520ffae3592268b676ed7b8bf8600cbd993de" gracePeriod=30 Oct 13 17:41:11 crc kubenswrapper[4720]: I1013 17:41:11.239865 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 13 17:41:11 crc kubenswrapper[4720]: I1013 17:41:11.298548 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/458b7aa1-4ce6-4ecf-9e25-207256ca2a57-config-data\") pod \"458b7aa1-4ce6-4ecf-9e25-207256ca2a57\" (UID: \"458b7aa1-4ce6-4ecf-9e25-207256ca2a57\") " Oct 13 17:41:11 crc kubenswrapper[4720]: I1013 17:41:11.298604 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/458b7aa1-4ce6-4ecf-9e25-207256ca2a57-sg-core-conf-yaml\") pod \"458b7aa1-4ce6-4ecf-9e25-207256ca2a57\" (UID: \"458b7aa1-4ce6-4ecf-9e25-207256ca2a57\") " Oct 13 17:41:11 crc kubenswrapper[4720]: I1013 17:41:11.298732 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/458b7aa1-4ce6-4ecf-9e25-207256ca2a57-scripts\") pod \"458b7aa1-4ce6-4ecf-9e25-207256ca2a57\" (UID: \"458b7aa1-4ce6-4ecf-9e25-207256ca2a57\") " Oct 13 17:41:11 crc kubenswrapper[4720]: I1013 17:41:11.298789 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/458b7aa1-4ce6-4ecf-9e25-207256ca2a57-run-httpd\") pod \"458b7aa1-4ce6-4ecf-9e25-207256ca2a57\" (UID: \"458b7aa1-4ce6-4ecf-9e25-207256ca2a57\") " Oct 13 17:41:11 crc kubenswrapper[4720]: I1013 17:41:11.298830 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/458b7aa1-4ce6-4ecf-9e25-207256ca2a57-log-httpd\") pod \"458b7aa1-4ce6-4ecf-9e25-207256ca2a57\" (UID: \"458b7aa1-4ce6-4ecf-9e25-207256ca2a57\") " Oct 13 17:41:11 crc kubenswrapper[4720]: I1013 17:41:11.298849 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xr8b9\" (UniqueName: \"kubernetes.io/projected/458b7aa1-4ce6-4ecf-9e25-207256ca2a57-kube-api-access-xr8b9\") pod \"458b7aa1-4ce6-4ecf-9e25-207256ca2a57\" (UID: \"458b7aa1-4ce6-4ecf-9e25-207256ca2a57\") " Oct 13 17:41:11 crc kubenswrapper[4720]: I1013 17:41:11.298867 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/458b7aa1-4ce6-4ecf-9e25-207256ca2a57-combined-ca-bundle\") pod \"458b7aa1-4ce6-4ecf-9e25-207256ca2a57\" (UID: \"458b7aa1-4ce6-4ecf-9e25-207256ca2a57\") " Oct 13 17:41:11 crc kubenswrapper[4720]: I1013 17:41:11.300566 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/458b7aa1-4ce6-4ecf-9e25-207256ca2a57-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "458b7aa1-4ce6-4ecf-9e25-207256ca2a57" (UID: "458b7aa1-4ce6-4ecf-9e25-207256ca2a57"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 17:41:11 crc kubenswrapper[4720]: I1013 17:41:11.300751 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/458b7aa1-4ce6-4ecf-9e25-207256ca2a57-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "458b7aa1-4ce6-4ecf-9e25-207256ca2a57" (UID: "458b7aa1-4ce6-4ecf-9e25-207256ca2a57"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 17:41:11 crc kubenswrapper[4720]: I1013 17:41:11.315505 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 13 17:41:11 crc kubenswrapper[4720]: I1013 17:41:11.315955 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="972b98ca-7012-422d-8839-a196b6a3b919" containerName="glance-log" containerID="cri-o://cf8d040f3b31142b2e3557ad717515b1227bb986bdfd43dc7fcc2ab6a3a20d90" gracePeriod=30 Oct 13 17:41:11 crc kubenswrapper[4720]: I1013 17:41:11.317269 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="972b98ca-7012-422d-8839-a196b6a3b919" containerName="glance-httpd" containerID="cri-o://dd2f993ad3263814f13966b6e376fc39bbfd7f89fbbfb6c77b97572e6786fed7" gracePeriod=30 Oct 13 17:41:11 crc kubenswrapper[4720]: I1013 17:41:11.333414 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/458b7aa1-4ce6-4ecf-9e25-207256ca2a57-kube-api-access-xr8b9" (OuterVolumeSpecName: "kube-api-access-xr8b9") pod "458b7aa1-4ce6-4ecf-9e25-207256ca2a57" (UID: "458b7aa1-4ce6-4ecf-9e25-207256ca2a57"). InnerVolumeSpecName "kube-api-access-xr8b9". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:41:11 crc kubenswrapper[4720]: I1013 17:41:11.337500 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/458b7aa1-4ce6-4ecf-9e25-207256ca2a57-scripts" (OuterVolumeSpecName: "scripts") pod "458b7aa1-4ce6-4ecf-9e25-207256ca2a57" (UID: "458b7aa1-4ce6-4ecf-9e25-207256ca2a57"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:41:11 crc kubenswrapper[4720]: I1013 17:41:11.347392 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/458b7aa1-4ce6-4ecf-9e25-207256ca2a57-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "458b7aa1-4ce6-4ecf-9e25-207256ca2a57" (UID: "458b7aa1-4ce6-4ecf-9e25-207256ca2a57"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:41:11 crc kubenswrapper[4720]: I1013 17:41:11.403177 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/458b7aa1-4ce6-4ecf-9e25-207256ca2a57-scripts\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:11 crc kubenswrapper[4720]: I1013 17:41:11.403245 4720 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/458b7aa1-4ce6-4ecf-9e25-207256ca2a57-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:11 crc kubenswrapper[4720]: I1013 17:41:11.403255 4720 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/458b7aa1-4ce6-4ecf-9e25-207256ca2a57-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:11 crc kubenswrapper[4720]: I1013 17:41:11.403263 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xr8b9\" (UniqueName: \"kubernetes.io/projected/458b7aa1-4ce6-4ecf-9e25-207256ca2a57-kube-api-access-xr8b9\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:11 crc kubenswrapper[4720]: I1013 17:41:11.403294 4720 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/458b7aa1-4ce6-4ecf-9e25-207256ca2a57-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:11 crc kubenswrapper[4720]: I1013 17:41:11.442953 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/458b7aa1-4ce6-4ecf-9e25-207256ca2a57-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "458b7aa1-4ce6-4ecf-9e25-207256ca2a57" (UID: "458b7aa1-4ce6-4ecf-9e25-207256ca2a57"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:41:11 crc kubenswrapper[4720]: I1013 17:41:11.465614 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/458b7aa1-4ce6-4ecf-9e25-207256ca2a57-config-data" (OuterVolumeSpecName: "config-data") pod "458b7aa1-4ce6-4ecf-9e25-207256ca2a57" (UID: "458b7aa1-4ce6-4ecf-9e25-207256ca2a57"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:41:11 crc kubenswrapper[4720]: I1013 17:41:11.504784 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/458b7aa1-4ce6-4ecf-9e25-207256ca2a57-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:11 crc kubenswrapper[4720]: I1013 17:41:11.504817 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/458b7aa1-4ce6-4ecf-9e25-207256ca2a57-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:11 crc kubenswrapper[4720]: I1013 17:41:11.669351 4720 generic.go:334] "Generic (PLEG): container finished" podID="9a12c67e-6055-4594-850a-61ac731d7a8d" containerID="d6124ef920638a00efcc516961d520ffae3592268b676ed7b8bf8600cbd993de" exitCode=0 Oct 13 17:41:11 crc kubenswrapper[4720]: I1013 17:41:11.669415 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6579788cd4-czbf2" event={"ID":"9a12c67e-6055-4594-850a-61ac731d7a8d","Type":"ContainerDied","Data":"d6124ef920638a00efcc516961d520ffae3592268b676ed7b8bf8600cbd993de"} Oct 13 17:41:11 crc kubenswrapper[4720]: I1013 17:41:11.671631 4720 generic.go:334] "Generic (PLEG): container finished" podID="972b98ca-7012-422d-8839-a196b6a3b919" containerID="cf8d040f3b31142b2e3557ad717515b1227bb986bdfd43dc7fcc2ab6a3a20d90" exitCode=143 Oct 13 17:41:11 crc kubenswrapper[4720]: I1013 17:41:11.671670 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"972b98ca-7012-422d-8839-a196b6a3b919","Type":"ContainerDied","Data":"cf8d040f3b31142b2e3557ad717515b1227bb986bdfd43dc7fcc2ab6a3a20d90"} Oct 13 17:41:11 crc kubenswrapper[4720]: I1013 17:41:11.674226 4720 generic.go:334] "Generic (PLEG): container finished" podID="458b7aa1-4ce6-4ecf-9e25-207256ca2a57" containerID="a7a89ea377cc862471422c319b2d1a3b04c6f29f4990e9e61bdd5342689be54b" exitCode=0 Oct 13 17:41:11 crc kubenswrapper[4720]: I1013 17:41:11.674248 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"458b7aa1-4ce6-4ecf-9e25-207256ca2a57","Type":"ContainerDied","Data":"a7a89ea377cc862471422c319b2d1a3b04c6f29f4990e9e61bdd5342689be54b"} Oct 13 17:41:11 crc kubenswrapper[4720]: I1013 17:41:11.674263 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"458b7aa1-4ce6-4ecf-9e25-207256ca2a57","Type":"ContainerDied","Data":"711a2755b373354f7b7bf2707f7190f1666fda3277faac2e59eb7e0a1e17924a"} Oct 13 17:41:11 crc kubenswrapper[4720]: I1013 17:41:11.674280 4720 scope.go:117] "RemoveContainer" containerID="deb6936aac3ae4b9788b871844e21bc29eca4a7e87fdfbb4a41196a8decabb6e" Oct 13 17:41:11 crc kubenswrapper[4720]: I1013 17:41:11.674412 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 13 17:41:11 crc kubenswrapper[4720]: I1013 17:41:11.706227 4720 scope.go:117] "RemoveContainer" containerID="0b16974eb9d178f9ccd519d56427ad64d9073f3d54ebafa4df938014a2de4006" Oct 13 17:41:11 crc kubenswrapper[4720]: I1013 17:41:11.708210 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 13 17:41:11 crc kubenswrapper[4720]: I1013 17:41:11.718061 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 13 17:41:11 crc kubenswrapper[4720]: I1013 17:41:11.724586 4720 scope.go:117] "RemoveContainer" containerID="e87371c13426935a0d120e2ac95d8e565de5c8f6dbdb51202edb676187eee163" Oct 13 17:41:11 crc kubenswrapper[4720]: I1013 17:41:11.742073 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 13 17:41:11 crc kubenswrapper[4720]: E1013 17:41:11.742466 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="458b7aa1-4ce6-4ecf-9e25-207256ca2a57" containerName="proxy-httpd" Oct 13 17:41:11 crc kubenswrapper[4720]: I1013 17:41:11.742483 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="458b7aa1-4ce6-4ecf-9e25-207256ca2a57" containerName="proxy-httpd" Oct 13 17:41:11 crc kubenswrapper[4720]: E1013 17:41:11.742498 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="458b7aa1-4ce6-4ecf-9e25-207256ca2a57" containerName="sg-core" Oct 13 17:41:11 crc kubenswrapper[4720]: I1013 17:41:11.742504 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="458b7aa1-4ce6-4ecf-9e25-207256ca2a57" containerName="sg-core" Oct 13 17:41:11 crc kubenswrapper[4720]: E1013 17:41:11.742517 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="458b7aa1-4ce6-4ecf-9e25-207256ca2a57" containerName="ceilometer-notification-agent" Oct 13 17:41:11 crc kubenswrapper[4720]: I1013 17:41:11.742524 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="458b7aa1-4ce6-4ecf-9e25-207256ca2a57" containerName="ceilometer-notification-agent" Oct 13 17:41:11 crc kubenswrapper[4720]: E1013 17:41:11.742544 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="458b7aa1-4ce6-4ecf-9e25-207256ca2a57" containerName="ceilometer-central-agent" Oct 13 17:41:11 crc kubenswrapper[4720]: I1013 17:41:11.742549 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="458b7aa1-4ce6-4ecf-9e25-207256ca2a57" containerName="ceilometer-central-agent" Oct 13 17:41:11 crc kubenswrapper[4720]: I1013 17:41:11.742711 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="458b7aa1-4ce6-4ecf-9e25-207256ca2a57" containerName="ceilometer-notification-agent" Oct 13 17:41:11 crc kubenswrapper[4720]: I1013 17:41:11.742722 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="458b7aa1-4ce6-4ecf-9e25-207256ca2a57" containerName="ceilometer-central-agent" Oct 13 17:41:11 crc kubenswrapper[4720]: I1013 17:41:11.742734 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="458b7aa1-4ce6-4ecf-9e25-207256ca2a57" containerName="proxy-httpd" Oct 13 17:41:11 crc kubenswrapper[4720]: I1013 17:41:11.742743 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="458b7aa1-4ce6-4ecf-9e25-207256ca2a57" containerName="sg-core" Oct 13 17:41:11 crc kubenswrapper[4720]: I1013 17:41:11.744239 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 13 17:41:11 crc kubenswrapper[4720]: I1013 17:41:11.745855 4720 scope.go:117] "RemoveContainer" containerID="a7a89ea377cc862471422c319b2d1a3b04c6f29f4990e9e61bdd5342689be54b" Oct 13 17:41:11 crc kubenswrapper[4720]: I1013 17:41:11.746841 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 13 17:41:11 crc kubenswrapper[4720]: I1013 17:41:11.751466 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 13 17:41:11 crc kubenswrapper[4720]: I1013 17:41:11.754354 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 13 17:41:11 crc kubenswrapper[4720]: I1013 17:41:11.810343 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c0e5acdb-49fb-4d97-8330-61bb2eeba14f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c0e5acdb-49fb-4d97-8330-61bb2eeba14f\") " pod="openstack/ceilometer-0" Oct 13 17:41:11 crc kubenswrapper[4720]: I1013 17:41:11.810886 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0e5acdb-49fb-4d97-8330-61bb2eeba14f-config-data\") pod \"ceilometer-0\" (UID: \"c0e5acdb-49fb-4d97-8330-61bb2eeba14f\") " pod="openstack/ceilometer-0" Oct 13 17:41:11 crc kubenswrapper[4720]: I1013 17:41:11.811026 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jx82t\" (UniqueName: \"kubernetes.io/projected/c0e5acdb-49fb-4d97-8330-61bb2eeba14f-kube-api-access-jx82t\") pod \"ceilometer-0\" (UID: \"c0e5acdb-49fb-4d97-8330-61bb2eeba14f\") " pod="openstack/ceilometer-0" Oct 13 17:41:11 crc kubenswrapper[4720]: I1013 17:41:11.811215 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0e5acdb-49fb-4d97-8330-61bb2eeba14f-run-httpd\") pod \"ceilometer-0\" (UID: \"c0e5acdb-49fb-4d97-8330-61bb2eeba14f\") " pod="openstack/ceilometer-0" Oct 13 17:41:11 crc kubenswrapper[4720]: I1013 17:41:11.811373 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0e5acdb-49fb-4d97-8330-61bb2eeba14f-scripts\") pod \"ceilometer-0\" (UID: \"c0e5acdb-49fb-4d97-8330-61bb2eeba14f\") " pod="openstack/ceilometer-0" Oct 13 17:41:11 crc kubenswrapper[4720]: I1013 17:41:11.811487 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0e5acdb-49fb-4d97-8330-61bb2eeba14f-log-httpd\") pod \"ceilometer-0\" (UID: \"c0e5acdb-49fb-4d97-8330-61bb2eeba14f\") " pod="openstack/ceilometer-0" Oct 13 17:41:11 crc kubenswrapper[4720]: I1013 17:41:11.811638 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0e5acdb-49fb-4d97-8330-61bb2eeba14f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c0e5acdb-49fb-4d97-8330-61bb2eeba14f\") " pod="openstack/ceilometer-0" Oct 13 17:41:11 crc kubenswrapper[4720]: I1013 17:41:11.840578 4720 scope.go:117] "RemoveContainer" containerID="deb6936aac3ae4b9788b871844e21bc29eca4a7e87fdfbb4a41196a8decabb6e" Oct 13 17:41:11 crc kubenswrapper[4720]: E1013 
17:41:11.841436 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"deb6936aac3ae4b9788b871844e21bc29eca4a7e87fdfbb4a41196a8decabb6e\": container with ID starting with deb6936aac3ae4b9788b871844e21bc29eca4a7e87fdfbb4a41196a8decabb6e not found: ID does not exist" containerID="deb6936aac3ae4b9788b871844e21bc29eca4a7e87fdfbb4a41196a8decabb6e" Oct 13 17:41:11 crc kubenswrapper[4720]: I1013 17:41:11.841482 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"deb6936aac3ae4b9788b871844e21bc29eca4a7e87fdfbb4a41196a8decabb6e"} err="failed to get container status \"deb6936aac3ae4b9788b871844e21bc29eca4a7e87fdfbb4a41196a8decabb6e\": rpc error: code = NotFound desc = could not find container \"deb6936aac3ae4b9788b871844e21bc29eca4a7e87fdfbb4a41196a8decabb6e\": container with ID starting with deb6936aac3ae4b9788b871844e21bc29eca4a7e87fdfbb4a41196a8decabb6e not found: ID does not exist" Oct 13 17:41:11 crc kubenswrapper[4720]: I1013 17:41:11.841507 4720 scope.go:117] "RemoveContainer" containerID="0b16974eb9d178f9ccd519d56427ad64d9073f3d54ebafa4df938014a2de4006" Oct 13 17:41:11 crc kubenswrapper[4720]: E1013 17:41:11.842422 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b16974eb9d178f9ccd519d56427ad64d9073f3d54ebafa4df938014a2de4006\": container with ID starting with 0b16974eb9d178f9ccd519d56427ad64d9073f3d54ebafa4df938014a2de4006 not found: ID does not exist" containerID="0b16974eb9d178f9ccd519d56427ad64d9073f3d54ebafa4df938014a2de4006" Oct 13 17:41:11 crc kubenswrapper[4720]: I1013 17:41:11.842445 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b16974eb9d178f9ccd519d56427ad64d9073f3d54ebafa4df938014a2de4006"} err="failed to get container status \"0b16974eb9d178f9ccd519d56427ad64d9073f3d54ebafa4df938014a2de4006\": rpc error: code = NotFound desc = could not find container \"0b16974eb9d178f9ccd519d56427ad64d9073f3d54ebafa4df938014a2de4006\": container with ID starting with 0b16974eb9d178f9ccd519d56427ad64d9073f3d54ebafa4df938014a2de4006 not found: ID does not exist" Oct 13 17:41:11 crc kubenswrapper[4720]: I1013 17:41:11.842460 4720 scope.go:117] "RemoveContainer" containerID="e87371c13426935a0d120e2ac95d8e565de5c8f6dbdb51202edb676187eee163" Oct 13 17:41:11 crc kubenswrapper[4720]: E1013 17:41:11.842780 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e87371c13426935a0d120e2ac95d8e565de5c8f6dbdb51202edb676187eee163\": container with ID starting with e87371c13426935a0d120e2ac95d8e565de5c8f6dbdb51202edb676187eee163 not found: ID does not exist" containerID="e87371c13426935a0d120e2ac95d8e565de5c8f6dbdb51202edb676187eee163" Oct 13 17:41:11 crc kubenswrapper[4720]: I1013 17:41:11.842852 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e87371c13426935a0d120e2ac95d8e565de5c8f6dbdb51202edb676187eee163"} err="failed to get container status \"e87371c13426935a0d120e2ac95d8e565de5c8f6dbdb51202edb676187eee163\": rpc error: code = NotFound desc = could not find container \"e87371c13426935a0d120e2ac95d8e565de5c8f6dbdb51202edb676187eee163\": container with ID starting with e87371c13426935a0d120e2ac95d8e565de5c8f6dbdb51202edb676187eee163 not found: ID does not exist" Oct 13 17:41:11 crc kubenswrapper[4720]: I1013 17:41:11.842906 4720 
scope.go:117] "RemoveContainer" containerID="a7a89ea377cc862471422c319b2d1a3b04c6f29f4990e9e61bdd5342689be54b" Oct 13 17:41:11 crc kubenswrapper[4720]: E1013 17:41:11.844721 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7a89ea377cc862471422c319b2d1a3b04c6f29f4990e9e61bdd5342689be54b\": container with ID starting with a7a89ea377cc862471422c319b2d1a3b04c6f29f4990e9e61bdd5342689be54b not found: ID does not exist" containerID="a7a89ea377cc862471422c319b2d1a3b04c6f29f4990e9e61bdd5342689be54b" Oct 13 17:41:11 crc kubenswrapper[4720]: I1013 17:41:11.844862 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7a89ea377cc862471422c319b2d1a3b04c6f29f4990e9e61bdd5342689be54b"} err="failed to get container status \"a7a89ea377cc862471422c319b2d1a3b04c6f29f4990e9e61bdd5342689be54b\": rpc error: code = NotFound desc = could not find container \"a7a89ea377cc862471422c319b2d1a3b04c6f29f4990e9e61bdd5342689be54b\": container with ID starting with a7a89ea377cc862471422c319b2d1a3b04c6f29f4990e9e61bdd5342689be54b not found: ID does not exist" Oct 13 17:41:11 crc kubenswrapper[4720]: I1013 17:41:11.914818 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0e5acdb-49fb-4d97-8330-61bb2eeba14f-run-httpd\") pod \"ceilometer-0\" (UID: \"c0e5acdb-49fb-4d97-8330-61bb2eeba14f\") " pod="openstack/ceilometer-0" Oct 13 17:41:11 crc kubenswrapper[4720]: I1013 17:41:11.915359 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0e5acdb-49fb-4d97-8330-61bb2eeba14f-run-httpd\") pod \"ceilometer-0\" (UID: \"c0e5acdb-49fb-4d97-8330-61bb2eeba14f\") " pod="openstack/ceilometer-0" Oct 13 17:41:11 crc kubenswrapper[4720]: I1013 17:41:11.915435 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0e5acdb-49fb-4d97-8330-61bb2eeba14f-scripts\") pod \"ceilometer-0\" (UID: \"c0e5acdb-49fb-4d97-8330-61bb2eeba14f\") " pod="openstack/ceilometer-0" Oct 13 17:41:11 crc kubenswrapper[4720]: I1013 17:41:11.915629 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0e5acdb-49fb-4d97-8330-61bb2eeba14f-log-httpd\") pod \"ceilometer-0\" (UID: \"c0e5acdb-49fb-4d97-8330-61bb2eeba14f\") " pod="openstack/ceilometer-0" Oct 13 17:41:11 crc kubenswrapper[4720]: I1013 17:41:11.915749 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0e5acdb-49fb-4d97-8330-61bb2eeba14f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c0e5acdb-49fb-4d97-8330-61bb2eeba14f\") " pod="openstack/ceilometer-0" Oct 13 17:41:11 crc kubenswrapper[4720]: I1013 17:41:11.916200 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c0e5acdb-49fb-4d97-8330-61bb2eeba14f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c0e5acdb-49fb-4d97-8330-61bb2eeba14f\") " pod="openstack/ceilometer-0" Oct 13 17:41:11 crc kubenswrapper[4720]: I1013 17:41:11.915990 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0e5acdb-49fb-4d97-8330-61bb2eeba14f-log-httpd\") pod \"ceilometer-0\" (UID: 
\"c0e5acdb-49fb-4d97-8330-61bb2eeba14f\") " pod="openstack/ceilometer-0" Oct 13 17:41:11 crc kubenswrapper[4720]: I1013 17:41:11.916476 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0e5acdb-49fb-4d97-8330-61bb2eeba14f-config-data\") pod \"ceilometer-0\" (UID: \"c0e5acdb-49fb-4d97-8330-61bb2eeba14f\") " pod="openstack/ceilometer-0" Oct 13 17:41:11 crc kubenswrapper[4720]: I1013 17:41:11.916581 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jx82t\" (UniqueName: \"kubernetes.io/projected/c0e5acdb-49fb-4d97-8330-61bb2eeba14f-kube-api-access-jx82t\") pod \"ceilometer-0\" (UID: \"c0e5acdb-49fb-4d97-8330-61bb2eeba14f\") " pod="openstack/ceilometer-0" Oct 13 17:41:11 crc kubenswrapper[4720]: I1013 17:41:11.919764 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0e5acdb-49fb-4d97-8330-61bb2eeba14f-scripts\") pod \"ceilometer-0\" (UID: \"c0e5acdb-49fb-4d97-8330-61bb2eeba14f\") " pod="openstack/ceilometer-0" Oct 13 17:41:11 crc kubenswrapper[4720]: I1013 17:41:11.920071 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c0e5acdb-49fb-4d97-8330-61bb2eeba14f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c0e5acdb-49fb-4d97-8330-61bb2eeba14f\") " pod="openstack/ceilometer-0" Oct 13 17:41:11 crc kubenswrapper[4720]: I1013 17:41:11.920103 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0e5acdb-49fb-4d97-8330-61bb2eeba14f-config-data\") pod \"ceilometer-0\" (UID: \"c0e5acdb-49fb-4d97-8330-61bb2eeba14f\") " pod="openstack/ceilometer-0" Oct 13 17:41:11 crc kubenswrapper[4720]: I1013 17:41:11.920468 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0e5acdb-49fb-4d97-8330-61bb2eeba14f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c0e5acdb-49fb-4d97-8330-61bb2eeba14f\") " pod="openstack/ceilometer-0" Oct 13 17:41:11 crc kubenswrapper[4720]: I1013 17:41:11.935684 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jx82t\" (UniqueName: \"kubernetes.io/projected/c0e5acdb-49fb-4d97-8330-61bb2eeba14f-kube-api-access-jx82t\") pod \"ceilometer-0\" (UID: \"c0e5acdb-49fb-4d97-8330-61bb2eeba14f\") " pod="openstack/ceilometer-0" Oct 13 17:41:12 crc kubenswrapper[4720]: I1013 17:41:12.120331 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 13 17:41:12 crc kubenswrapper[4720]: I1013 17:41:12.561349 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 13 17:41:12 crc kubenswrapper[4720]: I1013 17:41:12.684150 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c0e5acdb-49fb-4d97-8330-61bb2eeba14f","Type":"ContainerStarted","Data":"2f6e2ab81fd55c8de49b1dece25af458f8edda5c34a561fc9c9a738c0e90bf85"} Oct 13 17:41:13 crc kubenswrapper[4720]: I1013 17:41:13.178872 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="458b7aa1-4ce6-4ecf-9e25-207256ca2a57" path="/var/lib/kubelet/pods/458b7aa1-4ce6-4ecf-9e25-207256ca2a57/volumes" Oct 13 17:41:13 crc kubenswrapper[4720]: I1013 17:41:13.539047 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 13 17:41:13 crc kubenswrapper[4720]: I1013 17:41:13.539296 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="f1742c95-4906-4545-8e62-38903c72b168" containerName="glance-log" containerID="cri-o://11524c98eb61e0537ae1d6d91addce7769b6a1193e5f00f1994585c7e37277f3" gracePeriod=30 Oct 13 17:41:13 crc kubenswrapper[4720]: I1013 17:41:13.539427 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="f1742c95-4906-4545-8e62-38903c72b168" containerName="glance-httpd" containerID="cri-o://08fb538daabd98d65e3554b1d509b5149739a281779328f80b240a2b7df44866" gracePeriod=30 Oct 13 17:41:13 crc kubenswrapper[4720]: I1013 17:41:13.667363 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5784cf869f-bm4z9" Oct 13 17:41:13 crc kubenswrapper[4720]: I1013 17:41:13.694063 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c0e5acdb-49fb-4d97-8330-61bb2eeba14f","Type":"ContainerStarted","Data":"60833019df1e6e9d78cded50a1bc7d0fc7fc1ac7990aa51939fedb0d5f63f88b"} Oct 13 17:41:13 crc kubenswrapper[4720]: I1013 17:41:13.696334 4720 generic.go:334] "Generic (PLEG): container finished" podID="f1742c95-4906-4545-8e62-38903c72b168" containerID="11524c98eb61e0537ae1d6d91addce7769b6a1193e5f00f1994585c7e37277f3" exitCode=143 Oct 13 17:41:13 crc kubenswrapper[4720]: I1013 17:41:13.696400 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f1742c95-4906-4545-8e62-38903c72b168","Type":"ContainerDied","Data":"11524c98eb61e0537ae1d6d91addce7769b6a1193e5f00f1994585c7e37277f3"} Oct 13 17:41:13 crc kubenswrapper[4720]: I1013 17:41:13.855406 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-hrltm"] Oct 13 17:41:13 crc kubenswrapper[4720]: I1013 17:41:13.855670 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-75c8ddd69c-hrltm" podUID="aee71b90-ce44-4a05-ad3c-dc4f8ccf3d2b" containerName="dnsmasq-dns" containerID="cri-o://f0ecab07f82314444b153181b1e3feb280a903152b88ad306f18918863152e52" gracePeriod=10 Oct 13 17:41:14 crc kubenswrapper[4720]: I1013 17:41:14.115907 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Oct 13 17:41:14 crc kubenswrapper[4720]: I1013 17:41:14.163018 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 13 17:41:14 
crc kubenswrapper[4720]: I1013 17:41:14.708393 4720 generic.go:334] "Generic (PLEG): container finished" podID="aee71b90-ce44-4a05-ad3c-dc4f8ccf3d2b" containerID="f0ecab07f82314444b153181b1e3feb280a903152b88ad306f18918863152e52" exitCode=0 Oct 13 17:41:14 crc kubenswrapper[4720]: I1013 17:41:14.708433 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-hrltm" event={"ID":"aee71b90-ce44-4a05-ad3c-dc4f8ccf3d2b","Type":"ContainerDied","Data":"f0ecab07f82314444b153181b1e3feb280a903152b88ad306f18918863152e52"} Oct 13 17:41:14 crc kubenswrapper[4720]: I1013 17:41:14.708826 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="580a1451-b144-42d0-b400-1bf271b17c7b" containerName="probe" containerID="cri-o://4e9e66062e838ba1e4cb13399586ee872541740d0e9087948b6ca26f086d3b4c" gracePeriod=30 Oct 13 17:41:14 crc kubenswrapper[4720]: I1013 17:41:14.708835 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="580a1451-b144-42d0-b400-1bf271b17c7b" containerName="cinder-scheduler" containerID="cri-o://25b97bc04fd8cade7f4ee8ae6c79a7f030b03ca73cbe965f2d2fbef2d199284a" gracePeriod=30 Oct 13 17:41:15 crc kubenswrapper[4720]: I1013 17:41:15.013332 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-3646-account-create-t2hq4"] Oct 13 17:41:15 crc kubenswrapper[4720]: I1013 17:41:15.014736 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-3646-account-create-t2hq4" Oct 13 17:41:15 crc kubenswrapper[4720]: I1013 17:41:15.016902 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Oct 13 17:41:15 crc kubenswrapper[4720]: I1013 17:41:15.020296 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-3646-account-create-t2hq4"] Oct 13 17:41:15 crc kubenswrapper[4720]: I1013 17:41:15.120536 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhfwl\" (UniqueName: \"kubernetes.io/projected/bb3ca24e-8779-41d4-b8cc-8a6bc524e81d-kube-api-access-qhfwl\") pod \"nova-api-3646-account-create-t2hq4\" (UID: \"bb3ca24e-8779-41d4-b8cc-8a6bc524e81d\") " pod="openstack/nova-api-3646-account-create-t2hq4" Oct 13 17:41:15 crc kubenswrapper[4720]: I1013 17:41:15.223499 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-557a-account-create-m5cvn"] Oct 13 17:41:15 crc kubenswrapper[4720]: I1013 17:41:15.225128 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-557a-account-create-m5cvn" Oct 13 17:41:15 crc kubenswrapper[4720]: I1013 17:41:15.231488 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Oct 13 17:41:15 crc kubenswrapper[4720]: I1013 17:41:15.234057 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwd25\" (UniqueName: \"kubernetes.io/projected/0bdf4571-07ae-4672-937b-f445bf5578ad-kube-api-access-kwd25\") pod \"nova-cell0-557a-account-create-m5cvn\" (UID: \"0bdf4571-07ae-4672-937b-f445bf5578ad\") " pod="openstack/nova-cell0-557a-account-create-m5cvn" Oct 13 17:41:15 crc kubenswrapper[4720]: I1013 17:41:15.234155 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhfwl\" (UniqueName: \"kubernetes.io/projected/bb3ca24e-8779-41d4-b8cc-8a6bc524e81d-kube-api-access-qhfwl\") pod \"nova-api-3646-account-create-t2hq4\" (UID: \"bb3ca24e-8779-41d4-b8cc-8a6bc524e81d\") " pod="openstack/nova-api-3646-account-create-t2hq4" Oct 13 17:41:15 crc kubenswrapper[4720]: I1013 17:41:15.234233 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-557a-account-create-m5cvn"] Oct 13 17:41:15 crc kubenswrapper[4720]: I1013 17:41:15.269340 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhfwl\" (UniqueName: \"kubernetes.io/projected/bb3ca24e-8779-41d4-b8cc-8a6bc524e81d-kube-api-access-qhfwl\") pod \"nova-api-3646-account-create-t2hq4\" (UID: \"bb3ca24e-8779-41d4-b8cc-8a6bc524e81d\") " pod="openstack/nova-api-3646-account-create-t2hq4" Oct 13 17:41:15 crc kubenswrapper[4720]: I1013 17:41:15.335947 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwd25\" (UniqueName: \"kubernetes.io/projected/0bdf4571-07ae-4672-937b-f445bf5578ad-kube-api-access-kwd25\") pod \"nova-cell0-557a-account-create-m5cvn\" (UID: \"0bdf4571-07ae-4672-937b-f445bf5578ad\") " pod="openstack/nova-cell0-557a-account-create-m5cvn" Oct 13 17:41:15 crc kubenswrapper[4720]: I1013 17:41:15.351994 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwd25\" (UniqueName: \"kubernetes.io/projected/0bdf4571-07ae-4672-937b-f445bf5578ad-kube-api-access-kwd25\") pod \"nova-cell0-557a-account-create-m5cvn\" (UID: \"0bdf4571-07ae-4672-937b-f445bf5578ad\") " pod="openstack/nova-cell0-557a-account-create-m5cvn" Oct 13 17:41:15 crc kubenswrapper[4720]: I1013 17:41:15.385797 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-3646-account-create-t2hq4" Oct 13 17:41:15 crc kubenswrapper[4720]: I1013 17:41:15.426891 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-1187-account-create-kf7dn"] Oct 13 17:41:15 crc kubenswrapper[4720]: I1013 17:41:15.428598 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-1187-account-create-kf7dn" Oct 13 17:41:15 crc kubenswrapper[4720]: I1013 17:41:15.431998 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Oct 13 17:41:15 crc kubenswrapper[4720]: I1013 17:41:15.433437 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-1187-account-create-kf7dn"] Oct 13 17:41:15 crc kubenswrapper[4720]: I1013 17:41:15.438301 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9gmb\" (UniqueName: \"kubernetes.io/projected/fe437be8-b701-42f7-9aae-ba75ccc6be20-kube-api-access-k9gmb\") pod \"nova-cell1-1187-account-create-kf7dn\" (UID: \"fe437be8-b701-42f7-9aae-ba75ccc6be20\") " pod="openstack/nova-cell1-1187-account-create-kf7dn" Oct 13 17:41:15 crc kubenswrapper[4720]: I1013 17:41:15.541818 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9gmb\" (UniqueName: \"kubernetes.io/projected/fe437be8-b701-42f7-9aae-ba75ccc6be20-kube-api-access-k9gmb\") pod \"nova-cell1-1187-account-create-kf7dn\" (UID: \"fe437be8-b701-42f7-9aae-ba75ccc6be20\") " pod="openstack/nova-cell1-1187-account-create-kf7dn" Oct 13 17:41:15 crc kubenswrapper[4720]: I1013 17:41:15.544659 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-557a-account-create-m5cvn" Oct 13 17:41:15 crc kubenswrapper[4720]: I1013 17:41:15.620870 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9gmb\" (UniqueName: \"kubernetes.io/projected/fe437be8-b701-42f7-9aae-ba75ccc6be20-kube-api-access-k9gmb\") pod \"nova-cell1-1187-account-create-kf7dn\" (UID: \"fe437be8-b701-42f7-9aae-ba75ccc6be20\") " pod="openstack/nova-cell1-1187-account-create-kf7dn" Oct 13 17:41:15 crc kubenswrapper[4720]: I1013 17:41:15.670563 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 13 17:41:15 crc kubenswrapper[4720]: I1013 17:41:15.747527 4720 generic.go:334] "Generic (PLEG): container finished" podID="580a1451-b144-42d0-b400-1bf271b17c7b" containerID="4e9e66062e838ba1e4cb13399586ee872541740d0e9087948b6ca26f086d3b4c" exitCode=0 Oct 13 17:41:15 crc kubenswrapper[4720]: I1013 17:41:15.748057 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"580a1451-b144-42d0-b400-1bf271b17c7b","Type":"ContainerDied","Data":"4e9e66062e838ba1e4cb13399586ee872541740d0e9087948b6ca26f086d3b4c"} Oct 13 17:41:15 crc kubenswrapper[4720]: I1013 17:41:15.764007 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c0e5acdb-49fb-4d97-8330-61bb2eeba14f","Type":"ContainerStarted","Data":"663f2b043e406be36e53f35b210bb89be8414904d4ba1d10fb48933f5dfc592f"} Oct 13 17:41:15 crc kubenswrapper[4720]: I1013 17:41:15.767719 4720 generic.go:334] "Generic (PLEG): container finished" podID="972b98ca-7012-422d-8839-a196b6a3b919" containerID="dd2f993ad3263814f13966b6e376fc39bbfd7f89fbbfb6c77b97572e6786fed7" exitCode=0 Oct 13 17:41:15 crc kubenswrapper[4720]: I1013 17:41:15.767783 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"972b98ca-7012-422d-8839-a196b6a3b919","Type":"ContainerDied","Data":"dd2f993ad3263814f13966b6e376fc39bbfd7f89fbbfb6c77b97572e6786fed7"} Oct 13 17:41:15 crc kubenswrapper[4720]: I1013 17:41:15.844045 4720 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openstack/nova-cell1-1187-account-create-kf7dn" Oct 13 17:41:15 crc kubenswrapper[4720]: I1013 17:41:15.948112 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-3646-account-create-t2hq4"] Oct 13 17:41:15 crc kubenswrapper[4720]: I1013 17:41:15.981606 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Oct 13 17:41:16 crc kubenswrapper[4720]: I1013 17:41:16.073106 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-hrltm" Oct 13 17:41:16 crc kubenswrapper[4720]: I1013 17:41:16.073916 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Oct 13 17:41:16 crc kubenswrapper[4720]: I1013 17:41:16.269456 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aee71b90-ce44-4a05-ad3c-dc4f8ccf3d2b-ovsdbserver-sb\") pod \"aee71b90-ce44-4a05-ad3c-dc4f8ccf3d2b\" (UID: \"aee71b90-ce44-4a05-ad3c-dc4f8ccf3d2b\") " Oct 13 17:41:16 crc kubenswrapper[4720]: I1013 17:41:16.269557 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dgpfw\" (UniqueName: \"kubernetes.io/projected/aee71b90-ce44-4a05-ad3c-dc4f8ccf3d2b-kube-api-access-dgpfw\") pod \"aee71b90-ce44-4a05-ad3c-dc4f8ccf3d2b\" (UID: \"aee71b90-ce44-4a05-ad3c-dc4f8ccf3d2b\") " Oct 13 17:41:16 crc kubenswrapper[4720]: I1013 17:41:16.269605 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aee71b90-ce44-4a05-ad3c-dc4f8ccf3d2b-dns-svc\") pod \"aee71b90-ce44-4a05-ad3c-dc4f8ccf3d2b\" (UID: \"aee71b90-ce44-4a05-ad3c-dc4f8ccf3d2b\") " Oct 13 17:41:16 crc kubenswrapper[4720]: I1013 17:41:16.269809 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/aee71b90-ce44-4a05-ad3c-dc4f8ccf3d2b-dns-swift-storage-0\") pod \"aee71b90-ce44-4a05-ad3c-dc4f8ccf3d2b\" (UID: \"aee71b90-ce44-4a05-ad3c-dc4f8ccf3d2b\") " Oct 13 17:41:16 crc kubenswrapper[4720]: I1013 17:41:16.269882 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aee71b90-ce44-4a05-ad3c-dc4f8ccf3d2b-config\") pod \"aee71b90-ce44-4a05-ad3c-dc4f8ccf3d2b\" (UID: \"aee71b90-ce44-4a05-ad3c-dc4f8ccf3d2b\") " Oct 13 17:41:16 crc kubenswrapper[4720]: I1013 17:41:16.269972 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aee71b90-ce44-4a05-ad3c-dc4f8ccf3d2b-ovsdbserver-nb\") pod \"aee71b90-ce44-4a05-ad3c-dc4f8ccf3d2b\" (UID: \"aee71b90-ce44-4a05-ad3c-dc4f8ccf3d2b\") " Oct 13 17:41:16 crc kubenswrapper[4720]: I1013 17:41:16.287277 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aee71b90-ce44-4a05-ad3c-dc4f8ccf3d2b-kube-api-access-dgpfw" (OuterVolumeSpecName: "kube-api-access-dgpfw") pod "aee71b90-ce44-4a05-ad3c-dc4f8ccf3d2b" (UID: "aee71b90-ce44-4a05-ad3c-dc4f8ccf3d2b"). InnerVolumeSpecName "kube-api-access-dgpfw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:41:16 crc kubenswrapper[4720]: I1013 17:41:16.325542 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-557a-account-create-m5cvn"] Oct 13 17:41:16 crc kubenswrapper[4720]: I1013 17:41:16.373119 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dgpfw\" (UniqueName: \"kubernetes.io/projected/aee71b90-ce44-4a05-ad3c-dc4f8ccf3d2b-kube-api-access-dgpfw\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:16 crc kubenswrapper[4720]: I1013 17:41:16.440492 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aee71b90-ce44-4a05-ad3c-dc4f8ccf3d2b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "aee71b90-ce44-4a05-ad3c-dc4f8ccf3d2b" (UID: "aee71b90-ce44-4a05-ad3c-dc4f8ccf3d2b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:41:16 crc kubenswrapper[4720]: I1013 17:41:16.445221 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aee71b90-ce44-4a05-ad3c-dc4f8ccf3d2b-config" (OuterVolumeSpecName: "config") pod "aee71b90-ce44-4a05-ad3c-dc4f8ccf3d2b" (UID: "aee71b90-ce44-4a05-ad3c-dc4f8ccf3d2b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:41:16 crc kubenswrapper[4720]: I1013 17:41:16.455935 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aee71b90-ce44-4a05-ad3c-dc4f8ccf3d2b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "aee71b90-ce44-4a05-ad3c-dc4f8ccf3d2b" (UID: "aee71b90-ce44-4a05-ad3c-dc4f8ccf3d2b"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:41:16 crc kubenswrapper[4720]: I1013 17:41:16.462105 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aee71b90-ce44-4a05-ad3c-dc4f8ccf3d2b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "aee71b90-ce44-4a05-ad3c-dc4f8ccf3d2b" (UID: "aee71b90-ce44-4a05-ad3c-dc4f8ccf3d2b"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:41:16 crc kubenswrapper[4720]: I1013 17:41:16.474829 4720 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aee71b90-ce44-4a05-ad3c-dc4f8ccf3d2b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:16 crc kubenswrapper[4720]: I1013 17:41:16.474854 4720 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aee71b90-ce44-4a05-ad3c-dc4f8ccf3d2b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:16 crc kubenswrapper[4720]: I1013 17:41:16.474863 4720 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/aee71b90-ce44-4a05-ad3c-dc4f8ccf3d2b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:16 crc kubenswrapper[4720]: I1013 17:41:16.474875 4720 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aee71b90-ce44-4a05-ad3c-dc4f8ccf3d2b-config\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:16 crc kubenswrapper[4720]: I1013 17:41:16.500586 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aee71b90-ce44-4a05-ad3c-dc4f8ccf3d2b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "aee71b90-ce44-4a05-ad3c-dc4f8ccf3d2b" (UID: "aee71b90-ce44-4a05-ad3c-dc4f8ccf3d2b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:41:16 crc kubenswrapper[4720]: I1013 17:41:16.575801 4720 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aee71b90-ce44-4a05-ad3c-dc4f8ccf3d2b-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:16 crc kubenswrapper[4720]: I1013 17:41:16.583904 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-1187-account-create-kf7dn"] Oct 13 17:41:16 crc kubenswrapper[4720]: I1013 17:41:16.782999 4720 generic.go:334] "Generic (PLEG): container finished" podID="580a1451-b144-42d0-b400-1bf271b17c7b" containerID="25b97bc04fd8cade7f4ee8ae6c79a7f030b03ca73cbe965f2d2fbef2d199284a" exitCode=0 Oct 13 17:41:16 crc kubenswrapper[4720]: I1013 17:41:16.783346 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"580a1451-b144-42d0-b400-1bf271b17c7b","Type":"ContainerDied","Data":"25b97bc04fd8cade7f4ee8ae6c79a7f030b03ca73cbe965f2d2fbef2d199284a"} Oct 13 17:41:16 crc kubenswrapper[4720]: I1013 17:41:16.783513 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 13 17:41:16 crc kubenswrapper[4720]: I1013 17:41:16.786316 4720 generic.go:334] "Generic (PLEG): container finished" podID="9a12c67e-6055-4594-850a-61ac731d7a8d" containerID="929b941fbabd15a3a11f8e9db999ac85bec2aac99787b7835b38aecf06b96d9b" exitCode=0 Oct 13 17:41:16 crc kubenswrapper[4720]: I1013 17:41:16.786352 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6579788cd4-czbf2" event={"ID":"9a12c67e-6055-4594-850a-61ac731d7a8d","Type":"ContainerDied","Data":"929b941fbabd15a3a11f8e9db999ac85bec2aac99787b7835b38aecf06b96d9b"} Oct 13 17:41:16 crc kubenswrapper[4720]: I1013 17:41:16.786369 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6579788cd4-czbf2" event={"ID":"9a12c67e-6055-4594-850a-61ac731d7a8d","Type":"ContainerDied","Data":"a5648ae8657c25e667791994f276dbe90d8850aab8e2af589687e69887b37417"} Oct 13 17:41:16 crc kubenswrapper[4720]: I1013 17:41:16.786379 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5648ae8657c25e667791994f276dbe90d8850aab8e2af589687e69887b37417" Oct 13 17:41:16 crc kubenswrapper[4720]: I1013 17:41:16.789777 4720 generic.go:334] "Generic (PLEG): container finished" podID="bb3ca24e-8779-41d4-b8cc-8a6bc524e81d" containerID="bc4b5f0abe9e521c901aaaee076f03cd7514c8c7f4dc44f1a017d8ca5f892c3c" exitCode=0 Oct 13 17:41:16 crc kubenswrapper[4720]: I1013 17:41:16.789840 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-3646-account-create-t2hq4" event={"ID":"bb3ca24e-8779-41d4-b8cc-8a6bc524e81d","Type":"ContainerDied","Data":"bc4b5f0abe9e521c901aaaee076f03cd7514c8c7f4dc44f1a017d8ca5f892c3c"} Oct 13 17:41:16 crc kubenswrapper[4720]: I1013 17:41:16.789859 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-3646-account-create-t2hq4" event={"ID":"bb3ca24e-8779-41d4-b8cc-8a6bc524e81d","Type":"ContainerStarted","Data":"07a1ca6ad9b8f965f99f5f5ef3939df88762abe60322faa77111b6a66ba7978d"} Oct 13 17:41:16 crc kubenswrapper[4720]: I1013 17:41:16.795096 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-557a-account-create-m5cvn" event={"ID":"0bdf4571-07ae-4672-937b-f445bf5578ad","Type":"ContainerStarted","Data":"95d606e07100488e2fcdb7859e92e51706fc679e63c0027393b1071bf62a6b6f"} Oct 13 17:41:16 crc kubenswrapper[4720]: I1013 17:41:16.800511 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-hrltm" event={"ID":"aee71b90-ce44-4a05-ad3c-dc4f8ccf3d2b","Type":"ContainerDied","Data":"66fb3a531bf25901fcd32716ff4c61ba56c11fb7412b6510fe681d399c2d2b9f"} Oct 13 17:41:16 crc kubenswrapper[4720]: I1013 17:41:16.800569 4720 scope.go:117] "RemoveContainer" containerID="f0ecab07f82314444b153181b1e3feb280a903152b88ad306f18918863152e52" Oct 13 17:41:16 crc kubenswrapper[4720]: I1013 17:41:16.800530 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-hrltm" Oct 13 17:41:16 crc kubenswrapper[4720]: I1013 17:41:16.802148 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-1187-account-create-kf7dn" event={"ID":"fe437be8-b701-42f7-9aae-ba75ccc6be20","Type":"ContainerStarted","Data":"7ac0e8972b29a0926533710f6456b298f27bb5006fc4505a7aecc4b83b2b06e2"} Oct 13 17:41:16 crc kubenswrapper[4720]: I1013 17:41:16.802606 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6579788cd4-czbf2" Oct 13 17:41:16 crc kubenswrapper[4720]: I1013 17:41:16.804416 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"972b98ca-7012-422d-8839-a196b6a3b919","Type":"ContainerDied","Data":"1b16ae43e2c7123fd632986ea5110d58609a9175e2cef7c8826b4cf23c18a495"} Oct 13 17:41:16 crc kubenswrapper[4720]: I1013 17:41:16.804489 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 13 17:41:16 crc kubenswrapper[4720]: I1013 17:41:16.881906 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9a12c67e-6055-4594-850a-61ac731d7a8d-config\") pod \"9a12c67e-6055-4594-850a-61ac731d7a8d\" (UID: \"9a12c67e-6055-4594-850a-61ac731d7a8d\") " Oct 13 17:41:16 crc kubenswrapper[4720]: I1013 17:41:16.881970 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a12c67e-6055-4594-850a-61ac731d7a8d-ovndb-tls-certs\") pod \"9a12c67e-6055-4594-850a-61ac731d7a8d\" (UID: \"9a12c67e-6055-4594-850a-61ac731d7a8d\") " Oct 13 17:41:16 crc kubenswrapper[4720]: I1013 17:41:16.882003 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/972b98ca-7012-422d-8839-a196b6a3b919-httpd-run\") pod \"972b98ca-7012-422d-8839-a196b6a3b919\" (UID: \"972b98ca-7012-422d-8839-a196b6a3b919\") " Oct 13 17:41:16 crc kubenswrapper[4720]: I1013 17:41:16.882043 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/972b98ca-7012-422d-8839-a196b6a3b919-combined-ca-bundle\") pod \"972b98ca-7012-422d-8839-a196b6a3b919\" (UID: \"972b98ca-7012-422d-8839-a196b6a3b919\") " Oct 13 17:41:16 crc kubenswrapper[4720]: I1013 17:41:16.882088 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/972b98ca-7012-422d-8839-a196b6a3b919-scripts\") pod \"972b98ca-7012-422d-8839-a196b6a3b919\" (UID: \"972b98ca-7012-422d-8839-a196b6a3b919\") " Oct 13 17:41:16 crc kubenswrapper[4720]: I1013 17:41:16.882121 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/972b98ca-7012-422d-8839-a196b6a3b919-logs\") pod \"972b98ca-7012-422d-8839-a196b6a3b919\" (UID: \"972b98ca-7012-422d-8839-a196b6a3b919\") " Oct 13 17:41:16 crc kubenswrapper[4720]: I1013 17:41:16.882160 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9a12c67e-6055-4594-850a-61ac731d7a8d-httpd-config\") pod \"9a12c67e-6055-4594-850a-61ac731d7a8d\" (UID: \"9a12c67e-6055-4594-850a-61ac731d7a8d\") " Oct 13 17:41:16 crc kubenswrapper[4720]: I1013 17:41:16.882202 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/972b98ca-7012-422d-8839-a196b6a3b919-config-data\") pod \"972b98ca-7012-422d-8839-a196b6a3b919\" (UID: \"972b98ca-7012-422d-8839-a196b6a3b919\") " Oct 13 17:41:16 crc kubenswrapper[4720]: I1013 17:41:16.882232 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/972b98ca-7012-422d-8839-a196b6a3b919-public-tls-certs\") pod \"972b98ca-7012-422d-8839-a196b6a3b919\" (UID: \"972b98ca-7012-422d-8839-a196b6a3b919\") " Oct 13 17:41:16 crc kubenswrapper[4720]: I1013 17:41:16.882274 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmw6l\" (UniqueName: \"kubernetes.io/projected/972b98ca-7012-422d-8839-a196b6a3b919-kube-api-access-pmw6l\") pod \"972b98ca-7012-422d-8839-a196b6a3b919\" (UID: \"972b98ca-7012-422d-8839-a196b6a3b919\") " Oct 13 17:41:16 crc kubenswrapper[4720]: I1013 17:41:16.882296 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z86hb\" (UniqueName: \"kubernetes.io/projected/9a12c67e-6055-4594-850a-61ac731d7a8d-kube-api-access-z86hb\") pod \"9a12c67e-6055-4594-850a-61ac731d7a8d\" (UID: \"9a12c67e-6055-4594-850a-61ac731d7a8d\") " Oct 13 17:41:16 crc kubenswrapper[4720]: I1013 17:41:16.882327 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"972b98ca-7012-422d-8839-a196b6a3b919\" (UID: \"972b98ca-7012-422d-8839-a196b6a3b919\") " Oct 13 17:41:16 crc kubenswrapper[4720]: I1013 17:41:16.882352 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a12c67e-6055-4594-850a-61ac731d7a8d-combined-ca-bundle\") pod \"9a12c67e-6055-4594-850a-61ac731d7a8d\" (UID: \"9a12c67e-6055-4594-850a-61ac731d7a8d\") " Oct 13 17:41:16 crc kubenswrapper[4720]: I1013 17:41:16.883312 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/972b98ca-7012-422d-8839-a196b6a3b919-logs" (OuterVolumeSpecName: "logs") pod "972b98ca-7012-422d-8839-a196b6a3b919" (UID: "972b98ca-7012-422d-8839-a196b6a3b919"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 17:41:16 crc kubenswrapper[4720]: I1013 17:41:16.888846 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/972b98ca-7012-422d-8839-a196b6a3b919-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "972b98ca-7012-422d-8839-a196b6a3b919" (UID: "972b98ca-7012-422d-8839-a196b6a3b919"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 17:41:16 crc kubenswrapper[4720]: I1013 17:41:16.900726 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a12c67e-6055-4594-850a-61ac731d7a8d-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "9a12c67e-6055-4594-850a-61ac731d7a8d" (UID: "9a12c67e-6055-4594-850a-61ac731d7a8d"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:41:16 crc kubenswrapper[4720]: I1013 17:41:16.900898 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a12c67e-6055-4594-850a-61ac731d7a8d-kube-api-access-z86hb" (OuterVolumeSpecName: "kube-api-access-z86hb") pod "9a12c67e-6055-4594-850a-61ac731d7a8d" (UID: "9a12c67e-6055-4594-850a-61ac731d7a8d"). InnerVolumeSpecName "kube-api-access-z86hb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:41:16 crc kubenswrapper[4720]: I1013 17:41:16.904052 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/972b98ca-7012-422d-8839-a196b6a3b919-scripts" (OuterVolumeSpecName: "scripts") pod "972b98ca-7012-422d-8839-a196b6a3b919" (UID: "972b98ca-7012-422d-8839-a196b6a3b919"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:41:16 crc kubenswrapper[4720]: I1013 17:41:16.909537 4720 scope.go:117] "RemoveContainer" containerID="689014f2818eced872e2cb1a5d96a7a6d90597398b1856959b571927add9b4de" Oct 13 17:41:16 crc kubenswrapper[4720]: I1013 17:41:16.911417 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/972b98ca-7012-422d-8839-a196b6a3b919-kube-api-access-pmw6l" (OuterVolumeSpecName: "kube-api-access-pmw6l") pod "972b98ca-7012-422d-8839-a196b6a3b919" (UID: "972b98ca-7012-422d-8839-a196b6a3b919"). InnerVolumeSpecName "kube-api-access-pmw6l". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:41:16 crc kubenswrapper[4720]: I1013 17:41:16.939434 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "972b98ca-7012-422d-8839-a196b6a3b919" (UID: "972b98ca-7012-422d-8839-a196b6a3b919"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 13 17:41:16 crc kubenswrapper[4720]: I1013 17:41:16.984030 4720 scope.go:117] "RemoveContainer" containerID="dd2f993ad3263814f13966b6e376fc39bbfd7f89fbbfb6c77b97572e6786fed7" Oct 13 17:41:16 crc kubenswrapper[4720]: I1013 17:41:16.995432 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-hrltm"] Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.020085 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-hrltm"] Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.038814 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/972b98ca-7012-422d-8839-a196b6a3b919-scripts\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.038857 4720 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/972b98ca-7012-422d-8839-a196b6a3b919-logs\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.038867 4720 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9a12c67e-6055-4594-850a-61ac731d7a8d-httpd-config\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.038877 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z86hb\" (UniqueName: \"kubernetes.io/projected/9a12c67e-6055-4594-850a-61ac731d7a8d-kube-api-access-z86hb\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.038886 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmw6l\" (UniqueName: \"kubernetes.io/projected/972b98ca-7012-422d-8839-a196b6a3b919-kube-api-access-pmw6l\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.038923 4720 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.038932 4720 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/972b98ca-7012-422d-8839-a196b6a3b919-httpd-run\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.109409 4720 scope.go:117] "RemoveContainer" containerID="cf8d040f3b31142b2e3557ad717515b1227bb986bdfd43dc7fcc2ab6a3a20d90" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.205377 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aee71b90-ce44-4a05-ad3c-dc4f8ccf3d2b" path="/var/lib/kubelet/pods/aee71b90-ce44-4a05-ad3c-dc4f8ccf3d2b/volumes" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.241049 4720 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.243032 4720 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.336952 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/972b98ca-7012-422d-8839-a196b6a3b919-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "972b98ca-7012-422d-8839-a196b6a3b919" (UID: "972b98ca-7012-422d-8839-a196b6a3b919"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.346025 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/972b98ca-7012-422d-8839-a196b6a3b919-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.387169 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a12c67e-6055-4594-850a-61ac731d7a8d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9a12c67e-6055-4594-850a-61ac731d7a8d" (UID: "9a12c67e-6055-4594-850a-61ac731d7a8d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.396663 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/972b98ca-7012-422d-8839-a196b6a3b919-config-data" (OuterVolumeSpecName: "config-data") pod "972b98ca-7012-422d-8839-a196b6a3b919" (UID: "972b98ca-7012-422d-8839-a196b6a3b919"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.399287 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a12c67e-6055-4594-850a-61ac731d7a8d-config" (OuterVolumeSpecName: "config") pod "9a12c67e-6055-4594-850a-61ac731d7a8d" (UID: "9a12c67e-6055-4594-850a-61ac731d7a8d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.423832 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a12c67e-6055-4594-850a-61ac731d7a8d-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "9a12c67e-6055-4594-850a-61ac731d7a8d" (UID: "9a12c67e-6055-4594-850a-61ac731d7a8d"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.424469 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/972b98ca-7012-422d-8839-a196b6a3b919-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "972b98ca-7012-422d-8839-a196b6a3b919" (UID: "972b98ca-7012-422d-8839-a196b6a3b919"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.447477 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/972b98ca-7012-422d-8839-a196b6a3b919-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.447516 4720 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/972b98ca-7012-422d-8839-a196b6a3b919-public-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.447528 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a12c67e-6055-4594-850a-61ac731d7a8d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.447537 4720 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/9a12c67e-6055-4594-850a-61ac731d7a8d-config\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.447548 4720 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a12c67e-6055-4594-850a-61ac731d7a8d-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.635418 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.641263 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.753496 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/580a1451-b144-42d0-b400-1bf271b17c7b-combined-ca-bundle\") pod \"580a1451-b144-42d0-b400-1bf271b17c7b\" (UID: \"580a1451-b144-42d0-b400-1bf271b17c7b\") " Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.753544 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/580a1451-b144-42d0-b400-1bf271b17c7b-config-data-custom\") pod \"580a1451-b144-42d0-b400-1bf271b17c7b\" (UID: \"580a1451-b144-42d0-b400-1bf271b17c7b\") " Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.753601 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/580a1451-b144-42d0-b400-1bf271b17c7b-etc-machine-id\") pod \"580a1451-b144-42d0-b400-1bf271b17c7b\" (UID: \"580a1451-b144-42d0-b400-1bf271b17c7b\") " Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.753626 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1742c95-4906-4545-8e62-38903c72b168-scripts\") pod \"f1742c95-4906-4545-8e62-38903c72b168\" (UID: \"f1742c95-4906-4545-8e62-38903c72b168\") " Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.753656 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f8nwk\" (UniqueName: \"kubernetes.io/projected/f1742c95-4906-4545-8e62-38903c72b168-kube-api-access-f8nwk\") pod \"f1742c95-4906-4545-8e62-38903c72b168\" (UID: \"f1742c95-4906-4545-8e62-38903c72b168\") " Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.753680 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gsg4x\" (UniqueName: \"kubernetes.io/projected/580a1451-b144-42d0-b400-1bf271b17c7b-kube-api-access-gsg4x\") pod \"580a1451-b144-42d0-b400-1bf271b17c7b\" (UID: \"580a1451-b144-42d0-b400-1bf271b17c7b\") " Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.753747 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/580a1451-b144-42d0-b400-1bf271b17c7b-scripts\") pod \"580a1451-b144-42d0-b400-1bf271b17c7b\" (UID: \"580a1451-b144-42d0-b400-1bf271b17c7b\") " Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.753785 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1742c95-4906-4545-8e62-38903c72b168-internal-tls-certs\") pod \"f1742c95-4906-4545-8e62-38903c72b168\" (UID: \"f1742c95-4906-4545-8e62-38903c72b168\") " Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.753802 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1742c95-4906-4545-8e62-38903c72b168-config-data\") pod \"f1742c95-4906-4545-8e62-38903c72b168\" (UID: \"f1742c95-4906-4545-8e62-38903c72b168\") " Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.753819 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1742c95-4906-4545-8e62-38903c72b168-combined-ca-bundle\") pod 
\"f1742c95-4906-4545-8e62-38903c72b168\" (UID: \"f1742c95-4906-4545-8e62-38903c72b168\") " Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.753897 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f1742c95-4906-4545-8e62-38903c72b168-httpd-run\") pod \"f1742c95-4906-4545-8e62-38903c72b168\" (UID: \"f1742c95-4906-4545-8e62-38903c72b168\") " Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.753912 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/580a1451-b144-42d0-b400-1bf271b17c7b-config-data\") pod \"580a1451-b144-42d0-b400-1bf271b17c7b\" (UID: \"580a1451-b144-42d0-b400-1bf271b17c7b\") " Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.753940 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1742c95-4906-4545-8e62-38903c72b168-logs\") pod \"f1742c95-4906-4545-8e62-38903c72b168\" (UID: \"f1742c95-4906-4545-8e62-38903c72b168\") " Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.753976 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"f1742c95-4906-4545-8e62-38903c72b168\" (UID: \"f1742c95-4906-4545-8e62-38903c72b168\") " Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.754592 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/580a1451-b144-42d0-b400-1bf271b17c7b-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "580a1451-b144-42d0-b400-1bf271b17c7b" (UID: "580a1451-b144-42d0-b400-1bf271b17c7b"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.756320 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1742c95-4906-4545-8e62-38903c72b168-logs" (OuterVolumeSpecName: "logs") pod "f1742c95-4906-4545-8e62-38903c72b168" (UID: "f1742c95-4906-4545-8e62-38903c72b168"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.761528 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.761937 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1742c95-4906-4545-8e62-38903c72b168-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "f1742c95-4906-4545-8e62-38903c72b168" (UID: "f1742c95-4906-4545-8e62-38903c72b168"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.769117 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/580a1451-b144-42d0-b400-1bf271b17c7b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "580a1451-b144-42d0-b400-1bf271b17c7b" (UID: "580a1451-b144-42d0-b400-1bf271b17c7b"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.769576 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/580a1451-b144-42d0-b400-1bf271b17c7b-scripts" (OuterVolumeSpecName: "scripts") pod "580a1451-b144-42d0-b400-1bf271b17c7b" (UID: "580a1451-b144-42d0-b400-1bf271b17c7b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.770585 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "f1742c95-4906-4545-8e62-38903c72b168" (UID: "f1742c95-4906-4545-8e62-38903c72b168"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.772569 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/580a1451-b144-42d0-b400-1bf271b17c7b-kube-api-access-gsg4x" (OuterVolumeSpecName: "kube-api-access-gsg4x") pod "580a1451-b144-42d0-b400-1bf271b17c7b" (UID: "580a1451-b144-42d0-b400-1bf271b17c7b"). InnerVolumeSpecName "kube-api-access-gsg4x". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.773971 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.775432 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1742c95-4906-4545-8e62-38903c72b168-kube-api-access-f8nwk" (OuterVolumeSpecName: "kube-api-access-f8nwk") pod "f1742c95-4906-4545-8e62-38903c72b168" (UID: "f1742c95-4906-4545-8e62-38903c72b168"). InnerVolumeSpecName "kube-api-access-f8nwk". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.780059 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1742c95-4906-4545-8e62-38903c72b168-scripts" (OuterVolumeSpecName: "scripts") pod "f1742c95-4906-4545-8e62-38903c72b168" (UID: "f1742c95-4906-4545-8e62-38903c72b168"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.786968 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Oct 13 17:41:17 crc kubenswrapper[4720]: E1013 17:41:17.787390 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aee71b90-ce44-4a05-ad3c-dc4f8ccf3d2b" containerName="dnsmasq-dns" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.787402 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="aee71b90-ce44-4a05-ad3c-dc4f8ccf3d2b" containerName="dnsmasq-dns" Oct 13 17:41:17 crc kubenswrapper[4720]: E1013 17:41:17.787413 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a12c67e-6055-4594-850a-61ac731d7a8d" containerName="neutron-httpd" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.787418 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a12c67e-6055-4594-850a-61ac731d7a8d" containerName="neutron-httpd" Oct 13 17:41:17 crc kubenswrapper[4720]: E1013 17:41:17.787430 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1742c95-4906-4545-8e62-38903c72b168" containerName="glance-log" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.787436 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1742c95-4906-4545-8e62-38903c72b168" containerName="glance-log" Oct 13 17:41:17 crc kubenswrapper[4720]: E1013 17:41:17.787448 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="580a1451-b144-42d0-b400-1bf271b17c7b" containerName="cinder-scheduler" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.787453 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="580a1451-b144-42d0-b400-1bf271b17c7b" containerName="cinder-scheduler" Oct 13 17:41:17 crc kubenswrapper[4720]: E1013 17:41:17.787471 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a12c67e-6055-4594-850a-61ac731d7a8d" containerName="neutron-api" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.787477 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a12c67e-6055-4594-850a-61ac731d7a8d" containerName="neutron-api" Oct 13 17:41:17 crc kubenswrapper[4720]: E1013 17:41:17.787489 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="580a1451-b144-42d0-b400-1bf271b17c7b" containerName="probe" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.787494 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="580a1451-b144-42d0-b400-1bf271b17c7b" containerName="probe" Oct 13 17:41:17 crc kubenswrapper[4720]: E1013 17:41:17.787517 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1742c95-4906-4545-8e62-38903c72b168" containerName="glance-httpd" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.787522 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1742c95-4906-4545-8e62-38903c72b168" containerName="glance-httpd" Oct 13 17:41:17 crc kubenswrapper[4720]: E1013 17:41:17.787533 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aee71b90-ce44-4a05-ad3c-dc4f8ccf3d2b" containerName="init" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.787538 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="aee71b90-ce44-4a05-ad3c-dc4f8ccf3d2b" containerName="init" Oct 13 17:41:17 crc kubenswrapper[4720]: E1013 17:41:17.787551 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="972b98ca-7012-422d-8839-a196b6a3b919" containerName="glance-log" Oct 13 17:41:17 crc 
kubenswrapper[4720]: I1013 17:41:17.787557 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="972b98ca-7012-422d-8839-a196b6a3b919" containerName="glance-log" Oct 13 17:41:17 crc kubenswrapper[4720]: E1013 17:41:17.787567 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="972b98ca-7012-422d-8839-a196b6a3b919" containerName="glance-httpd" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.787573 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="972b98ca-7012-422d-8839-a196b6a3b919" containerName="glance-httpd" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.787749 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="972b98ca-7012-422d-8839-a196b6a3b919" containerName="glance-httpd" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.787760 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="580a1451-b144-42d0-b400-1bf271b17c7b" containerName="probe" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.787794 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="972b98ca-7012-422d-8839-a196b6a3b919" containerName="glance-log" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.787803 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="580a1451-b144-42d0-b400-1bf271b17c7b" containerName="cinder-scheduler" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.787811 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1742c95-4906-4545-8e62-38903c72b168" containerName="glance-log" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.787824 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a12c67e-6055-4594-850a-61ac731d7a8d" containerName="neutron-api" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.787831 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1742c95-4906-4545-8e62-38903c72b168" containerName="glance-httpd" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.787841 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="aee71b90-ce44-4a05-ad3c-dc4f8ccf3d2b" containerName="dnsmasq-dns" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.787849 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a12c67e-6055-4594-850a-61ac731d7a8d" containerName="neutron-httpd" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.788809 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.792576 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.795791 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.796968 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.808132 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1742c95-4906-4545-8e62-38903c72b168-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f1742c95-4906-4545-8e62-38903c72b168" (UID: "f1742c95-4906-4545-8e62-38903c72b168"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.844159 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"580a1451-b144-42d0-b400-1bf271b17c7b","Type":"ContainerDied","Data":"6b5c05b4a9a4fa71f2882d58ae3c17813093fe2f0f470e100ae8037fa3361854"} Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.844257 4720 scope.go:117] "RemoveContainer" containerID="4e9e66062e838ba1e4cb13399586ee872541740d0e9087948b6ca26f086d3b4c" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.844372 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.844602 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/580a1451-b144-42d0-b400-1bf271b17c7b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "580a1451-b144-42d0-b400-1bf271b17c7b" (UID: "580a1451-b144-42d0-b400-1bf271b17c7b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.848963 4720 generic.go:334] "Generic (PLEG): container finished" podID="f1742c95-4906-4545-8e62-38903c72b168" containerID="08fb538daabd98d65e3554b1d509b5149739a281779328f80b240a2b7df44866" exitCode=0 Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.849012 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f1742c95-4906-4545-8e62-38903c72b168","Type":"ContainerDied","Data":"08fb538daabd98d65e3554b1d509b5149739a281779328f80b240a2b7df44866"} Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.849051 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f1742c95-4906-4545-8e62-38903c72b168","Type":"ContainerDied","Data":"9c5e3d5537d68d093fa80b7305900cbac5ce46c2bdf8a7147c033bd39365de58"} Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.849116 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.855523 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40ee2ba6-d58a-4ee8-b28e-6aeea90de7f6-config-data\") pod \"glance-default-external-api-0\" (UID: \"40ee2ba6-d58a-4ee8-b28e-6aeea90de7f6\") " pod="openstack/glance-default-external-api-0" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.855564 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tc8fj\" (UniqueName: \"kubernetes.io/projected/40ee2ba6-d58a-4ee8-b28e-6aeea90de7f6-kube-api-access-tc8fj\") pod \"glance-default-external-api-0\" (UID: \"40ee2ba6-d58a-4ee8-b28e-6aeea90de7f6\") " pod="openstack/glance-default-external-api-0" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.855583 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/40ee2ba6-d58a-4ee8-b28e-6aeea90de7f6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"40ee2ba6-d58a-4ee8-b28e-6aeea90de7f6\") " pod="openstack/glance-default-external-api-0" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.855639 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/40ee2ba6-d58a-4ee8-b28e-6aeea90de7f6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"40ee2ba6-d58a-4ee8-b28e-6aeea90de7f6\") " pod="openstack/glance-default-external-api-0" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.855659 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"40ee2ba6-d58a-4ee8-b28e-6aeea90de7f6\") " pod="openstack/glance-default-external-api-0" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.855684 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40ee2ba6-d58a-4ee8-b28e-6aeea90de7f6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"40ee2ba6-d58a-4ee8-b28e-6aeea90de7f6\") " pod="openstack/glance-default-external-api-0" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.855710 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40ee2ba6-d58a-4ee8-b28e-6aeea90de7f6-logs\") pod \"glance-default-external-api-0\" (UID: \"40ee2ba6-d58a-4ee8-b28e-6aeea90de7f6\") " pod="openstack/glance-default-external-api-0" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.855741 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40ee2ba6-d58a-4ee8-b28e-6aeea90de7f6-scripts\") pod \"glance-default-external-api-0\" (UID: \"40ee2ba6-d58a-4ee8-b28e-6aeea90de7f6\") " pod="openstack/glance-default-external-api-0" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.855797 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1742c95-4906-4545-8e62-38903c72b168-combined-ca-bundle\") on node \"crc\" 
DevicePath \"\"" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.855809 4720 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f1742c95-4906-4545-8e62-38903c72b168-httpd-run\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.855817 4720 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1742c95-4906-4545-8e62-38903c72b168-logs\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.855834 4720 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.855845 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/580a1451-b144-42d0-b400-1bf271b17c7b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.855853 4720 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/580a1451-b144-42d0-b400-1bf271b17c7b-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.855863 4720 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/580a1451-b144-42d0-b400-1bf271b17c7b-etc-machine-id\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.855870 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1742c95-4906-4545-8e62-38903c72b168-scripts\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.855878 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f8nwk\" (UniqueName: \"kubernetes.io/projected/f1742c95-4906-4545-8e62-38903c72b168-kube-api-access-f8nwk\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.855899 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gsg4x\" (UniqueName: \"kubernetes.io/projected/580a1451-b144-42d0-b400-1bf271b17c7b-kube-api-access-gsg4x\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.855907 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/580a1451-b144-42d0-b400-1bf271b17c7b-scripts\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.856466 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1742c95-4906-4545-8e62-38903c72b168-config-data" (OuterVolumeSpecName: "config-data") pod "f1742c95-4906-4545-8e62-38903c72b168" (UID: "f1742c95-4906-4545-8e62-38903c72b168"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.857166 4720 generic.go:334] "Generic (PLEG): container finished" podID="fe437be8-b701-42f7-9aae-ba75ccc6be20" containerID="c0ea5f1d207aed801988ac349addd0169004a654dd070fa02ba0427a5cd4e026" exitCode=0 Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.857248 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-1187-account-create-kf7dn" event={"ID":"fe437be8-b701-42f7-9aae-ba75ccc6be20","Type":"ContainerDied","Data":"c0ea5f1d207aed801988ac349addd0169004a654dd070fa02ba0427a5cd4e026"} Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.862097 4720 generic.go:334] "Generic (PLEG): container finished" podID="0bdf4571-07ae-4672-937b-f445bf5578ad" containerID="15827525cf8ca4db758bff78f963fccc5c80a428a61c37b4476c82959dbb9633" exitCode=0 Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.862158 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-557a-account-create-m5cvn" event={"ID":"0bdf4571-07ae-4672-937b-f445bf5578ad","Type":"ContainerDied","Data":"15827525cf8ca4db758bff78f963fccc5c80a428a61c37b4476c82959dbb9633"} Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.871555 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1742c95-4906-4545-8e62-38903c72b168-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "f1742c95-4906-4545-8e62-38903c72b168" (UID: "f1742c95-4906-4545-8e62-38903c72b168"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.876504 4720 scope.go:117] "RemoveContainer" containerID="25b97bc04fd8cade7f4ee8ae6c79a7f030b03ca73cbe965f2d2fbef2d199284a" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.879373 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c0e5acdb-49fb-4d97-8330-61bb2eeba14f","Type":"ContainerStarted","Data":"6bffc98ff644494f7e39f260402a70dad002a9207c91de28c2144cee8246cb10"} Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.889543 4720 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.900740 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6579788cd4-czbf2" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.904348 4720 scope.go:117] "RemoveContainer" containerID="08fb538daabd98d65e3554b1d509b5149739a281779328f80b240a2b7df44866" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.942265 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6579788cd4-czbf2"] Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.957778 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/40ee2ba6-d58a-4ee8-b28e-6aeea90de7f6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"40ee2ba6-d58a-4ee8-b28e-6aeea90de7f6\") " pod="openstack/glance-default-external-api-0" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.957863 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/40ee2ba6-d58a-4ee8-b28e-6aeea90de7f6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"40ee2ba6-d58a-4ee8-b28e-6aeea90de7f6\") " pod="openstack/glance-default-external-api-0" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.957890 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"40ee2ba6-d58a-4ee8-b28e-6aeea90de7f6\") " pod="openstack/glance-default-external-api-0" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.957918 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40ee2ba6-d58a-4ee8-b28e-6aeea90de7f6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"40ee2ba6-d58a-4ee8-b28e-6aeea90de7f6\") " pod="openstack/glance-default-external-api-0" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.957946 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40ee2ba6-d58a-4ee8-b28e-6aeea90de7f6-logs\") pod \"glance-default-external-api-0\" (UID: \"40ee2ba6-d58a-4ee8-b28e-6aeea90de7f6\") " pod="openstack/glance-default-external-api-0" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.957979 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40ee2ba6-d58a-4ee8-b28e-6aeea90de7f6-scripts\") pod \"glance-default-external-api-0\" (UID: \"40ee2ba6-d58a-4ee8-b28e-6aeea90de7f6\") " pod="openstack/glance-default-external-api-0" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.958013 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40ee2ba6-d58a-4ee8-b28e-6aeea90de7f6-config-data\") pod \"glance-default-external-api-0\" (UID: \"40ee2ba6-d58a-4ee8-b28e-6aeea90de7f6\") " pod="openstack/glance-default-external-api-0" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.958039 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tc8fj\" (UniqueName: \"kubernetes.io/projected/40ee2ba6-d58a-4ee8-b28e-6aeea90de7f6-kube-api-access-tc8fj\") pod \"glance-default-external-api-0\" (UID: \"40ee2ba6-d58a-4ee8-b28e-6aeea90de7f6\") " pod="openstack/glance-default-external-api-0" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.958083 4720 
reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1742c95-4906-4545-8e62-38903c72b168-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.958095 4720 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1742c95-4906-4545-8e62-38903c72b168-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.958105 4720 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.958743 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/40ee2ba6-d58a-4ee8-b28e-6aeea90de7f6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"40ee2ba6-d58a-4ee8-b28e-6aeea90de7f6\") " pod="openstack/glance-default-external-api-0" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.958976 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40ee2ba6-d58a-4ee8-b28e-6aeea90de7f6-logs\") pod \"glance-default-external-api-0\" (UID: \"40ee2ba6-d58a-4ee8-b28e-6aeea90de7f6\") " pod="openstack/glance-default-external-api-0" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.962078 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40ee2ba6-d58a-4ee8-b28e-6aeea90de7f6-scripts\") pod \"glance-default-external-api-0\" (UID: \"40ee2ba6-d58a-4ee8-b28e-6aeea90de7f6\") " pod="openstack/glance-default-external-api-0" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.962078 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40ee2ba6-d58a-4ee8-b28e-6aeea90de7f6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"40ee2ba6-d58a-4ee8-b28e-6aeea90de7f6\") " pod="openstack/glance-default-external-api-0" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.962366 4720 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"40ee2ba6-d58a-4ee8-b28e-6aeea90de7f6\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-external-api-0" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.963030 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6579788cd4-czbf2"] Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.963777 4720 scope.go:117] "RemoveContainer" containerID="11524c98eb61e0537ae1d6d91addce7769b6a1193e5f00f1994585c7e37277f3" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.963904 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/580a1451-b144-42d0-b400-1bf271b17c7b-config-data" (OuterVolumeSpecName: "config-data") pod "580a1451-b144-42d0-b400-1bf271b17c7b" (UID: "580a1451-b144-42d0-b400-1bf271b17c7b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.965005 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/40ee2ba6-d58a-4ee8-b28e-6aeea90de7f6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"40ee2ba6-d58a-4ee8-b28e-6aeea90de7f6\") " pod="openstack/glance-default-external-api-0" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.968035 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40ee2ba6-d58a-4ee8-b28e-6aeea90de7f6-config-data\") pod \"glance-default-external-api-0\" (UID: \"40ee2ba6-d58a-4ee8-b28e-6aeea90de7f6\") " pod="openstack/glance-default-external-api-0" Oct 13 17:41:17 crc kubenswrapper[4720]: I1013 17:41:17.978217 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tc8fj\" (UniqueName: \"kubernetes.io/projected/40ee2ba6-d58a-4ee8-b28e-6aeea90de7f6-kube-api-access-tc8fj\") pod \"glance-default-external-api-0\" (UID: \"40ee2ba6-d58a-4ee8-b28e-6aeea90de7f6\") " pod="openstack/glance-default-external-api-0" Oct 13 17:41:18 crc kubenswrapper[4720]: I1013 17:41:18.033554 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"40ee2ba6-d58a-4ee8-b28e-6aeea90de7f6\") " pod="openstack/glance-default-external-api-0" Oct 13 17:41:18 crc kubenswrapper[4720]: I1013 17:41:18.040515 4720 scope.go:117] "RemoveContainer" containerID="08fb538daabd98d65e3554b1d509b5149739a281779328f80b240a2b7df44866" Oct 13 17:41:18 crc kubenswrapper[4720]: E1013 17:41:18.041236 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08fb538daabd98d65e3554b1d509b5149739a281779328f80b240a2b7df44866\": container with ID starting with 08fb538daabd98d65e3554b1d509b5149739a281779328f80b240a2b7df44866 not found: ID does not exist" containerID="08fb538daabd98d65e3554b1d509b5149739a281779328f80b240a2b7df44866" Oct 13 17:41:18 crc kubenswrapper[4720]: I1013 17:41:18.041271 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08fb538daabd98d65e3554b1d509b5149739a281779328f80b240a2b7df44866"} err="failed to get container status \"08fb538daabd98d65e3554b1d509b5149739a281779328f80b240a2b7df44866\": rpc error: code = NotFound desc = could not find container \"08fb538daabd98d65e3554b1d509b5149739a281779328f80b240a2b7df44866\": container with ID starting with 08fb538daabd98d65e3554b1d509b5149739a281779328f80b240a2b7df44866 not found: ID does not exist" Oct 13 17:41:18 crc kubenswrapper[4720]: I1013 17:41:18.041291 4720 scope.go:117] "RemoveContainer" containerID="11524c98eb61e0537ae1d6d91addce7769b6a1193e5f00f1994585c7e37277f3" Oct 13 17:41:18 crc kubenswrapper[4720]: E1013 17:41:18.045958 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11524c98eb61e0537ae1d6d91addce7769b6a1193e5f00f1994585c7e37277f3\": container with ID starting with 11524c98eb61e0537ae1d6d91addce7769b6a1193e5f00f1994585c7e37277f3 not found: ID does not exist" containerID="11524c98eb61e0537ae1d6d91addce7769b6a1193e5f00f1994585c7e37277f3" Oct 13 17:41:18 crc kubenswrapper[4720]: I1013 17:41:18.045988 4720 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11524c98eb61e0537ae1d6d91addce7769b6a1193e5f00f1994585c7e37277f3"} err="failed to get container status \"11524c98eb61e0537ae1d6d91addce7769b6a1193e5f00f1994585c7e37277f3\": rpc error: code = NotFound desc = could not find container \"11524c98eb61e0537ae1d6d91addce7769b6a1193e5f00f1994585c7e37277f3\": container with ID starting with 11524c98eb61e0537ae1d6d91addce7769b6a1193e5f00f1994585c7e37277f3 not found: ID does not exist" Oct 13 17:41:18 crc kubenswrapper[4720]: I1013 17:41:18.059609 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/580a1451-b144-42d0-b400-1bf271b17c7b-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:18 crc kubenswrapper[4720]: I1013 17:41:18.144802 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 13 17:41:18 crc kubenswrapper[4720]: I1013 17:41:18.257547 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 13 17:41:18 crc kubenswrapper[4720]: I1013 17:41:18.279655 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 13 17:41:18 crc kubenswrapper[4720]: I1013 17:41:18.310961 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Oct 13 17:41:18 crc kubenswrapper[4720]: I1013 17:41:18.312871 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 13 17:41:18 crc kubenswrapper[4720]: I1013 17:41:18.315198 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Oct 13 17:41:18 crc kubenswrapper[4720]: I1013 17:41:18.354665 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 13 17:41:18 crc kubenswrapper[4720]: I1013 17:41:18.374945 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a68622d1-743e-45e7-a021-6d766840711a-config-data\") pod \"cinder-scheduler-0\" (UID: \"a68622d1-743e-45e7-a021-6d766840711a\") " pod="openstack/cinder-scheduler-0" Oct 13 17:41:18 crc kubenswrapper[4720]: I1013 17:41:18.374978 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a68622d1-743e-45e7-a021-6d766840711a-scripts\") pod \"cinder-scheduler-0\" (UID: \"a68622d1-743e-45e7-a021-6d766840711a\") " pod="openstack/cinder-scheduler-0" Oct 13 17:41:18 crc kubenswrapper[4720]: I1013 17:41:18.374999 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a68622d1-743e-45e7-a021-6d766840711a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a68622d1-743e-45e7-a021-6d766840711a\") " pod="openstack/cinder-scheduler-0" Oct 13 17:41:18 crc kubenswrapper[4720]: I1013 17:41:18.375029 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a68622d1-743e-45e7-a021-6d766840711a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a68622d1-743e-45e7-a021-6d766840711a\") " pod="openstack/cinder-scheduler-0" Oct 13 17:41:18 crc kubenswrapper[4720]: I1013 17:41:18.375064 4720 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a68622d1-743e-45e7-a021-6d766840711a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a68622d1-743e-45e7-a021-6d766840711a\") " pod="openstack/cinder-scheduler-0" Oct 13 17:41:18 crc kubenswrapper[4720]: I1013 17:41:18.375089 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgqbs\" (UniqueName: \"kubernetes.io/projected/a68622d1-743e-45e7-a021-6d766840711a-kube-api-access-bgqbs\") pod \"cinder-scheduler-0\" (UID: \"a68622d1-743e-45e7-a021-6d766840711a\") " pod="openstack/cinder-scheduler-0" Oct 13 17:41:18 crc kubenswrapper[4720]: I1013 17:41:18.378937 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-3646-account-create-t2hq4" Oct 13 17:41:18 crc kubenswrapper[4720]: I1013 17:41:18.411998 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 13 17:41:18 crc kubenswrapper[4720]: I1013 17:41:18.471620 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 13 17:41:18 crc kubenswrapper[4720]: I1013 17:41:18.476531 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhfwl\" (UniqueName: \"kubernetes.io/projected/bb3ca24e-8779-41d4-b8cc-8a6bc524e81d-kube-api-access-qhfwl\") pod \"bb3ca24e-8779-41d4-b8cc-8a6bc524e81d\" (UID: \"bb3ca24e-8779-41d4-b8cc-8a6bc524e81d\") " Oct 13 17:41:18 crc kubenswrapper[4720]: I1013 17:41:18.477236 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a68622d1-743e-45e7-a021-6d766840711a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a68622d1-743e-45e7-a021-6d766840711a\") " pod="openstack/cinder-scheduler-0" Oct 13 17:41:18 crc kubenswrapper[4720]: I1013 17:41:18.477277 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgqbs\" (UniqueName: \"kubernetes.io/projected/a68622d1-743e-45e7-a021-6d766840711a-kube-api-access-bgqbs\") pod \"cinder-scheduler-0\" (UID: \"a68622d1-743e-45e7-a021-6d766840711a\") " pod="openstack/cinder-scheduler-0" Oct 13 17:41:18 crc kubenswrapper[4720]: I1013 17:41:18.477378 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a68622d1-743e-45e7-a021-6d766840711a-config-data\") pod \"cinder-scheduler-0\" (UID: \"a68622d1-743e-45e7-a021-6d766840711a\") " pod="openstack/cinder-scheduler-0" Oct 13 17:41:18 crc kubenswrapper[4720]: I1013 17:41:18.477397 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a68622d1-743e-45e7-a021-6d766840711a-scripts\") pod \"cinder-scheduler-0\" (UID: \"a68622d1-743e-45e7-a021-6d766840711a\") " pod="openstack/cinder-scheduler-0" Oct 13 17:41:18 crc kubenswrapper[4720]: I1013 17:41:18.477422 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a68622d1-743e-45e7-a021-6d766840711a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a68622d1-743e-45e7-a021-6d766840711a\") " pod="openstack/cinder-scheduler-0" Oct 13 17:41:18 crc kubenswrapper[4720]: I1013 17:41:18.477451 4720 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a68622d1-743e-45e7-a021-6d766840711a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a68622d1-743e-45e7-a021-6d766840711a\") " pod="openstack/cinder-scheduler-0" Oct 13 17:41:18 crc kubenswrapper[4720]: I1013 17:41:18.477557 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a68622d1-743e-45e7-a021-6d766840711a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a68622d1-743e-45e7-a021-6d766840711a\") " pod="openstack/cinder-scheduler-0" Oct 13 17:41:18 crc kubenswrapper[4720]: I1013 17:41:18.485878 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a68622d1-743e-45e7-a021-6d766840711a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a68622d1-743e-45e7-a021-6d766840711a\") " pod="openstack/cinder-scheduler-0" Oct 13 17:41:18 crc kubenswrapper[4720]: I1013 17:41:18.486050 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb3ca24e-8779-41d4-b8cc-8a6bc524e81d-kube-api-access-qhfwl" (OuterVolumeSpecName: "kube-api-access-qhfwl") pod "bb3ca24e-8779-41d4-b8cc-8a6bc524e81d" (UID: "bb3ca24e-8779-41d4-b8cc-8a6bc524e81d"). InnerVolumeSpecName "kube-api-access-qhfwl". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:41:18 crc kubenswrapper[4720]: I1013 17:41:18.489047 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 13 17:41:18 crc kubenswrapper[4720]: I1013 17:41:18.490644 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a68622d1-743e-45e7-a021-6d766840711a-scripts\") pod \"cinder-scheduler-0\" (UID: \"a68622d1-743e-45e7-a021-6d766840711a\") " pod="openstack/cinder-scheduler-0" Oct 13 17:41:18 crc kubenswrapper[4720]: I1013 17:41:18.491220 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a68622d1-743e-45e7-a021-6d766840711a-config-data\") pod \"cinder-scheduler-0\" (UID: \"a68622d1-743e-45e7-a021-6d766840711a\") " pod="openstack/cinder-scheduler-0" Oct 13 17:41:18 crc kubenswrapper[4720]: E1013 17:41:18.491584 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb3ca24e-8779-41d4-b8cc-8a6bc524e81d" containerName="mariadb-account-create" Oct 13 17:41:18 crc kubenswrapper[4720]: I1013 17:41:18.491603 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb3ca24e-8779-41d4-b8cc-8a6bc524e81d" containerName="mariadb-account-create" Oct 13 17:41:18 crc kubenswrapper[4720]: I1013 17:41:18.492460 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb3ca24e-8779-41d4-b8cc-8a6bc524e81d" containerName="mariadb-account-create" Oct 13 17:41:18 crc kubenswrapper[4720]: I1013 17:41:18.494217 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 13 17:41:18 crc kubenswrapper[4720]: I1013 17:41:18.495463 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a68622d1-743e-45e7-a021-6d766840711a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a68622d1-743e-45e7-a021-6d766840711a\") " pod="openstack/cinder-scheduler-0" Oct 13 17:41:18 crc kubenswrapper[4720]: I1013 17:41:18.496608 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Oct 13 17:41:18 crc kubenswrapper[4720]: I1013 17:41:18.496685 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Oct 13 17:41:18 crc kubenswrapper[4720]: I1013 17:41:18.504781 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgqbs\" (UniqueName: \"kubernetes.io/projected/a68622d1-743e-45e7-a021-6d766840711a-kube-api-access-bgqbs\") pod \"cinder-scheduler-0\" (UID: \"a68622d1-743e-45e7-a021-6d766840711a\") " pod="openstack/cinder-scheduler-0" Oct 13 17:41:18 crc kubenswrapper[4720]: I1013 17:41:18.539381 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 13 17:41:18 crc kubenswrapper[4720]: I1013 17:41:18.564413 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 13 17:41:18 crc kubenswrapper[4720]: I1013 17:41:18.579747 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/65eecc5b-dc6b-482e-bcfe-93915016a1f5-logs\") pod \"glance-default-internal-api-0\" (UID: \"65eecc5b-dc6b-482e-bcfe-93915016a1f5\") " pod="openstack/glance-default-internal-api-0" Oct 13 17:41:18 crc kubenswrapper[4720]: I1013 17:41:18.579805 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/65eecc5b-dc6b-482e-bcfe-93915016a1f5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"65eecc5b-dc6b-482e-bcfe-93915016a1f5\") " pod="openstack/glance-default-internal-api-0" Oct 13 17:41:18 crc kubenswrapper[4720]: I1013 17:41:18.579829 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"65eecc5b-dc6b-482e-bcfe-93915016a1f5\") " pod="openstack/glance-default-internal-api-0" Oct 13 17:41:18 crc kubenswrapper[4720]: I1013 17:41:18.579855 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/65eecc5b-dc6b-482e-bcfe-93915016a1f5-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"65eecc5b-dc6b-482e-bcfe-93915016a1f5\") " pod="openstack/glance-default-internal-api-0" Oct 13 17:41:18 crc kubenswrapper[4720]: I1013 17:41:18.579897 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65eecc5b-dc6b-482e-bcfe-93915016a1f5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"65eecc5b-dc6b-482e-bcfe-93915016a1f5\") " pod="openstack/glance-default-internal-api-0" Oct 13 17:41:18 crc kubenswrapper[4720]: I1013 
17:41:18.579929 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnvr7\" (UniqueName: \"kubernetes.io/projected/65eecc5b-dc6b-482e-bcfe-93915016a1f5-kube-api-access-qnvr7\") pod \"glance-default-internal-api-0\" (UID: \"65eecc5b-dc6b-482e-bcfe-93915016a1f5\") " pod="openstack/glance-default-internal-api-0" Oct 13 17:41:18 crc kubenswrapper[4720]: I1013 17:41:18.579944 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65eecc5b-dc6b-482e-bcfe-93915016a1f5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"65eecc5b-dc6b-482e-bcfe-93915016a1f5\") " pod="openstack/glance-default-internal-api-0" Oct 13 17:41:18 crc kubenswrapper[4720]: I1013 17:41:18.580004 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65eecc5b-dc6b-482e-bcfe-93915016a1f5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"65eecc5b-dc6b-482e-bcfe-93915016a1f5\") " pod="openstack/glance-default-internal-api-0" Oct 13 17:41:18 crc kubenswrapper[4720]: I1013 17:41:18.580051 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qhfwl\" (UniqueName: \"kubernetes.io/projected/bb3ca24e-8779-41d4-b8cc-8a6bc524e81d-kube-api-access-qhfwl\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:18 crc kubenswrapper[4720]: I1013 17:41:18.681603 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/65eecc5b-dc6b-482e-bcfe-93915016a1f5-logs\") pod \"glance-default-internal-api-0\" (UID: \"65eecc5b-dc6b-482e-bcfe-93915016a1f5\") " pod="openstack/glance-default-internal-api-0" Oct 13 17:41:18 crc kubenswrapper[4720]: I1013 17:41:18.681905 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/65eecc5b-dc6b-482e-bcfe-93915016a1f5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"65eecc5b-dc6b-482e-bcfe-93915016a1f5\") " pod="openstack/glance-default-internal-api-0" Oct 13 17:41:18 crc kubenswrapper[4720]: I1013 17:41:18.681929 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"65eecc5b-dc6b-482e-bcfe-93915016a1f5\") " pod="openstack/glance-default-internal-api-0" Oct 13 17:41:18 crc kubenswrapper[4720]: I1013 17:41:18.681954 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/65eecc5b-dc6b-482e-bcfe-93915016a1f5-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"65eecc5b-dc6b-482e-bcfe-93915016a1f5\") " pod="openstack/glance-default-internal-api-0" Oct 13 17:41:18 crc kubenswrapper[4720]: I1013 17:41:18.682007 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65eecc5b-dc6b-482e-bcfe-93915016a1f5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"65eecc5b-dc6b-482e-bcfe-93915016a1f5\") " pod="openstack/glance-default-internal-api-0" Oct 13 17:41:18 crc kubenswrapper[4720]: I1013 17:41:18.682041 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnvr7\" (UniqueName: 
\"kubernetes.io/projected/65eecc5b-dc6b-482e-bcfe-93915016a1f5-kube-api-access-qnvr7\") pod \"glance-default-internal-api-0\" (UID: \"65eecc5b-dc6b-482e-bcfe-93915016a1f5\") " pod="openstack/glance-default-internal-api-0" Oct 13 17:41:18 crc kubenswrapper[4720]: I1013 17:41:18.682057 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65eecc5b-dc6b-482e-bcfe-93915016a1f5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"65eecc5b-dc6b-482e-bcfe-93915016a1f5\") " pod="openstack/glance-default-internal-api-0" Oct 13 17:41:18 crc kubenswrapper[4720]: I1013 17:41:18.682104 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65eecc5b-dc6b-482e-bcfe-93915016a1f5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"65eecc5b-dc6b-482e-bcfe-93915016a1f5\") " pod="openstack/glance-default-internal-api-0" Oct 13 17:41:18 crc kubenswrapper[4720]: I1013 17:41:18.688044 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/65eecc5b-dc6b-482e-bcfe-93915016a1f5-logs\") pod \"glance-default-internal-api-0\" (UID: \"65eecc5b-dc6b-482e-bcfe-93915016a1f5\") " pod="openstack/glance-default-internal-api-0" Oct 13 17:41:18 crc kubenswrapper[4720]: I1013 17:41:18.688321 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/65eecc5b-dc6b-482e-bcfe-93915016a1f5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"65eecc5b-dc6b-482e-bcfe-93915016a1f5\") " pod="openstack/glance-default-internal-api-0" Oct 13 17:41:18 crc kubenswrapper[4720]: I1013 17:41:18.688375 4720 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"65eecc5b-dc6b-482e-bcfe-93915016a1f5\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-internal-api-0" Oct 13 17:41:18 crc kubenswrapper[4720]: I1013 17:41:18.688682 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65eecc5b-dc6b-482e-bcfe-93915016a1f5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"65eecc5b-dc6b-482e-bcfe-93915016a1f5\") " pod="openstack/glance-default-internal-api-0" Oct 13 17:41:18 crc kubenswrapper[4720]: I1013 17:41:18.689131 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/65eecc5b-dc6b-482e-bcfe-93915016a1f5-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"65eecc5b-dc6b-482e-bcfe-93915016a1f5\") " pod="openstack/glance-default-internal-api-0" Oct 13 17:41:18 crc kubenswrapper[4720]: I1013 17:41:18.693297 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65eecc5b-dc6b-482e-bcfe-93915016a1f5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"65eecc5b-dc6b-482e-bcfe-93915016a1f5\") " pod="openstack/glance-default-internal-api-0" Oct 13 17:41:18 crc kubenswrapper[4720]: I1013 17:41:18.697265 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65eecc5b-dc6b-482e-bcfe-93915016a1f5-config-data\") pod \"glance-default-internal-api-0\" 
(UID: \"65eecc5b-dc6b-482e-bcfe-93915016a1f5\") " pod="openstack/glance-default-internal-api-0" Oct 13 17:41:18 crc kubenswrapper[4720]: I1013 17:41:18.703158 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnvr7\" (UniqueName: \"kubernetes.io/projected/65eecc5b-dc6b-482e-bcfe-93915016a1f5-kube-api-access-qnvr7\") pod \"glance-default-internal-api-0\" (UID: \"65eecc5b-dc6b-482e-bcfe-93915016a1f5\") " pod="openstack/glance-default-internal-api-0" Oct 13 17:41:18 crc kubenswrapper[4720]: I1013 17:41:18.727927 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"65eecc5b-dc6b-482e-bcfe-93915016a1f5\") " pod="openstack/glance-default-internal-api-0" Oct 13 17:41:18 crc kubenswrapper[4720]: I1013 17:41:18.789375 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 13 17:41:18 crc kubenswrapper[4720]: I1013 17:41:18.876231 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 13 17:41:18 crc kubenswrapper[4720]: I1013 17:41:18.921338 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"40ee2ba6-d58a-4ee8-b28e-6aeea90de7f6","Type":"ContainerStarted","Data":"c57e60fbe19cb9383b7d7a2756af55acee2f06116b7c919f41e698e92139ba0f"} Oct 13 17:41:18 crc kubenswrapper[4720]: I1013 17:41:18.924612 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c0e5acdb-49fb-4d97-8330-61bb2eeba14f","Type":"ContainerStarted","Data":"9d4fc6fe4c16b453593624c59539e464260a7c4d00254dfca1f55c3d8f9bf7ef"} Oct 13 17:41:18 crc kubenswrapper[4720]: I1013 17:41:18.924791 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 13 17:41:18 crc kubenswrapper[4720]: I1013 17:41:18.924898 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c0e5acdb-49fb-4d97-8330-61bb2eeba14f" containerName="sg-core" containerID="cri-o://6bffc98ff644494f7e39f260402a70dad002a9207c91de28c2144cee8246cb10" gracePeriod=30 Oct 13 17:41:18 crc kubenswrapper[4720]: I1013 17:41:18.925101 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c0e5acdb-49fb-4d97-8330-61bb2eeba14f" containerName="proxy-httpd" containerID="cri-o://9d4fc6fe4c16b453593624c59539e464260a7c4d00254dfca1f55c3d8f9bf7ef" gracePeriod=30 Oct 13 17:41:18 crc kubenswrapper[4720]: I1013 17:41:18.925364 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c0e5acdb-49fb-4d97-8330-61bb2eeba14f" containerName="ceilometer-notification-agent" containerID="cri-o://663f2b043e406be36e53f35b210bb89be8414904d4ba1d10fb48933f5dfc592f" gracePeriod=30 Oct 13 17:41:18 crc kubenswrapper[4720]: I1013 17:41:18.925244 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c0e5acdb-49fb-4d97-8330-61bb2eeba14f" containerName="ceilometer-central-agent" containerID="cri-o://60833019df1e6e9d78cded50a1bc7d0fc7fc1ac7990aa51939fedb0d5f63f88b" gracePeriod=30 Oct 13 17:41:18 crc kubenswrapper[4720]: I1013 17:41:18.928961 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-3646-account-create-t2hq4" Oct 13 17:41:18 crc kubenswrapper[4720]: I1013 17:41:18.932272 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-3646-account-create-t2hq4" event={"ID":"bb3ca24e-8779-41d4-b8cc-8a6bc524e81d","Type":"ContainerDied","Data":"07a1ca6ad9b8f965f99f5f5ef3939df88762abe60322faa77111b6a66ba7978d"} Oct 13 17:41:18 crc kubenswrapper[4720]: I1013 17:41:18.932306 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07a1ca6ad9b8f965f99f5f5ef3939df88762abe60322faa77111b6a66ba7978d" Oct 13 17:41:18 crc kubenswrapper[4720]: I1013 17:41:18.950828 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.105392805 podStartE2EDuration="7.950808932s" podCreationTimestamp="2025-10-13 17:41:11 +0000 UTC" firstStartedPulling="2025-10-13 17:41:12.565723318 +0000 UTC m=+1018.022973450" lastFinishedPulling="2025-10-13 17:41:18.411139455 +0000 UTC m=+1023.868389577" observedRunningTime="2025-10-13 17:41:18.945795153 +0000 UTC m=+1024.403045285" watchObservedRunningTime="2025-10-13 17:41:18.950808932 +0000 UTC m=+1024.408059054" Oct 13 17:41:19 crc kubenswrapper[4720]: I1013 17:41:19.035769 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 13 17:41:19 crc kubenswrapper[4720]: I1013 17:41:19.193285 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="580a1451-b144-42d0-b400-1bf271b17c7b" path="/var/lib/kubelet/pods/580a1451-b144-42d0-b400-1bf271b17c7b/volumes" Oct 13 17:41:19 crc kubenswrapper[4720]: I1013 17:41:19.194286 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="972b98ca-7012-422d-8839-a196b6a3b919" path="/var/lib/kubelet/pods/972b98ca-7012-422d-8839-a196b6a3b919/volumes" Oct 13 17:41:19 crc kubenswrapper[4720]: I1013 17:41:19.194850 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a12c67e-6055-4594-850a-61ac731d7a8d" path="/var/lib/kubelet/pods/9a12c67e-6055-4594-850a-61ac731d7a8d/volumes" Oct 13 17:41:19 crc kubenswrapper[4720]: I1013 17:41:19.195886 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1742c95-4906-4545-8e62-38903c72b168" path="/var/lib/kubelet/pods/f1742c95-4906-4545-8e62-38903c72b168/volumes" Oct 13 17:41:19 crc kubenswrapper[4720]: I1013 17:41:19.553301 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-557a-account-create-m5cvn" Oct 13 17:41:19 crc kubenswrapper[4720]: I1013 17:41:19.562538 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-1187-account-create-kf7dn" Oct 13 17:41:19 crc kubenswrapper[4720]: I1013 17:41:19.697686 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 13 17:41:19 crc kubenswrapper[4720]: I1013 17:41:19.705231 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwd25\" (UniqueName: \"kubernetes.io/projected/0bdf4571-07ae-4672-937b-f445bf5578ad-kube-api-access-kwd25\") pod \"0bdf4571-07ae-4672-937b-f445bf5578ad\" (UID: \"0bdf4571-07ae-4672-937b-f445bf5578ad\") " Oct 13 17:41:19 crc kubenswrapper[4720]: I1013 17:41:19.705353 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k9gmb\" (UniqueName: \"kubernetes.io/projected/fe437be8-b701-42f7-9aae-ba75ccc6be20-kube-api-access-k9gmb\") pod \"fe437be8-b701-42f7-9aae-ba75ccc6be20\" (UID: \"fe437be8-b701-42f7-9aae-ba75ccc6be20\") " Oct 13 17:41:19 crc kubenswrapper[4720]: I1013 17:41:19.709080 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe437be8-b701-42f7-9aae-ba75ccc6be20-kube-api-access-k9gmb" (OuterVolumeSpecName: "kube-api-access-k9gmb") pod "fe437be8-b701-42f7-9aae-ba75ccc6be20" (UID: "fe437be8-b701-42f7-9aae-ba75ccc6be20"). InnerVolumeSpecName "kube-api-access-k9gmb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:41:19 crc kubenswrapper[4720]: I1013 17:41:19.709215 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bdf4571-07ae-4672-937b-f445bf5578ad-kube-api-access-kwd25" (OuterVolumeSpecName: "kube-api-access-kwd25") pod "0bdf4571-07ae-4672-937b-f445bf5578ad" (UID: "0bdf4571-07ae-4672-937b-f445bf5578ad"). InnerVolumeSpecName "kube-api-access-kwd25". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:41:19 crc kubenswrapper[4720]: I1013 17:41:19.725236 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 13 17:41:19 crc kubenswrapper[4720]: I1013 17:41:19.807719 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kwd25\" (UniqueName: \"kubernetes.io/projected/0bdf4571-07ae-4672-937b-f445bf5578ad-kube-api-access-kwd25\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:19 crc kubenswrapper[4720]: I1013 17:41:19.807752 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k9gmb\" (UniqueName: \"kubernetes.io/projected/fe437be8-b701-42f7-9aae-ba75ccc6be20-kube-api-access-k9gmb\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:19 crc kubenswrapper[4720]: I1013 17:41:19.909177 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0e5acdb-49fb-4d97-8330-61bb2eeba14f-scripts\") pod \"c0e5acdb-49fb-4d97-8330-61bb2eeba14f\" (UID: \"c0e5acdb-49fb-4d97-8330-61bb2eeba14f\") " Oct 13 17:41:19 crc kubenswrapper[4720]: I1013 17:41:19.909353 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0e5acdb-49fb-4d97-8330-61bb2eeba14f-log-httpd\") pod \"c0e5acdb-49fb-4d97-8330-61bb2eeba14f\" (UID: \"c0e5acdb-49fb-4d97-8330-61bb2eeba14f\") " Oct 13 17:41:19 crc kubenswrapper[4720]: I1013 17:41:19.909383 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jx82t\" (UniqueName: \"kubernetes.io/projected/c0e5acdb-49fb-4d97-8330-61bb2eeba14f-kube-api-access-jx82t\") pod \"c0e5acdb-49fb-4d97-8330-61bb2eeba14f\" (UID: \"c0e5acdb-49fb-4d97-8330-61bb2eeba14f\") " Oct 13 17:41:19 crc kubenswrapper[4720]: I1013 17:41:19.909478 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0e5acdb-49fb-4d97-8330-61bb2eeba14f-run-httpd\") pod \"c0e5acdb-49fb-4d97-8330-61bb2eeba14f\" (UID: \"c0e5acdb-49fb-4d97-8330-61bb2eeba14f\") " Oct 13 17:41:19 crc kubenswrapper[4720]: I1013 17:41:19.909532 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0e5acdb-49fb-4d97-8330-61bb2eeba14f-config-data\") pod \"c0e5acdb-49fb-4d97-8330-61bb2eeba14f\" (UID: \"c0e5acdb-49fb-4d97-8330-61bb2eeba14f\") " Oct 13 17:41:19 crc kubenswrapper[4720]: I1013 17:41:19.909562 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0e5acdb-49fb-4d97-8330-61bb2eeba14f-combined-ca-bundle\") pod \"c0e5acdb-49fb-4d97-8330-61bb2eeba14f\" (UID: \"c0e5acdb-49fb-4d97-8330-61bb2eeba14f\") " Oct 13 17:41:19 crc kubenswrapper[4720]: I1013 17:41:19.909592 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c0e5acdb-49fb-4d97-8330-61bb2eeba14f-sg-core-conf-yaml\") pod \"c0e5acdb-49fb-4d97-8330-61bb2eeba14f\" (UID: \"c0e5acdb-49fb-4d97-8330-61bb2eeba14f\") " Oct 13 17:41:19 crc kubenswrapper[4720]: I1013 17:41:19.910709 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0e5acdb-49fb-4d97-8330-61bb2eeba14f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c0e5acdb-49fb-4d97-8330-61bb2eeba14f" (UID: "c0e5acdb-49fb-4d97-8330-61bb2eeba14f"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 17:41:19 crc kubenswrapper[4720]: I1013 17:41:19.910978 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0e5acdb-49fb-4d97-8330-61bb2eeba14f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c0e5acdb-49fb-4d97-8330-61bb2eeba14f" (UID: "c0e5acdb-49fb-4d97-8330-61bb2eeba14f"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 17:41:19 crc kubenswrapper[4720]: I1013 17:41:19.914644 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0e5acdb-49fb-4d97-8330-61bb2eeba14f-kube-api-access-jx82t" (OuterVolumeSpecName: "kube-api-access-jx82t") pod "c0e5acdb-49fb-4d97-8330-61bb2eeba14f" (UID: "c0e5acdb-49fb-4d97-8330-61bb2eeba14f"). InnerVolumeSpecName "kube-api-access-jx82t". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:41:19 crc kubenswrapper[4720]: I1013 17:41:19.918977 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0e5acdb-49fb-4d97-8330-61bb2eeba14f-scripts" (OuterVolumeSpecName: "scripts") pod "c0e5acdb-49fb-4d97-8330-61bb2eeba14f" (UID: "c0e5acdb-49fb-4d97-8330-61bb2eeba14f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:41:19 crc kubenswrapper[4720]: I1013 17:41:19.946457 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0e5acdb-49fb-4d97-8330-61bb2eeba14f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c0e5acdb-49fb-4d97-8330-61bb2eeba14f" (UID: "c0e5acdb-49fb-4d97-8330-61bb2eeba14f"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:41:19 crc kubenswrapper[4720]: I1013 17:41:19.960609 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a68622d1-743e-45e7-a021-6d766840711a","Type":"ContainerStarted","Data":"edef31632ccd15719036335a287792c5c3c4480178ec8484701abd4f44ec5bcc"} Oct 13 17:41:19 crc kubenswrapper[4720]: I1013 17:41:19.960650 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a68622d1-743e-45e7-a021-6d766840711a","Type":"ContainerStarted","Data":"74cd428e3c3a8138fd001e450494a4739f7088eb3ffa2764f2eeb5ba83e399d3"} Oct 13 17:41:19 crc kubenswrapper[4720]: I1013 17:41:19.967980 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-1187-account-create-kf7dn" event={"ID":"fe437be8-b701-42f7-9aae-ba75ccc6be20","Type":"ContainerDied","Data":"7ac0e8972b29a0926533710f6456b298f27bb5006fc4505a7aecc4b83b2b06e2"} Oct 13 17:41:19 crc kubenswrapper[4720]: I1013 17:41:19.968013 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ac0e8972b29a0926533710f6456b298f27bb5006fc4505a7aecc4b83b2b06e2" Oct 13 17:41:19 crc kubenswrapper[4720]: I1013 17:41:19.968075 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-1187-account-create-kf7dn" Oct 13 17:41:19 crc kubenswrapper[4720]: I1013 17:41:19.972713 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-557a-account-create-m5cvn" event={"ID":"0bdf4571-07ae-4672-937b-f445bf5578ad","Type":"ContainerDied","Data":"95d606e07100488e2fcdb7859e92e51706fc679e63c0027393b1071bf62a6b6f"} Oct 13 17:41:19 crc kubenswrapper[4720]: I1013 17:41:19.972750 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="95d606e07100488e2fcdb7859e92e51706fc679e63c0027393b1071bf62a6b6f" Oct 13 17:41:19 crc kubenswrapper[4720]: I1013 17:41:19.972805 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-557a-account-create-m5cvn" Oct 13 17:41:19 crc kubenswrapper[4720]: I1013 17:41:19.986080 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"40ee2ba6-d58a-4ee8-b28e-6aeea90de7f6","Type":"ContainerStarted","Data":"13c7a584970510059504fcac0d718b24896d34b725e8ec7f9d45d90f724e861f"} Oct 13 17:41:19 crc kubenswrapper[4720]: I1013 17:41:19.988255 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"65eecc5b-dc6b-482e-bcfe-93915016a1f5","Type":"ContainerStarted","Data":"7a0194215604a0c3bcf5560a748abb0db5b6179ea551a31c07eef5079a523769"} Oct 13 17:41:19 crc kubenswrapper[4720]: I1013 17:41:19.990906 4720 generic.go:334] "Generic (PLEG): container finished" podID="c0e5acdb-49fb-4d97-8330-61bb2eeba14f" containerID="9d4fc6fe4c16b453593624c59539e464260a7c4d00254dfca1f55c3d8f9bf7ef" exitCode=0 Oct 13 17:41:19 crc kubenswrapper[4720]: I1013 17:41:19.990923 4720 generic.go:334] "Generic (PLEG): container finished" podID="c0e5acdb-49fb-4d97-8330-61bb2eeba14f" containerID="6bffc98ff644494f7e39f260402a70dad002a9207c91de28c2144cee8246cb10" exitCode=2 Oct 13 17:41:19 crc kubenswrapper[4720]: I1013 17:41:19.990932 4720 generic.go:334] "Generic (PLEG): container finished" podID="c0e5acdb-49fb-4d97-8330-61bb2eeba14f" containerID="663f2b043e406be36e53f35b210bb89be8414904d4ba1d10fb48933f5dfc592f" exitCode=0 Oct 13 17:41:19 crc kubenswrapper[4720]: I1013 17:41:19.990938 4720 generic.go:334] "Generic (PLEG): container finished" podID="c0e5acdb-49fb-4d97-8330-61bb2eeba14f" containerID="60833019df1e6e9d78cded50a1bc7d0fc7fc1ac7990aa51939fedb0d5f63f88b" exitCode=0 Oct 13 17:41:19 crc kubenswrapper[4720]: I1013 17:41:19.990950 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c0e5acdb-49fb-4d97-8330-61bb2eeba14f","Type":"ContainerDied","Data":"9d4fc6fe4c16b453593624c59539e464260a7c4d00254dfca1f55c3d8f9bf7ef"} Oct 13 17:41:19 crc kubenswrapper[4720]: I1013 17:41:19.990965 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c0e5acdb-49fb-4d97-8330-61bb2eeba14f","Type":"ContainerDied","Data":"6bffc98ff644494f7e39f260402a70dad002a9207c91de28c2144cee8246cb10"} Oct 13 17:41:19 crc kubenswrapper[4720]: I1013 17:41:19.990975 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c0e5acdb-49fb-4d97-8330-61bb2eeba14f","Type":"ContainerDied","Data":"663f2b043e406be36e53f35b210bb89be8414904d4ba1d10fb48933f5dfc592f"} Oct 13 17:41:19 crc kubenswrapper[4720]: I1013 17:41:19.990985 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"c0e5acdb-49fb-4d97-8330-61bb2eeba14f","Type":"ContainerDied","Data":"60833019df1e6e9d78cded50a1bc7d0fc7fc1ac7990aa51939fedb0d5f63f88b"} Oct 13 17:41:19 crc kubenswrapper[4720]: I1013 17:41:19.990993 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c0e5acdb-49fb-4d97-8330-61bb2eeba14f","Type":"ContainerDied","Data":"2f6e2ab81fd55c8de49b1dece25af458f8edda5c34a561fc9c9a738c0e90bf85"} Oct 13 17:41:19 crc kubenswrapper[4720]: I1013 17:41:19.991008 4720 scope.go:117] "RemoveContainer" containerID="9d4fc6fe4c16b453593624c59539e464260a7c4d00254dfca1f55c3d8f9bf7ef" Oct 13 17:41:19 crc kubenswrapper[4720]: I1013 17:41:19.991065 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 13 17:41:20 crc kubenswrapper[4720]: I1013 17:41:20.012248 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0e5acdb-49fb-4d97-8330-61bb2eeba14f-scripts\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:20 crc kubenswrapper[4720]: I1013 17:41:20.012275 4720 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0e5acdb-49fb-4d97-8330-61bb2eeba14f-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:20 crc kubenswrapper[4720]: I1013 17:41:20.012285 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jx82t\" (UniqueName: \"kubernetes.io/projected/c0e5acdb-49fb-4d97-8330-61bb2eeba14f-kube-api-access-jx82t\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:20 crc kubenswrapper[4720]: I1013 17:41:20.012294 4720 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c0e5acdb-49fb-4d97-8330-61bb2eeba14f-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:20 crc kubenswrapper[4720]: I1013 17:41:20.012301 4720 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c0e5acdb-49fb-4d97-8330-61bb2eeba14f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:20 crc kubenswrapper[4720]: I1013 17:41:20.019168 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0e5acdb-49fb-4d97-8330-61bb2eeba14f-config-data" (OuterVolumeSpecName: "config-data") pod "c0e5acdb-49fb-4d97-8330-61bb2eeba14f" (UID: "c0e5acdb-49fb-4d97-8330-61bb2eeba14f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:41:20 crc kubenswrapper[4720]: I1013 17:41:20.031486 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0e5acdb-49fb-4d97-8330-61bb2eeba14f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c0e5acdb-49fb-4d97-8330-61bb2eeba14f" (UID: "c0e5acdb-49fb-4d97-8330-61bb2eeba14f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:41:20 crc kubenswrapper[4720]: I1013 17:41:20.038730 4720 scope.go:117] "RemoveContainer" containerID="6bffc98ff644494f7e39f260402a70dad002a9207c91de28c2144cee8246cb10" Oct 13 17:41:20 crc kubenswrapper[4720]: I1013 17:41:20.063288 4720 scope.go:117] "RemoveContainer" containerID="663f2b043e406be36e53f35b210bb89be8414904d4ba1d10fb48933f5dfc592f" Oct 13 17:41:20 crc kubenswrapper[4720]: I1013 17:41:20.092002 4720 scope.go:117] "RemoveContainer" containerID="60833019df1e6e9d78cded50a1bc7d0fc7fc1ac7990aa51939fedb0d5f63f88b" Oct 13 17:41:20 crc kubenswrapper[4720]: I1013 17:41:20.113800 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0e5acdb-49fb-4d97-8330-61bb2eeba14f-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:20 crc kubenswrapper[4720]: I1013 17:41:20.113827 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0e5acdb-49fb-4d97-8330-61bb2eeba14f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:20 crc kubenswrapper[4720]: I1013 17:41:20.128107 4720 scope.go:117] "RemoveContainer" containerID="9d4fc6fe4c16b453593624c59539e464260a7c4d00254dfca1f55c3d8f9bf7ef" Oct 13 17:41:20 crc kubenswrapper[4720]: E1013 17:41:20.134586 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d4fc6fe4c16b453593624c59539e464260a7c4d00254dfca1f55c3d8f9bf7ef\": container with ID starting with 9d4fc6fe4c16b453593624c59539e464260a7c4d00254dfca1f55c3d8f9bf7ef not found: ID does not exist" containerID="9d4fc6fe4c16b453593624c59539e464260a7c4d00254dfca1f55c3d8f9bf7ef" Oct 13 17:41:20 crc kubenswrapper[4720]: I1013 17:41:20.134624 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d4fc6fe4c16b453593624c59539e464260a7c4d00254dfca1f55c3d8f9bf7ef"} err="failed to get container status \"9d4fc6fe4c16b453593624c59539e464260a7c4d00254dfca1f55c3d8f9bf7ef\": rpc error: code = NotFound desc = could not find container \"9d4fc6fe4c16b453593624c59539e464260a7c4d00254dfca1f55c3d8f9bf7ef\": container with ID starting with 9d4fc6fe4c16b453593624c59539e464260a7c4d00254dfca1f55c3d8f9bf7ef not found: ID does not exist" Oct 13 17:41:20 crc kubenswrapper[4720]: I1013 17:41:20.134646 4720 scope.go:117] "RemoveContainer" containerID="6bffc98ff644494f7e39f260402a70dad002a9207c91de28c2144cee8246cb10" Oct 13 17:41:20 crc kubenswrapper[4720]: E1013 17:41:20.134996 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6bffc98ff644494f7e39f260402a70dad002a9207c91de28c2144cee8246cb10\": container with ID starting with 6bffc98ff644494f7e39f260402a70dad002a9207c91de28c2144cee8246cb10 not found: ID does not exist" containerID="6bffc98ff644494f7e39f260402a70dad002a9207c91de28c2144cee8246cb10" Oct 13 17:41:20 crc kubenswrapper[4720]: I1013 17:41:20.135016 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6bffc98ff644494f7e39f260402a70dad002a9207c91de28c2144cee8246cb10"} err="failed to get container status \"6bffc98ff644494f7e39f260402a70dad002a9207c91de28c2144cee8246cb10\": rpc error: code = NotFound desc = could not find container \"6bffc98ff644494f7e39f260402a70dad002a9207c91de28c2144cee8246cb10\": container with ID starting with 
6bffc98ff644494f7e39f260402a70dad002a9207c91de28c2144cee8246cb10 not found: ID does not exist" Oct 13 17:41:20 crc kubenswrapper[4720]: I1013 17:41:20.135028 4720 scope.go:117] "RemoveContainer" containerID="663f2b043e406be36e53f35b210bb89be8414904d4ba1d10fb48933f5dfc592f" Oct 13 17:41:20 crc kubenswrapper[4720]: E1013 17:41:20.138309 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"663f2b043e406be36e53f35b210bb89be8414904d4ba1d10fb48933f5dfc592f\": container with ID starting with 663f2b043e406be36e53f35b210bb89be8414904d4ba1d10fb48933f5dfc592f not found: ID does not exist" containerID="663f2b043e406be36e53f35b210bb89be8414904d4ba1d10fb48933f5dfc592f" Oct 13 17:41:20 crc kubenswrapper[4720]: I1013 17:41:20.138336 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"663f2b043e406be36e53f35b210bb89be8414904d4ba1d10fb48933f5dfc592f"} err="failed to get container status \"663f2b043e406be36e53f35b210bb89be8414904d4ba1d10fb48933f5dfc592f\": rpc error: code = NotFound desc = could not find container \"663f2b043e406be36e53f35b210bb89be8414904d4ba1d10fb48933f5dfc592f\": container with ID starting with 663f2b043e406be36e53f35b210bb89be8414904d4ba1d10fb48933f5dfc592f not found: ID does not exist" Oct 13 17:41:20 crc kubenswrapper[4720]: I1013 17:41:20.138354 4720 scope.go:117] "RemoveContainer" containerID="60833019df1e6e9d78cded50a1bc7d0fc7fc1ac7990aa51939fedb0d5f63f88b" Oct 13 17:41:20 crc kubenswrapper[4720]: E1013 17:41:20.141141 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60833019df1e6e9d78cded50a1bc7d0fc7fc1ac7990aa51939fedb0d5f63f88b\": container with ID starting with 60833019df1e6e9d78cded50a1bc7d0fc7fc1ac7990aa51939fedb0d5f63f88b not found: ID does not exist" containerID="60833019df1e6e9d78cded50a1bc7d0fc7fc1ac7990aa51939fedb0d5f63f88b" Oct 13 17:41:20 crc kubenswrapper[4720]: I1013 17:41:20.141213 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60833019df1e6e9d78cded50a1bc7d0fc7fc1ac7990aa51939fedb0d5f63f88b"} err="failed to get container status \"60833019df1e6e9d78cded50a1bc7d0fc7fc1ac7990aa51939fedb0d5f63f88b\": rpc error: code = NotFound desc = could not find container \"60833019df1e6e9d78cded50a1bc7d0fc7fc1ac7990aa51939fedb0d5f63f88b\": container with ID starting with 60833019df1e6e9d78cded50a1bc7d0fc7fc1ac7990aa51939fedb0d5f63f88b not found: ID does not exist" Oct 13 17:41:20 crc kubenswrapper[4720]: I1013 17:41:20.141230 4720 scope.go:117] "RemoveContainer" containerID="9d4fc6fe4c16b453593624c59539e464260a7c4d00254dfca1f55c3d8f9bf7ef" Oct 13 17:41:20 crc kubenswrapper[4720]: I1013 17:41:20.142238 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d4fc6fe4c16b453593624c59539e464260a7c4d00254dfca1f55c3d8f9bf7ef"} err="failed to get container status \"9d4fc6fe4c16b453593624c59539e464260a7c4d00254dfca1f55c3d8f9bf7ef\": rpc error: code = NotFound desc = could not find container \"9d4fc6fe4c16b453593624c59539e464260a7c4d00254dfca1f55c3d8f9bf7ef\": container with ID starting with 9d4fc6fe4c16b453593624c59539e464260a7c4d00254dfca1f55c3d8f9bf7ef not found: ID does not exist" Oct 13 17:41:20 crc kubenswrapper[4720]: I1013 17:41:20.142258 4720 scope.go:117] "RemoveContainer" containerID="6bffc98ff644494f7e39f260402a70dad002a9207c91de28c2144cee8246cb10" Oct 13 17:41:20 crc 
kubenswrapper[4720]: I1013 17:41:20.142893 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6bffc98ff644494f7e39f260402a70dad002a9207c91de28c2144cee8246cb10"} err="failed to get container status \"6bffc98ff644494f7e39f260402a70dad002a9207c91de28c2144cee8246cb10\": rpc error: code = NotFound desc = could not find container \"6bffc98ff644494f7e39f260402a70dad002a9207c91de28c2144cee8246cb10\": container with ID starting with 6bffc98ff644494f7e39f260402a70dad002a9207c91de28c2144cee8246cb10 not found: ID does not exist" Oct 13 17:41:20 crc kubenswrapper[4720]: I1013 17:41:20.142914 4720 scope.go:117] "RemoveContainer" containerID="663f2b043e406be36e53f35b210bb89be8414904d4ba1d10fb48933f5dfc592f" Oct 13 17:41:20 crc kubenswrapper[4720]: I1013 17:41:20.143356 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"663f2b043e406be36e53f35b210bb89be8414904d4ba1d10fb48933f5dfc592f"} err="failed to get container status \"663f2b043e406be36e53f35b210bb89be8414904d4ba1d10fb48933f5dfc592f\": rpc error: code = NotFound desc = could not find container \"663f2b043e406be36e53f35b210bb89be8414904d4ba1d10fb48933f5dfc592f\": container with ID starting with 663f2b043e406be36e53f35b210bb89be8414904d4ba1d10fb48933f5dfc592f not found: ID does not exist" Oct 13 17:41:20 crc kubenswrapper[4720]: I1013 17:41:20.143376 4720 scope.go:117] "RemoveContainer" containerID="60833019df1e6e9d78cded50a1bc7d0fc7fc1ac7990aa51939fedb0d5f63f88b" Oct 13 17:41:20 crc kubenswrapper[4720]: I1013 17:41:20.143842 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60833019df1e6e9d78cded50a1bc7d0fc7fc1ac7990aa51939fedb0d5f63f88b"} err="failed to get container status \"60833019df1e6e9d78cded50a1bc7d0fc7fc1ac7990aa51939fedb0d5f63f88b\": rpc error: code = NotFound desc = could not find container \"60833019df1e6e9d78cded50a1bc7d0fc7fc1ac7990aa51939fedb0d5f63f88b\": container with ID starting with 60833019df1e6e9d78cded50a1bc7d0fc7fc1ac7990aa51939fedb0d5f63f88b not found: ID does not exist" Oct 13 17:41:20 crc kubenswrapper[4720]: I1013 17:41:20.143862 4720 scope.go:117] "RemoveContainer" containerID="9d4fc6fe4c16b453593624c59539e464260a7c4d00254dfca1f55c3d8f9bf7ef" Oct 13 17:41:20 crc kubenswrapper[4720]: I1013 17:41:20.144507 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d4fc6fe4c16b453593624c59539e464260a7c4d00254dfca1f55c3d8f9bf7ef"} err="failed to get container status \"9d4fc6fe4c16b453593624c59539e464260a7c4d00254dfca1f55c3d8f9bf7ef\": rpc error: code = NotFound desc = could not find container \"9d4fc6fe4c16b453593624c59539e464260a7c4d00254dfca1f55c3d8f9bf7ef\": container with ID starting with 9d4fc6fe4c16b453593624c59539e464260a7c4d00254dfca1f55c3d8f9bf7ef not found: ID does not exist" Oct 13 17:41:20 crc kubenswrapper[4720]: I1013 17:41:20.144546 4720 scope.go:117] "RemoveContainer" containerID="6bffc98ff644494f7e39f260402a70dad002a9207c91de28c2144cee8246cb10" Oct 13 17:41:20 crc kubenswrapper[4720]: I1013 17:41:20.145213 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6bffc98ff644494f7e39f260402a70dad002a9207c91de28c2144cee8246cb10"} err="failed to get container status \"6bffc98ff644494f7e39f260402a70dad002a9207c91de28c2144cee8246cb10\": rpc error: code = NotFound desc = could not find container \"6bffc98ff644494f7e39f260402a70dad002a9207c91de28c2144cee8246cb10\": container with ID 
starting with 6bffc98ff644494f7e39f260402a70dad002a9207c91de28c2144cee8246cb10 not found: ID does not exist" Oct 13 17:41:20 crc kubenswrapper[4720]: I1013 17:41:20.145234 4720 scope.go:117] "RemoveContainer" containerID="663f2b043e406be36e53f35b210bb89be8414904d4ba1d10fb48933f5dfc592f" Oct 13 17:41:20 crc kubenswrapper[4720]: I1013 17:41:20.145599 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"663f2b043e406be36e53f35b210bb89be8414904d4ba1d10fb48933f5dfc592f"} err="failed to get container status \"663f2b043e406be36e53f35b210bb89be8414904d4ba1d10fb48933f5dfc592f\": rpc error: code = NotFound desc = could not find container \"663f2b043e406be36e53f35b210bb89be8414904d4ba1d10fb48933f5dfc592f\": container with ID starting with 663f2b043e406be36e53f35b210bb89be8414904d4ba1d10fb48933f5dfc592f not found: ID does not exist" Oct 13 17:41:20 crc kubenswrapper[4720]: I1013 17:41:20.145612 4720 scope.go:117] "RemoveContainer" containerID="60833019df1e6e9d78cded50a1bc7d0fc7fc1ac7990aa51939fedb0d5f63f88b" Oct 13 17:41:20 crc kubenswrapper[4720]: I1013 17:41:20.145879 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60833019df1e6e9d78cded50a1bc7d0fc7fc1ac7990aa51939fedb0d5f63f88b"} err="failed to get container status \"60833019df1e6e9d78cded50a1bc7d0fc7fc1ac7990aa51939fedb0d5f63f88b\": rpc error: code = NotFound desc = could not find container \"60833019df1e6e9d78cded50a1bc7d0fc7fc1ac7990aa51939fedb0d5f63f88b\": container with ID starting with 60833019df1e6e9d78cded50a1bc7d0fc7fc1ac7990aa51939fedb0d5f63f88b not found: ID does not exist" Oct 13 17:41:20 crc kubenswrapper[4720]: I1013 17:41:20.145896 4720 scope.go:117] "RemoveContainer" containerID="9d4fc6fe4c16b453593624c59539e464260a7c4d00254dfca1f55c3d8f9bf7ef" Oct 13 17:41:20 crc kubenswrapper[4720]: I1013 17:41:20.146289 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d4fc6fe4c16b453593624c59539e464260a7c4d00254dfca1f55c3d8f9bf7ef"} err="failed to get container status \"9d4fc6fe4c16b453593624c59539e464260a7c4d00254dfca1f55c3d8f9bf7ef\": rpc error: code = NotFound desc = could not find container \"9d4fc6fe4c16b453593624c59539e464260a7c4d00254dfca1f55c3d8f9bf7ef\": container with ID starting with 9d4fc6fe4c16b453593624c59539e464260a7c4d00254dfca1f55c3d8f9bf7ef not found: ID does not exist" Oct 13 17:41:20 crc kubenswrapper[4720]: I1013 17:41:20.146309 4720 scope.go:117] "RemoveContainer" containerID="6bffc98ff644494f7e39f260402a70dad002a9207c91de28c2144cee8246cb10" Oct 13 17:41:20 crc kubenswrapper[4720]: I1013 17:41:20.146727 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6bffc98ff644494f7e39f260402a70dad002a9207c91de28c2144cee8246cb10"} err="failed to get container status \"6bffc98ff644494f7e39f260402a70dad002a9207c91de28c2144cee8246cb10\": rpc error: code = NotFound desc = could not find container \"6bffc98ff644494f7e39f260402a70dad002a9207c91de28c2144cee8246cb10\": container with ID starting with 6bffc98ff644494f7e39f260402a70dad002a9207c91de28c2144cee8246cb10 not found: ID does not exist" Oct 13 17:41:20 crc kubenswrapper[4720]: I1013 17:41:20.146745 4720 scope.go:117] "RemoveContainer" containerID="663f2b043e406be36e53f35b210bb89be8414904d4ba1d10fb48933f5dfc592f" Oct 13 17:41:20 crc kubenswrapper[4720]: I1013 17:41:20.146959 4720 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"663f2b043e406be36e53f35b210bb89be8414904d4ba1d10fb48933f5dfc592f"} err="failed to get container status \"663f2b043e406be36e53f35b210bb89be8414904d4ba1d10fb48933f5dfc592f\": rpc error: code = NotFound desc = could not find container \"663f2b043e406be36e53f35b210bb89be8414904d4ba1d10fb48933f5dfc592f\": container with ID starting with 663f2b043e406be36e53f35b210bb89be8414904d4ba1d10fb48933f5dfc592f not found: ID does not exist" Oct 13 17:41:20 crc kubenswrapper[4720]: I1013 17:41:20.146975 4720 scope.go:117] "RemoveContainer" containerID="60833019df1e6e9d78cded50a1bc7d0fc7fc1ac7990aa51939fedb0d5f63f88b" Oct 13 17:41:20 crc kubenswrapper[4720]: I1013 17:41:20.147367 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60833019df1e6e9d78cded50a1bc7d0fc7fc1ac7990aa51939fedb0d5f63f88b"} err="failed to get container status \"60833019df1e6e9d78cded50a1bc7d0fc7fc1ac7990aa51939fedb0d5f63f88b\": rpc error: code = NotFound desc = could not find container \"60833019df1e6e9d78cded50a1bc7d0fc7fc1ac7990aa51939fedb0d5f63f88b\": container with ID starting with 60833019df1e6e9d78cded50a1bc7d0fc7fc1ac7990aa51939fedb0d5f63f88b not found: ID does not exist" Oct 13 17:41:20 crc kubenswrapper[4720]: I1013 17:41:20.415286 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 13 17:41:20 crc kubenswrapper[4720]: I1013 17:41:20.436982 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 13 17:41:20 crc kubenswrapper[4720]: I1013 17:41:20.449228 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 13 17:41:20 crc kubenswrapper[4720]: E1013 17:41:20.449608 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bdf4571-07ae-4672-937b-f445bf5578ad" containerName="mariadb-account-create" Oct 13 17:41:20 crc kubenswrapper[4720]: I1013 17:41:20.449625 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bdf4571-07ae-4672-937b-f445bf5578ad" containerName="mariadb-account-create" Oct 13 17:41:20 crc kubenswrapper[4720]: E1013 17:41:20.449639 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0e5acdb-49fb-4d97-8330-61bb2eeba14f" containerName="proxy-httpd" Oct 13 17:41:20 crc kubenswrapper[4720]: I1013 17:41:20.449644 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0e5acdb-49fb-4d97-8330-61bb2eeba14f" containerName="proxy-httpd" Oct 13 17:41:20 crc kubenswrapper[4720]: E1013 17:41:20.449675 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0e5acdb-49fb-4d97-8330-61bb2eeba14f" containerName="ceilometer-central-agent" Oct 13 17:41:20 crc kubenswrapper[4720]: I1013 17:41:20.449680 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0e5acdb-49fb-4d97-8330-61bb2eeba14f" containerName="ceilometer-central-agent" Oct 13 17:41:20 crc kubenswrapper[4720]: E1013 17:41:20.449689 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe437be8-b701-42f7-9aae-ba75ccc6be20" containerName="mariadb-account-create" Oct 13 17:41:20 crc kubenswrapper[4720]: I1013 17:41:20.449694 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe437be8-b701-42f7-9aae-ba75ccc6be20" containerName="mariadb-account-create" Oct 13 17:41:20 crc kubenswrapper[4720]: E1013 17:41:20.449704 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0e5acdb-49fb-4d97-8330-61bb2eeba14f" containerName="sg-core" Oct 13 17:41:20 crc kubenswrapper[4720]: I1013 
17:41:20.449710 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0e5acdb-49fb-4d97-8330-61bb2eeba14f" containerName="sg-core" Oct 13 17:41:20 crc kubenswrapper[4720]: E1013 17:41:20.449723 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0e5acdb-49fb-4d97-8330-61bb2eeba14f" containerName="ceilometer-notification-agent" Oct 13 17:41:20 crc kubenswrapper[4720]: I1013 17:41:20.449728 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0e5acdb-49fb-4d97-8330-61bb2eeba14f" containerName="ceilometer-notification-agent" Oct 13 17:41:20 crc kubenswrapper[4720]: I1013 17:41:20.449895 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0e5acdb-49fb-4d97-8330-61bb2eeba14f" containerName="ceilometer-central-agent" Oct 13 17:41:20 crc kubenswrapper[4720]: I1013 17:41:20.449912 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0e5acdb-49fb-4d97-8330-61bb2eeba14f" containerName="sg-core" Oct 13 17:41:20 crc kubenswrapper[4720]: I1013 17:41:20.449920 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe437be8-b701-42f7-9aae-ba75ccc6be20" containerName="mariadb-account-create" Oct 13 17:41:20 crc kubenswrapper[4720]: I1013 17:41:20.449931 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0e5acdb-49fb-4d97-8330-61bb2eeba14f" containerName="ceilometer-notification-agent" Oct 13 17:41:20 crc kubenswrapper[4720]: I1013 17:41:20.449942 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0e5acdb-49fb-4d97-8330-61bb2eeba14f" containerName="proxy-httpd" Oct 13 17:41:20 crc kubenswrapper[4720]: I1013 17:41:20.449956 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bdf4571-07ae-4672-937b-f445bf5578ad" containerName="mariadb-account-create" Oct 13 17:41:20 crc kubenswrapper[4720]: I1013 17:41:20.451478 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 13 17:41:20 crc kubenswrapper[4720]: I1013 17:41:20.467551 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 13 17:41:20 crc kubenswrapper[4720]: I1013 17:41:20.467710 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 13 17:41:20 crc kubenswrapper[4720]: I1013 17:41:20.477356 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 13 17:41:20 crc kubenswrapper[4720]: I1013 17:41:20.625568 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvzff\" (UniqueName: \"kubernetes.io/projected/6a7af1d1-204f-43e4-8288-3578575d990d-kube-api-access-cvzff\") pod \"ceilometer-0\" (UID: \"6a7af1d1-204f-43e4-8288-3578575d990d\") " pod="openstack/ceilometer-0" Oct 13 17:41:20 crc kubenswrapper[4720]: I1013 17:41:20.625674 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6a7af1d1-204f-43e4-8288-3578575d990d-run-httpd\") pod \"ceilometer-0\" (UID: \"6a7af1d1-204f-43e4-8288-3578575d990d\") " pod="openstack/ceilometer-0" Oct 13 17:41:20 crc kubenswrapper[4720]: I1013 17:41:20.625713 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a7af1d1-204f-43e4-8288-3578575d990d-scripts\") pod \"ceilometer-0\" (UID: \"6a7af1d1-204f-43e4-8288-3578575d990d\") " pod="openstack/ceilometer-0" Oct 13 17:41:20 crc kubenswrapper[4720]: I1013 17:41:20.625761 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a7af1d1-204f-43e4-8288-3578575d990d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6a7af1d1-204f-43e4-8288-3578575d990d\") " pod="openstack/ceilometer-0" Oct 13 17:41:20 crc kubenswrapper[4720]: I1013 17:41:20.625785 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a7af1d1-204f-43e4-8288-3578575d990d-config-data\") pod \"ceilometer-0\" (UID: \"6a7af1d1-204f-43e4-8288-3578575d990d\") " pod="openstack/ceilometer-0" Oct 13 17:41:20 crc kubenswrapper[4720]: I1013 17:41:20.625816 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6a7af1d1-204f-43e4-8288-3578575d990d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6a7af1d1-204f-43e4-8288-3578575d990d\") " pod="openstack/ceilometer-0" Oct 13 17:41:20 crc kubenswrapper[4720]: I1013 17:41:20.625875 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6a7af1d1-204f-43e4-8288-3578575d990d-log-httpd\") pod \"ceilometer-0\" (UID: \"6a7af1d1-204f-43e4-8288-3578575d990d\") " pod="openstack/ceilometer-0" Oct 13 17:41:20 crc kubenswrapper[4720]: I1013 17:41:20.728059 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvzff\" (UniqueName: \"kubernetes.io/projected/6a7af1d1-204f-43e4-8288-3578575d990d-kube-api-access-cvzff\") pod \"ceilometer-0\" (UID: \"6a7af1d1-204f-43e4-8288-3578575d990d\") " pod="openstack/ceilometer-0" Oct 13 17:41:20 crc kubenswrapper[4720]: 
I1013 17:41:20.728498 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6a7af1d1-204f-43e4-8288-3578575d990d-run-httpd\") pod \"ceilometer-0\" (UID: \"6a7af1d1-204f-43e4-8288-3578575d990d\") " pod="openstack/ceilometer-0" Oct 13 17:41:20 crc kubenswrapper[4720]: I1013 17:41:20.728526 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a7af1d1-204f-43e4-8288-3578575d990d-scripts\") pod \"ceilometer-0\" (UID: \"6a7af1d1-204f-43e4-8288-3578575d990d\") " pod="openstack/ceilometer-0" Oct 13 17:41:20 crc kubenswrapper[4720]: I1013 17:41:20.728565 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a7af1d1-204f-43e4-8288-3578575d990d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6a7af1d1-204f-43e4-8288-3578575d990d\") " pod="openstack/ceilometer-0" Oct 13 17:41:20 crc kubenswrapper[4720]: I1013 17:41:20.728587 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a7af1d1-204f-43e4-8288-3578575d990d-config-data\") pod \"ceilometer-0\" (UID: \"6a7af1d1-204f-43e4-8288-3578575d990d\") " pod="openstack/ceilometer-0" Oct 13 17:41:20 crc kubenswrapper[4720]: I1013 17:41:20.728612 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6a7af1d1-204f-43e4-8288-3578575d990d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6a7af1d1-204f-43e4-8288-3578575d990d\") " pod="openstack/ceilometer-0" Oct 13 17:41:20 crc kubenswrapper[4720]: I1013 17:41:20.728658 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6a7af1d1-204f-43e4-8288-3578575d990d-log-httpd\") pod \"ceilometer-0\" (UID: \"6a7af1d1-204f-43e4-8288-3578575d990d\") " pod="openstack/ceilometer-0" Oct 13 17:41:20 crc kubenswrapper[4720]: I1013 17:41:20.729004 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6a7af1d1-204f-43e4-8288-3578575d990d-run-httpd\") pod \"ceilometer-0\" (UID: \"6a7af1d1-204f-43e4-8288-3578575d990d\") " pod="openstack/ceilometer-0" Oct 13 17:41:20 crc kubenswrapper[4720]: I1013 17:41:20.729036 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6a7af1d1-204f-43e4-8288-3578575d990d-log-httpd\") pod \"ceilometer-0\" (UID: \"6a7af1d1-204f-43e4-8288-3578575d990d\") " pod="openstack/ceilometer-0" Oct 13 17:41:20 crc kubenswrapper[4720]: I1013 17:41:20.731813 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6a7af1d1-204f-43e4-8288-3578575d990d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6a7af1d1-204f-43e4-8288-3578575d990d\") " pod="openstack/ceilometer-0" Oct 13 17:41:20 crc kubenswrapper[4720]: I1013 17:41:20.732518 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a7af1d1-204f-43e4-8288-3578575d990d-config-data\") pod \"ceilometer-0\" (UID: \"6a7af1d1-204f-43e4-8288-3578575d990d\") " pod="openstack/ceilometer-0" Oct 13 17:41:20 crc kubenswrapper[4720]: I1013 17:41:20.732795 4720 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a7af1d1-204f-43e4-8288-3578575d990d-scripts\") pod \"ceilometer-0\" (UID: \"6a7af1d1-204f-43e4-8288-3578575d990d\") " pod="openstack/ceilometer-0" Oct 13 17:41:20 crc kubenswrapper[4720]: I1013 17:41:20.734675 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a7af1d1-204f-43e4-8288-3578575d990d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6a7af1d1-204f-43e4-8288-3578575d990d\") " pod="openstack/ceilometer-0" Oct 13 17:41:20 crc kubenswrapper[4720]: I1013 17:41:20.742052 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvzff\" (UniqueName: \"kubernetes.io/projected/6a7af1d1-204f-43e4-8288-3578575d990d-kube-api-access-cvzff\") pod \"ceilometer-0\" (UID: \"6a7af1d1-204f-43e4-8288-3578575d990d\") " pod="openstack/ceilometer-0" Oct 13 17:41:20 crc kubenswrapper[4720]: I1013 17:41:20.797669 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 13 17:41:20 crc kubenswrapper[4720]: I1013 17:41:20.947767 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 13 17:41:21 crc kubenswrapper[4720]: I1013 17:41:21.002513 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"40ee2ba6-d58a-4ee8-b28e-6aeea90de7f6","Type":"ContainerStarted","Data":"56ccd16dc60568c9bf30bab78c31a99512bc44a3aceb7dc3c38d8930c878ce3e"} Oct 13 17:41:21 crc kubenswrapper[4720]: I1013 17:41:21.021539 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.021521864 podStartE2EDuration="4.021521864s" podCreationTimestamp="2025-10-13 17:41:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:41:21.020008845 +0000 UTC m=+1026.477258977" watchObservedRunningTime="2025-10-13 17:41:21.021521864 +0000 UTC m=+1026.478771996" Oct 13 17:41:21 crc kubenswrapper[4720]: I1013 17:41:21.031428 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"65eecc5b-dc6b-482e-bcfe-93915016a1f5","Type":"ContainerStarted","Data":"7f84f40bc82951a0a55a73f344148dcac01977f143ed13b51a731b4414f1e16e"} Oct 13 17:41:21 crc kubenswrapper[4720]: I1013 17:41:21.043073 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a68622d1-743e-45e7-a021-6d766840711a","Type":"ContainerStarted","Data":"3796941a132f852e38cd091f9f912b487656df5dd650f8379f7bb5d7fa997f32"} Oct 13 17:41:21 crc kubenswrapper[4720]: I1013 17:41:21.065892 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.065878205 podStartE2EDuration="3.065878205s" podCreationTimestamp="2025-10-13 17:41:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:41:21.062739154 +0000 UTC m=+1026.519989286" watchObservedRunningTime="2025-10-13 17:41:21.065878205 +0000 UTC m=+1026.523128337" Oct 13 17:41:21 crc kubenswrapper[4720]: I1013 17:41:21.178392 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0e5acdb-49fb-4d97-8330-61bb2eeba14f" 
path="/var/lib/kubelet/pods/c0e5acdb-49fb-4d97-8330-61bb2eeba14f/volumes" Oct 13 17:41:21 crc kubenswrapper[4720]: I1013 17:41:21.254363 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 13 17:41:22 crc kubenswrapper[4720]: I1013 17:41:22.053698 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"65eecc5b-dc6b-482e-bcfe-93915016a1f5","Type":"ContainerStarted","Data":"b123f56833f1f2922af2cd1a1657f823aff9d3d7ec331c10a373ce5a6b0b9a58"} Oct 13 17:41:22 crc kubenswrapper[4720]: I1013 17:41:22.055249 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6a7af1d1-204f-43e4-8288-3578575d990d","Type":"ContainerStarted","Data":"5b0947c0ff42c6f5edccea85417437cb9679d31a2809a3a35c22e436c53ec68a"} Oct 13 17:41:22 crc kubenswrapper[4720]: I1013 17:41:22.055482 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6a7af1d1-204f-43e4-8288-3578575d990d","Type":"ContainerStarted","Data":"f810aebc316af621426d324e5bb8df57d13fc9dbfe1faabef01e4e85f9a4537e"} Oct 13 17:41:22 crc kubenswrapper[4720]: I1013 17:41:22.076221 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.076203091 podStartE2EDuration="4.076203091s" podCreationTimestamp="2025-10-13 17:41:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:41:22.072945447 +0000 UTC m=+1027.530195579" watchObservedRunningTime="2025-10-13 17:41:22.076203091 +0000 UTC m=+1027.533453223" Oct 13 17:41:23 crc kubenswrapper[4720]: I1013 17:41:23.065599 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6a7af1d1-204f-43e4-8288-3578575d990d","Type":"ContainerStarted","Data":"d1c352afe0934150bdbc90807d9ca9f3f69fb82354f493e2b6166274ec5b3ddf"} Oct 13 17:41:23 crc kubenswrapper[4720]: I1013 17:41:23.540304 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Oct 13 17:41:24 crc kubenswrapper[4720]: I1013 17:41:24.077025 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6a7af1d1-204f-43e4-8288-3578575d990d","Type":"ContainerStarted","Data":"b5dc48445018b386c790d837ff949991628645ad1f70545f7f16147ebc9e96a9"} Oct 13 17:41:25 crc kubenswrapper[4720]: I1013 17:41:25.089666 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6a7af1d1-204f-43e4-8288-3578575d990d","Type":"ContainerStarted","Data":"ae99a6eb00d3d13b51c4ec47d55ee6f85eaf45d4d8bcfc34fd00179b918b1819"} Oct 13 17:41:25 crc kubenswrapper[4720]: I1013 17:41:25.090891 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 13 17:41:25 crc kubenswrapper[4720]: I1013 17:41:25.089956 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6a7af1d1-204f-43e4-8288-3578575d990d" containerName="sg-core" containerID="cri-o://b5dc48445018b386c790d837ff949991628645ad1f70545f7f16147ebc9e96a9" gracePeriod=30 Oct 13 17:41:25 crc kubenswrapper[4720]: I1013 17:41:25.089956 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6a7af1d1-204f-43e4-8288-3578575d990d" containerName="proxy-httpd" 
containerID="cri-o://ae99a6eb00d3d13b51c4ec47d55ee6f85eaf45d4d8bcfc34fd00179b918b1819" gracePeriod=30 Oct 13 17:41:25 crc kubenswrapper[4720]: I1013 17:41:25.090011 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6a7af1d1-204f-43e4-8288-3578575d990d" containerName="ceilometer-notification-agent" containerID="cri-o://d1c352afe0934150bdbc90807d9ca9f3f69fb82354f493e2b6166274ec5b3ddf" gracePeriod=30 Oct 13 17:41:25 crc kubenswrapper[4720]: I1013 17:41:25.089888 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6a7af1d1-204f-43e4-8288-3578575d990d" containerName="ceilometer-central-agent" containerID="cri-o://5b0947c0ff42c6f5edccea85417437cb9679d31a2809a3a35c22e436c53ec68a" gracePeriod=30 Oct 13 17:41:25 crc kubenswrapper[4720]: I1013 17:41:25.117900 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.727843018 podStartE2EDuration="5.117881837s" podCreationTimestamp="2025-10-13 17:41:20 +0000 UTC" firstStartedPulling="2025-10-13 17:41:21.255988137 +0000 UTC m=+1026.713238269" lastFinishedPulling="2025-10-13 17:41:24.646026916 +0000 UTC m=+1030.103277088" observedRunningTime="2025-10-13 17:41:25.117446706 +0000 UTC m=+1030.574696848" watchObservedRunningTime="2025-10-13 17:41:25.117881837 +0000 UTC m=+1030.575131979" Oct 13 17:41:25 crc kubenswrapper[4720]: I1013 17:41:25.494227 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-5zp5c"] Oct 13 17:41:25 crc kubenswrapper[4720]: I1013 17:41:25.495912 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-5zp5c" Oct 13 17:41:25 crc kubenswrapper[4720]: I1013 17:41:25.498938 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-dctrc" Oct 13 17:41:25 crc kubenswrapper[4720]: I1013 17:41:25.499124 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Oct 13 17:41:25 crc kubenswrapper[4720]: I1013 17:41:25.502912 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Oct 13 17:41:25 crc kubenswrapper[4720]: I1013 17:41:25.513242 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-5zp5c"] Oct 13 17:41:25 crc kubenswrapper[4720]: I1013 17:41:25.621418 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6dabb89f-c7f4-46e9-a7e8-0722a6f73bed-scripts\") pod \"nova-cell0-conductor-db-sync-5zp5c\" (UID: \"6dabb89f-c7f4-46e9-a7e8-0722a6f73bed\") " pod="openstack/nova-cell0-conductor-db-sync-5zp5c" Oct 13 17:41:25 crc kubenswrapper[4720]: I1013 17:41:25.621547 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9nqp\" (UniqueName: \"kubernetes.io/projected/6dabb89f-c7f4-46e9-a7e8-0722a6f73bed-kube-api-access-c9nqp\") pod \"nova-cell0-conductor-db-sync-5zp5c\" (UID: \"6dabb89f-c7f4-46e9-a7e8-0722a6f73bed\") " pod="openstack/nova-cell0-conductor-db-sync-5zp5c" Oct 13 17:41:25 crc kubenswrapper[4720]: I1013 17:41:25.621590 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/6dabb89f-c7f4-46e9-a7e8-0722a6f73bed-config-data\") pod \"nova-cell0-conductor-db-sync-5zp5c\" (UID: \"6dabb89f-c7f4-46e9-a7e8-0722a6f73bed\") " pod="openstack/nova-cell0-conductor-db-sync-5zp5c" Oct 13 17:41:25 crc kubenswrapper[4720]: I1013 17:41:25.621628 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dabb89f-c7f4-46e9-a7e8-0722a6f73bed-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-5zp5c\" (UID: \"6dabb89f-c7f4-46e9-a7e8-0722a6f73bed\") " pod="openstack/nova-cell0-conductor-db-sync-5zp5c" Oct 13 17:41:25 crc kubenswrapper[4720]: I1013 17:41:25.723624 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9nqp\" (UniqueName: \"kubernetes.io/projected/6dabb89f-c7f4-46e9-a7e8-0722a6f73bed-kube-api-access-c9nqp\") pod \"nova-cell0-conductor-db-sync-5zp5c\" (UID: \"6dabb89f-c7f4-46e9-a7e8-0722a6f73bed\") " pod="openstack/nova-cell0-conductor-db-sync-5zp5c" Oct 13 17:41:25 crc kubenswrapper[4720]: I1013 17:41:25.723688 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6dabb89f-c7f4-46e9-a7e8-0722a6f73bed-config-data\") pod \"nova-cell0-conductor-db-sync-5zp5c\" (UID: \"6dabb89f-c7f4-46e9-a7e8-0722a6f73bed\") " pod="openstack/nova-cell0-conductor-db-sync-5zp5c" Oct 13 17:41:25 crc kubenswrapper[4720]: I1013 17:41:25.723723 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dabb89f-c7f4-46e9-a7e8-0722a6f73bed-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-5zp5c\" (UID: \"6dabb89f-c7f4-46e9-a7e8-0722a6f73bed\") " pod="openstack/nova-cell0-conductor-db-sync-5zp5c" Oct 13 17:41:25 crc kubenswrapper[4720]: I1013 17:41:25.723776 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6dabb89f-c7f4-46e9-a7e8-0722a6f73bed-scripts\") pod \"nova-cell0-conductor-db-sync-5zp5c\" (UID: \"6dabb89f-c7f4-46e9-a7e8-0722a6f73bed\") " pod="openstack/nova-cell0-conductor-db-sync-5zp5c" Oct 13 17:41:25 crc kubenswrapper[4720]: I1013 17:41:25.729350 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dabb89f-c7f4-46e9-a7e8-0722a6f73bed-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-5zp5c\" (UID: \"6dabb89f-c7f4-46e9-a7e8-0722a6f73bed\") " pod="openstack/nova-cell0-conductor-db-sync-5zp5c" Oct 13 17:41:25 crc kubenswrapper[4720]: I1013 17:41:25.729784 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6dabb89f-c7f4-46e9-a7e8-0722a6f73bed-scripts\") pod \"nova-cell0-conductor-db-sync-5zp5c\" (UID: \"6dabb89f-c7f4-46e9-a7e8-0722a6f73bed\") " pod="openstack/nova-cell0-conductor-db-sync-5zp5c" Oct 13 17:41:25 crc kubenswrapper[4720]: I1013 17:41:25.738260 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6dabb89f-c7f4-46e9-a7e8-0722a6f73bed-config-data\") pod \"nova-cell0-conductor-db-sync-5zp5c\" (UID: \"6dabb89f-c7f4-46e9-a7e8-0722a6f73bed\") " pod="openstack/nova-cell0-conductor-db-sync-5zp5c" Oct 13 17:41:25 crc kubenswrapper[4720]: I1013 17:41:25.741006 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-c9nqp\" (UniqueName: \"kubernetes.io/projected/6dabb89f-c7f4-46e9-a7e8-0722a6f73bed-kube-api-access-c9nqp\") pod \"nova-cell0-conductor-db-sync-5zp5c\" (UID: \"6dabb89f-c7f4-46e9-a7e8-0722a6f73bed\") " pod="openstack/nova-cell0-conductor-db-sync-5zp5c" Oct 13 17:41:25 crc kubenswrapper[4720]: I1013 17:41:25.814542 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-5zp5c" Oct 13 17:41:26 crc kubenswrapper[4720]: I1013 17:41:26.100867 4720 generic.go:334] "Generic (PLEG): container finished" podID="6a7af1d1-204f-43e4-8288-3578575d990d" containerID="ae99a6eb00d3d13b51c4ec47d55ee6f85eaf45d4d8bcfc34fd00179b918b1819" exitCode=0 Oct 13 17:41:26 crc kubenswrapper[4720]: I1013 17:41:26.101138 4720 generic.go:334] "Generic (PLEG): container finished" podID="6a7af1d1-204f-43e4-8288-3578575d990d" containerID="b5dc48445018b386c790d837ff949991628645ad1f70545f7f16147ebc9e96a9" exitCode=2 Oct 13 17:41:26 crc kubenswrapper[4720]: I1013 17:41:26.101149 4720 generic.go:334] "Generic (PLEG): container finished" podID="6a7af1d1-204f-43e4-8288-3578575d990d" containerID="d1c352afe0934150bdbc90807d9ca9f3f69fb82354f493e2b6166274ec5b3ddf" exitCode=0 Oct 13 17:41:26 crc kubenswrapper[4720]: I1013 17:41:26.101060 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6a7af1d1-204f-43e4-8288-3578575d990d","Type":"ContainerDied","Data":"ae99a6eb00d3d13b51c4ec47d55ee6f85eaf45d4d8bcfc34fd00179b918b1819"} Oct 13 17:41:26 crc kubenswrapper[4720]: I1013 17:41:26.101177 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6a7af1d1-204f-43e4-8288-3578575d990d","Type":"ContainerDied","Data":"b5dc48445018b386c790d837ff949991628645ad1f70545f7f16147ebc9e96a9"} Oct 13 17:41:26 crc kubenswrapper[4720]: I1013 17:41:26.101199 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6a7af1d1-204f-43e4-8288-3578575d990d","Type":"ContainerDied","Data":"d1c352afe0934150bdbc90807d9ca9f3f69fb82354f493e2b6166274ec5b3ddf"} Oct 13 17:41:26 crc kubenswrapper[4720]: I1013 17:41:26.333030 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-5zp5c"] Oct 13 17:41:26 crc kubenswrapper[4720]: W1013 17:41:26.387892 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6dabb89f_c7f4_46e9_a7e8_0722a6f73bed.slice/crio-4f2b64c7141cf4064448931b92e8543f336a7ded0f8b1272e8f3466c7de83733 WatchSource:0}: Error finding container 4f2b64c7141cf4064448931b92e8543f336a7ded0f8b1272e8f3466c7de83733: Status 404 returned error can't find the container with id 4f2b64c7141cf4064448931b92e8543f336a7ded0f8b1272e8f3466c7de83733 Oct 13 17:41:26 crc kubenswrapper[4720]: I1013 17:41:26.762429 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 13 17:41:26 crc kubenswrapper[4720]: I1013 17:41:26.849838 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvzff\" (UniqueName: \"kubernetes.io/projected/6a7af1d1-204f-43e4-8288-3578575d990d-kube-api-access-cvzff\") pod \"6a7af1d1-204f-43e4-8288-3578575d990d\" (UID: \"6a7af1d1-204f-43e4-8288-3578575d990d\") " Oct 13 17:41:26 crc kubenswrapper[4720]: I1013 17:41:26.849891 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6a7af1d1-204f-43e4-8288-3578575d990d-run-httpd\") pod \"6a7af1d1-204f-43e4-8288-3578575d990d\" (UID: \"6a7af1d1-204f-43e4-8288-3578575d990d\") " Oct 13 17:41:26 crc kubenswrapper[4720]: I1013 17:41:26.850011 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a7af1d1-204f-43e4-8288-3578575d990d-config-data\") pod \"6a7af1d1-204f-43e4-8288-3578575d990d\" (UID: \"6a7af1d1-204f-43e4-8288-3578575d990d\") " Oct 13 17:41:26 crc kubenswrapper[4720]: I1013 17:41:26.850099 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a7af1d1-204f-43e4-8288-3578575d990d-scripts\") pod \"6a7af1d1-204f-43e4-8288-3578575d990d\" (UID: \"6a7af1d1-204f-43e4-8288-3578575d990d\") " Oct 13 17:41:26 crc kubenswrapper[4720]: I1013 17:41:26.850119 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a7af1d1-204f-43e4-8288-3578575d990d-combined-ca-bundle\") pod \"6a7af1d1-204f-43e4-8288-3578575d990d\" (UID: \"6a7af1d1-204f-43e4-8288-3578575d990d\") " Oct 13 17:41:26 crc kubenswrapper[4720]: I1013 17:41:26.850151 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6a7af1d1-204f-43e4-8288-3578575d990d-sg-core-conf-yaml\") pod \"6a7af1d1-204f-43e4-8288-3578575d990d\" (UID: \"6a7af1d1-204f-43e4-8288-3578575d990d\") " Oct 13 17:41:26 crc kubenswrapper[4720]: I1013 17:41:26.850222 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6a7af1d1-204f-43e4-8288-3578575d990d-log-httpd\") pod \"6a7af1d1-204f-43e4-8288-3578575d990d\" (UID: \"6a7af1d1-204f-43e4-8288-3578575d990d\") " Oct 13 17:41:26 crc kubenswrapper[4720]: I1013 17:41:26.850974 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a7af1d1-204f-43e4-8288-3578575d990d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "6a7af1d1-204f-43e4-8288-3578575d990d" (UID: "6a7af1d1-204f-43e4-8288-3578575d990d"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 17:41:26 crc kubenswrapper[4720]: I1013 17:41:26.851506 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a7af1d1-204f-43e4-8288-3578575d990d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "6a7af1d1-204f-43e4-8288-3578575d990d" (UID: "6a7af1d1-204f-43e4-8288-3578575d990d"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 17:41:26 crc kubenswrapper[4720]: I1013 17:41:26.864429 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a7af1d1-204f-43e4-8288-3578575d990d-scripts" (OuterVolumeSpecName: "scripts") pod "6a7af1d1-204f-43e4-8288-3578575d990d" (UID: "6a7af1d1-204f-43e4-8288-3578575d990d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:41:26 crc kubenswrapper[4720]: I1013 17:41:26.865171 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a7af1d1-204f-43e4-8288-3578575d990d-kube-api-access-cvzff" (OuterVolumeSpecName: "kube-api-access-cvzff") pod "6a7af1d1-204f-43e4-8288-3578575d990d" (UID: "6a7af1d1-204f-43e4-8288-3578575d990d"). InnerVolumeSpecName "kube-api-access-cvzff". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:41:26 crc kubenswrapper[4720]: I1013 17:41:26.897127 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a7af1d1-204f-43e4-8288-3578575d990d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "6a7af1d1-204f-43e4-8288-3578575d990d" (UID: "6a7af1d1-204f-43e4-8288-3578575d990d"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:41:26 crc kubenswrapper[4720]: I1013 17:41:26.926899 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a7af1d1-204f-43e4-8288-3578575d990d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6a7af1d1-204f-43e4-8288-3578575d990d" (UID: "6a7af1d1-204f-43e4-8288-3578575d990d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:41:26 crc kubenswrapper[4720]: I1013 17:41:26.940224 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a7af1d1-204f-43e4-8288-3578575d990d-config-data" (OuterVolumeSpecName: "config-data") pod "6a7af1d1-204f-43e4-8288-3578575d990d" (UID: "6a7af1d1-204f-43e4-8288-3578575d990d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:41:26 crc kubenswrapper[4720]: I1013 17:41:26.952632 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a7af1d1-204f-43e4-8288-3578575d990d-scripts\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:26 crc kubenswrapper[4720]: I1013 17:41:26.952690 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a7af1d1-204f-43e4-8288-3578575d990d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:26 crc kubenswrapper[4720]: I1013 17:41:26.952704 4720 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6a7af1d1-204f-43e4-8288-3578575d990d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:26 crc kubenswrapper[4720]: I1013 17:41:26.952715 4720 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6a7af1d1-204f-43e4-8288-3578575d990d-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:26 crc kubenswrapper[4720]: I1013 17:41:26.952726 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cvzff\" (UniqueName: \"kubernetes.io/projected/6a7af1d1-204f-43e4-8288-3578575d990d-kube-api-access-cvzff\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:26 crc kubenswrapper[4720]: I1013 17:41:26.952765 4720 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6a7af1d1-204f-43e4-8288-3578575d990d-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:26 crc kubenswrapper[4720]: I1013 17:41:26.952776 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a7af1d1-204f-43e4-8288-3578575d990d-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:27 crc kubenswrapper[4720]: I1013 17:41:27.112632 4720 generic.go:334] "Generic (PLEG): container finished" podID="6a7af1d1-204f-43e4-8288-3578575d990d" containerID="5b0947c0ff42c6f5edccea85417437cb9679d31a2809a3a35c22e436c53ec68a" exitCode=0 Oct 13 17:41:27 crc kubenswrapper[4720]: I1013 17:41:27.112802 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6a7af1d1-204f-43e4-8288-3578575d990d","Type":"ContainerDied","Data":"5b0947c0ff42c6f5edccea85417437cb9679d31a2809a3a35c22e436c53ec68a"} Oct 13 17:41:27 crc kubenswrapper[4720]: I1013 17:41:27.112992 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6a7af1d1-204f-43e4-8288-3578575d990d","Type":"ContainerDied","Data":"f810aebc316af621426d324e5bb8df57d13fc9dbfe1faabef01e4e85f9a4537e"} Oct 13 17:41:27 crc kubenswrapper[4720]: I1013 17:41:27.113013 4720 scope.go:117] "RemoveContainer" containerID="ae99a6eb00d3d13b51c4ec47d55ee6f85eaf45d4d8bcfc34fd00179b918b1819" Oct 13 17:41:27 crc kubenswrapper[4720]: I1013 17:41:27.112872 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 13 17:41:27 crc kubenswrapper[4720]: I1013 17:41:27.115144 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-5zp5c" event={"ID":"6dabb89f-c7f4-46e9-a7e8-0722a6f73bed","Type":"ContainerStarted","Data":"4f2b64c7141cf4064448931b92e8543f336a7ded0f8b1272e8f3466c7de83733"} Oct 13 17:41:27 crc kubenswrapper[4720]: I1013 17:41:27.136773 4720 scope.go:117] "RemoveContainer" containerID="b5dc48445018b386c790d837ff949991628645ad1f70545f7f16147ebc9e96a9" Oct 13 17:41:27 crc kubenswrapper[4720]: I1013 17:41:27.148910 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 13 17:41:27 crc kubenswrapper[4720]: I1013 17:41:27.160996 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 13 17:41:27 crc kubenswrapper[4720]: I1013 17:41:27.168040 4720 scope.go:117] "RemoveContainer" containerID="d1c352afe0934150bdbc90807d9ca9f3f69fb82354f493e2b6166274ec5b3ddf" Oct 13 17:41:27 crc kubenswrapper[4720]: I1013 17:41:27.186066 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a7af1d1-204f-43e4-8288-3578575d990d" path="/var/lib/kubelet/pods/6a7af1d1-204f-43e4-8288-3578575d990d/volumes" Oct 13 17:41:27 crc kubenswrapper[4720]: I1013 17:41:27.187220 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 13 17:41:27 crc kubenswrapper[4720]: E1013 17:41:27.188003 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a7af1d1-204f-43e4-8288-3578575d990d" containerName="proxy-httpd" Oct 13 17:41:27 crc kubenswrapper[4720]: I1013 17:41:27.188021 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a7af1d1-204f-43e4-8288-3578575d990d" containerName="proxy-httpd" Oct 13 17:41:27 crc kubenswrapper[4720]: E1013 17:41:27.188038 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a7af1d1-204f-43e4-8288-3578575d990d" containerName="ceilometer-central-agent" Oct 13 17:41:27 crc kubenswrapper[4720]: I1013 17:41:27.188046 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a7af1d1-204f-43e4-8288-3578575d990d" containerName="ceilometer-central-agent" Oct 13 17:41:27 crc kubenswrapper[4720]: E1013 17:41:27.188070 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a7af1d1-204f-43e4-8288-3578575d990d" containerName="ceilometer-notification-agent" Oct 13 17:41:27 crc kubenswrapper[4720]: I1013 17:41:27.188077 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a7af1d1-204f-43e4-8288-3578575d990d" containerName="ceilometer-notification-agent" Oct 13 17:41:27 crc kubenswrapper[4720]: E1013 17:41:27.188089 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a7af1d1-204f-43e4-8288-3578575d990d" containerName="sg-core" Oct 13 17:41:27 crc kubenswrapper[4720]: I1013 17:41:27.188096 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a7af1d1-204f-43e4-8288-3578575d990d" containerName="sg-core" Oct 13 17:41:27 crc kubenswrapper[4720]: I1013 17:41:27.188531 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a7af1d1-204f-43e4-8288-3578575d990d" containerName="ceilometer-notification-agent" Oct 13 17:41:27 crc kubenswrapper[4720]: I1013 17:41:27.188550 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a7af1d1-204f-43e4-8288-3578575d990d" containerName="ceilometer-central-agent" Oct 13 17:41:27 crc kubenswrapper[4720]: I1013 17:41:27.188566 4720 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="6a7af1d1-204f-43e4-8288-3578575d990d" containerName="sg-core" Oct 13 17:41:27 crc kubenswrapper[4720]: I1013 17:41:27.188578 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a7af1d1-204f-43e4-8288-3578575d990d" containerName="proxy-httpd" Oct 13 17:41:27 crc kubenswrapper[4720]: I1013 17:41:27.190497 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 13 17:41:27 crc kubenswrapper[4720]: I1013 17:41:27.192497 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 13 17:41:27 crc kubenswrapper[4720]: I1013 17:41:27.192562 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 13 17:41:27 crc kubenswrapper[4720]: I1013 17:41:27.195217 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 13 17:41:27 crc kubenswrapper[4720]: I1013 17:41:27.199380 4720 scope.go:117] "RemoveContainer" containerID="5b0947c0ff42c6f5edccea85417437cb9679d31a2809a3a35c22e436c53ec68a" Oct 13 17:41:27 crc kubenswrapper[4720]: I1013 17:41:27.228676 4720 scope.go:117] "RemoveContainer" containerID="ae99a6eb00d3d13b51c4ec47d55ee6f85eaf45d4d8bcfc34fd00179b918b1819" Oct 13 17:41:27 crc kubenswrapper[4720]: E1013 17:41:27.229235 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae99a6eb00d3d13b51c4ec47d55ee6f85eaf45d4d8bcfc34fd00179b918b1819\": container with ID starting with ae99a6eb00d3d13b51c4ec47d55ee6f85eaf45d4d8bcfc34fd00179b918b1819 not found: ID does not exist" containerID="ae99a6eb00d3d13b51c4ec47d55ee6f85eaf45d4d8bcfc34fd00179b918b1819" Oct 13 17:41:27 crc kubenswrapper[4720]: I1013 17:41:27.229278 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae99a6eb00d3d13b51c4ec47d55ee6f85eaf45d4d8bcfc34fd00179b918b1819"} err="failed to get container status \"ae99a6eb00d3d13b51c4ec47d55ee6f85eaf45d4d8bcfc34fd00179b918b1819\": rpc error: code = NotFound desc = could not find container \"ae99a6eb00d3d13b51c4ec47d55ee6f85eaf45d4d8bcfc34fd00179b918b1819\": container with ID starting with ae99a6eb00d3d13b51c4ec47d55ee6f85eaf45d4d8bcfc34fd00179b918b1819 not found: ID does not exist" Oct 13 17:41:27 crc kubenswrapper[4720]: I1013 17:41:27.229302 4720 scope.go:117] "RemoveContainer" containerID="b5dc48445018b386c790d837ff949991628645ad1f70545f7f16147ebc9e96a9" Oct 13 17:41:27 crc kubenswrapper[4720]: E1013 17:41:27.229640 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5dc48445018b386c790d837ff949991628645ad1f70545f7f16147ebc9e96a9\": container with ID starting with b5dc48445018b386c790d837ff949991628645ad1f70545f7f16147ebc9e96a9 not found: ID does not exist" containerID="b5dc48445018b386c790d837ff949991628645ad1f70545f7f16147ebc9e96a9" Oct 13 17:41:27 crc kubenswrapper[4720]: I1013 17:41:27.229678 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5dc48445018b386c790d837ff949991628645ad1f70545f7f16147ebc9e96a9"} err="failed to get container status \"b5dc48445018b386c790d837ff949991628645ad1f70545f7f16147ebc9e96a9\": rpc error: code = NotFound desc = could not find container \"b5dc48445018b386c790d837ff949991628645ad1f70545f7f16147ebc9e96a9\": container with ID starting with 
b5dc48445018b386c790d837ff949991628645ad1f70545f7f16147ebc9e96a9 not found: ID does not exist" Oct 13 17:41:27 crc kubenswrapper[4720]: I1013 17:41:27.229704 4720 scope.go:117] "RemoveContainer" containerID="d1c352afe0934150bdbc90807d9ca9f3f69fb82354f493e2b6166274ec5b3ddf" Oct 13 17:41:27 crc kubenswrapper[4720]: E1013 17:41:27.229922 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1c352afe0934150bdbc90807d9ca9f3f69fb82354f493e2b6166274ec5b3ddf\": container with ID starting with d1c352afe0934150bdbc90807d9ca9f3f69fb82354f493e2b6166274ec5b3ddf not found: ID does not exist" containerID="d1c352afe0934150bdbc90807d9ca9f3f69fb82354f493e2b6166274ec5b3ddf" Oct 13 17:41:27 crc kubenswrapper[4720]: I1013 17:41:27.229947 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1c352afe0934150bdbc90807d9ca9f3f69fb82354f493e2b6166274ec5b3ddf"} err="failed to get container status \"d1c352afe0934150bdbc90807d9ca9f3f69fb82354f493e2b6166274ec5b3ddf\": rpc error: code = NotFound desc = could not find container \"d1c352afe0934150bdbc90807d9ca9f3f69fb82354f493e2b6166274ec5b3ddf\": container with ID starting with d1c352afe0934150bdbc90807d9ca9f3f69fb82354f493e2b6166274ec5b3ddf not found: ID does not exist" Oct 13 17:41:27 crc kubenswrapper[4720]: I1013 17:41:27.229962 4720 scope.go:117] "RemoveContainer" containerID="5b0947c0ff42c6f5edccea85417437cb9679d31a2809a3a35c22e436c53ec68a" Oct 13 17:41:27 crc kubenswrapper[4720]: E1013 17:41:27.230412 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b0947c0ff42c6f5edccea85417437cb9679d31a2809a3a35c22e436c53ec68a\": container with ID starting with 5b0947c0ff42c6f5edccea85417437cb9679d31a2809a3a35c22e436c53ec68a not found: ID does not exist" containerID="5b0947c0ff42c6f5edccea85417437cb9679d31a2809a3a35c22e436c53ec68a" Oct 13 17:41:27 crc kubenswrapper[4720]: I1013 17:41:27.230437 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b0947c0ff42c6f5edccea85417437cb9679d31a2809a3a35c22e436c53ec68a"} err="failed to get container status \"5b0947c0ff42c6f5edccea85417437cb9679d31a2809a3a35c22e436c53ec68a\": rpc error: code = NotFound desc = could not find container \"5b0947c0ff42c6f5edccea85417437cb9679d31a2809a3a35c22e436c53ec68a\": container with ID starting with 5b0947c0ff42c6f5edccea85417437cb9679d31a2809a3a35c22e436c53ec68a not found: ID does not exist" Oct 13 17:41:27 crc kubenswrapper[4720]: I1013 17:41:27.257163 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eead66d8-ff12-453d-b005-460cc04aea06-scripts\") pod \"ceilometer-0\" (UID: \"eead66d8-ff12-453d-b005-460cc04aea06\") " pod="openstack/ceilometer-0" Oct 13 17:41:27 crc kubenswrapper[4720]: I1013 17:41:27.257245 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eead66d8-ff12-453d-b005-460cc04aea06-run-httpd\") pod \"ceilometer-0\" (UID: \"eead66d8-ff12-453d-b005-460cc04aea06\") " pod="openstack/ceilometer-0" Oct 13 17:41:27 crc kubenswrapper[4720]: I1013 17:41:27.257273 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/eead66d8-ff12-453d-b005-460cc04aea06-config-data\") pod \"ceilometer-0\" (UID: \"eead66d8-ff12-453d-b005-460cc04aea06\") " pod="openstack/ceilometer-0" Oct 13 17:41:27 crc kubenswrapper[4720]: I1013 17:41:27.257306 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/eead66d8-ff12-453d-b005-460cc04aea06-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"eead66d8-ff12-453d-b005-460cc04aea06\") " pod="openstack/ceilometer-0" Oct 13 17:41:27 crc kubenswrapper[4720]: I1013 17:41:27.257407 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eead66d8-ff12-453d-b005-460cc04aea06-log-httpd\") pod \"ceilometer-0\" (UID: \"eead66d8-ff12-453d-b005-460cc04aea06\") " pod="openstack/ceilometer-0" Oct 13 17:41:27 crc kubenswrapper[4720]: I1013 17:41:27.257431 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4m9p7\" (UniqueName: \"kubernetes.io/projected/eead66d8-ff12-453d-b005-460cc04aea06-kube-api-access-4m9p7\") pod \"ceilometer-0\" (UID: \"eead66d8-ff12-453d-b005-460cc04aea06\") " pod="openstack/ceilometer-0" Oct 13 17:41:27 crc kubenswrapper[4720]: I1013 17:41:27.257497 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eead66d8-ff12-453d-b005-460cc04aea06-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"eead66d8-ff12-453d-b005-460cc04aea06\") " pod="openstack/ceilometer-0" Oct 13 17:41:27 crc kubenswrapper[4720]: I1013 17:41:27.358736 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eead66d8-ff12-453d-b005-460cc04aea06-scripts\") pod \"ceilometer-0\" (UID: \"eead66d8-ff12-453d-b005-460cc04aea06\") " pod="openstack/ceilometer-0" Oct 13 17:41:27 crc kubenswrapper[4720]: I1013 17:41:27.358781 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eead66d8-ff12-453d-b005-460cc04aea06-run-httpd\") pod \"ceilometer-0\" (UID: \"eead66d8-ff12-453d-b005-460cc04aea06\") " pod="openstack/ceilometer-0" Oct 13 17:41:27 crc kubenswrapper[4720]: I1013 17:41:27.358804 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eead66d8-ff12-453d-b005-460cc04aea06-config-data\") pod \"ceilometer-0\" (UID: \"eead66d8-ff12-453d-b005-460cc04aea06\") " pod="openstack/ceilometer-0" Oct 13 17:41:27 crc kubenswrapper[4720]: I1013 17:41:27.358833 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/eead66d8-ff12-453d-b005-460cc04aea06-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"eead66d8-ff12-453d-b005-460cc04aea06\") " pod="openstack/ceilometer-0" Oct 13 17:41:27 crc kubenswrapper[4720]: I1013 17:41:27.358886 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eead66d8-ff12-453d-b005-460cc04aea06-log-httpd\") pod \"ceilometer-0\" (UID: \"eead66d8-ff12-453d-b005-460cc04aea06\") " pod="openstack/ceilometer-0" Oct 13 17:41:27 crc kubenswrapper[4720]: I1013 17:41:27.358904 4720 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-4m9p7\" (UniqueName: \"kubernetes.io/projected/eead66d8-ff12-453d-b005-460cc04aea06-kube-api-access-4m9p7\") pod \"ceilometer-0\" (UID: \"eead66d8-ff12-453d-b005-460cc04aea06\") " pod="openstack/ceilometer-0" Oct 13 17:41:27 crc kubenswrapper[4720]: I1013 17:41:27.358959 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eead66d8-ff12-453d-b005-460cc04aea06-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"eead66d8-ff12-453d-b005-460cc04aea06\") " pod="openstack/ceilometer-0" Oct 13 17:41:27 crc kubenswrapper[4720]: I1013 17:41:27.360132 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eead66d8-ff12-453d-b005-460cc04aea06-log-httpd\") pod \"ceilometer-0\" (UID: \"eead66d8-ff12-453d-b005-460cc04aea06\") " pod="openstack/ceilometer-0" Oct 13 17:41:27 crc kubenswrapper[4720]: I1013 17:41:27.360434 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eead66d8-ff12-453d-b005-460cc04aea06-run-httpd\") pod \"ceilometer-0\" (UID: \"eead66d8-ff12-453d-b005-460cc04aea06\") " pod="openstack/ceilometer-0" Oct 13 17:41:27 crc kubenswrapper[4720]: I1013 17:41:27.363552 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/eead66d8-ff12-453d-b005-460cc04aea06-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"eead66d8-ff12-453d-b005-460cc04aea06\") " pod="openstack/ceilometer-0" Oct 13 17:41:27 crc kubenswrapper[4720]: I1013 17:41:27.364027 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eead66d8-ff12-453d-b005-460cc04aea06-scripts\") pod \"ceilometer-0\" (UID: \"eead66d8-ff12-453d-b005-460cc04aea06\") " pod="openstack/ceilometer-0" Oct 13 17:41:27 crc kubenswrapper[4720]: I1013 17:41:27.364400 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eead66d8-ff12-453d-b005-460cc04aea06-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"eead66d8-ff12-453d-b005-460cc04aea06\") " pod="openstack/ceilometer-0" Oct 13 17:41:27 crc kubenswrapper[4720]: I1013 17:41:27.364828 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eead66d8-ff12-453d-b005-460cc04aea06-config-data\") pod \"ceilometer-0\" (UID: \"eead66d8-ff12-453d-b005-460cc04aea06\") " pod="openstack/ceilometer-0" Oct 13 17:41:27 crc kubenswrapper[4720]: I1013 17:41:27.375550 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4m9p7\" (UniqueName: \"kubernetes.io/projected/eead66d8-ff12-453d-b005-460cc04aea06-kube-api-access-4m9p7\") pod \"ceilometer-0\" (UID: \"eead66d8-ff12-453d-b005-460cc04aea06\") " pod="openstack/ceilometer-0" Oct 13 17:41:27 crc kubenswrapper[4720]: I1013 17:41:27.510025 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 13 17:41:28 crc kubenswrapper[4720]: I1013 17:41:28.002330 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 13 17:41:28 crc kubenswrapper[4720]: W1013 17:41:28.010450 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeead66d8_ff12_453d_b005_460cc04aea06.slice/crio-0b0e02c5590d811b016f903b8f1df707acbd063b0eeb061cc2032f244ce36b3d WatchSource:0}: Error finding container 0b0e02c5590d811b016f903b8f1df707acbd063b0eeb061cc2032f244ce36b3d: Status 404 returned error can't find the container with id 0b0e02c5590d811b016f903b8f1df707acbd063b0eeb061cc2032f244ce36b3d Oct 13 17:41:28 crc kubenswrapper[4720]: I1013 17:41:28.127266 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eead66d8-ff12-453d-b005-460cc04aea06","Type":"ContainerStarted","Data":"0b0e02c5590d811b016f903b8f1df707acbd063b0eeb061cc2032f244ce36b3d"} Oct 13 17:41:28 crc kubenswrapper[4720]: I1013 17:41:28.145156 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Oct 13 17:41:28 crc kubenswrapper[4720]: I1013 17:41:28.145261 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Oct 13 17:41:28 crc kubenswrapper[4720]: I1013 17:41:28.176812 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Oct 13 17:41:28 crc kubenswrapper[4720]: I1013 17:41:28.197160 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Oct 13 17:41:28 crc kubenswrapper[4720]: I1013 17:41:28.746040 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Oct 13 17:41:28 crc kubenswrapper[4720]: I1013 17:41:28.876638 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Oct 13 17:41:28 crc kubenswrapper[4720]: I1013 17:41:28.876679 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Oct 13 17:41:28 crc kubenswrapper[4720]: I1013 17:41:28.925631 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Oct 13 17:41:28 crc kubenswrapper[4720]: I1013 17:41:28.937842 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Oct 13 17:41:29 crc kubenswrapper[4720]: I1013 17:41:29.139638 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eead66d8-ff12-453d-b005-460cc04aea06","Type":"ContainerStarted","Data":"661eb9bb4d18f43b3c9f6a0be91b6061be05a71447416de90d30500701e00c14"} Oct 13 17:41:29 crc kubenswrapper[4720]: I1013 17:41:29.140077 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Oct 13 17:41:29 crc kubenswrapper[4720]: I1013 17:41:29.140092 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Oct 13 17:41:29 crc kubenswrapper[4720]: I1013 17:41:29.140101 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Oct 13 17:41:29 crc 
kubenswrapper[4720]: I1013 17:41:29.140110 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Oct 13 17:41:30 crc kubenswrapper[4720]: I1013 17:41:30.719064 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 13 17:41:31 crc kubenswrapper[4720]: I1013 17:41:31.031842 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Oct 13 17:41:31 crc kubenswrapper[4720]: I1013 17:41:31.044247 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Oct 13 17:41:31 crc kubenswrapper[4720]: I1013 17:41:31.045133 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Oct 13 17:41:31 crc kubenswrapper[4720]: I1013 17:41:31.059605 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Oct 13 17:41:35 crc kubenswrapper[4720]: I1013 17:41:35.241686 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-5zp5c" event={"ID":"6dabb89f-c7f4-46e9-a7e8-0722a6f73bed","Type":"ContainerStarted","Data":"8902f6642a3e92b7dfef5171683559267ebcae0d4c7ce10ff2e3a493ad439e5d"} Oct 13 17:41:35 crc kubenswrapper[4720]: I1013 17:41:35.249587 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eead66d8-ff12-453d-b005-460cc04aea06","Type":"ContainerStarted","Data":"5250a6d32266fb1b83716c197c24c281e38a2e28420f3efcb659d4626db0d75d"} Oct 13 17:41:35 crc kubenswrapper[4720]: I1013 17:41:35.262294 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-5zp5c" podStartSLOduration=2.249319127 podStartE2EDuration="10.262272373s" podCreationTimestamp="2025-10-13 17:41:25 +0000 UTC" firstStartedPulling="2025-10-13 17:41:26.390955004 +0000 UTC m=+1031.848205136" lastFinishedPulling="2025-10-13 17:41:34.40390824 +0000 UTC m=+1039.861158382" observedRunningTime="2025-10-13 17:41:35.257038158 +0000 UTC m=+1040.714288310" watchObservedRunningTime="2025-10-13 17:41:35.262272373 +0000 UTC m=+1040.719522505" Oct 13 17:41:36 crc kubenswrapper[4720]: I1013 17:41:36.263893 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eead66d8-ff12-453d-b005-460cc04aea06","Type":"ContainerStarted","Data":"52f0a032ebf1c7393c10af607eda0c334ba1805b3dbef7e58836145b0b3ebdb2"} Oct 13 17:41:36 crc kubenswrapper[4720]: I1013 17:41:36.977865 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Oct 13 17:41:37 crc kubenswrapper[4720]: I1013 17:41:37.006876 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mhvtk\" (UniqueName: \"kubernetes.io/projected/bf41e1e8-36d7-4805-b824-44322d841e38-kube-api-access-mhvtk\") pod \"bf41e1e8-36d7-4805-b824-44322d841e38\" (UID: \"bf41e1e8-36d7-4805-b824-44322d841e38\") " Oct 13 17:41:37 crc kubenswrapper[4720]: I1013 17:41:37.007229 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bf41e1e8-36d7-4805-b824-44322d841e38-etc-machine-id\") pod \"bf41e1e8-36d7-4805-b824-44322d841e38\" (UID: \"bf41e1e8-36d7-4805-b824-44322d841e38\") " Oct 13 17:41:37 crc kubenswrapper[4720]: I1013 17:41:37.007281 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf41e1e8-36d7-4805-b824-44322d841e38-config-data\") pod \"bf41e1e8-36d7-4805-b824-44322d841e38\" (UID: \"bf41e1e8-36d7-4805-b824-44322d841e38\") " Oct 13 17:41:37 crc kubenswrapper[4720]: I1013 17:41:37.007360 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf41e1e8-36d7-4805-b824-44322d841e38-scripts\") pod \"bf41e1e8-36d7-4805-b824-44322d841e38\" (UID: \"bf41e1e8-36d7-4805-b824-44322d841e38\") " Oct 13 17:41:37 crc kubenswrapper[4720]: I1013 17:41:37.007388 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf41e1e8-36d7-4805-b824-44322d841e38-combined-ca-bundle\") pod \"bf41e1e8-36d7-4805-b824-44322d841e38\" (UID: \"bf41e1e8-36d7-4805-b824-44322d841e38\") " Oct 13 17:41:37 crc kubenswrapper[4720]: I1013 17:41:37.007441 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf41e1e8-36d7-4805-b824-44322d841e38-logs\") pod \"bf41e1e8-36d7-4805-b824-44322d841e38\" (UID: \"bf41e1e8-36d7-4805-b824-44322d841e38\") " Oct 13 17:41:37 crc kubenswrapper[4720]: I1013 17:41:37.007503 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bf41e1e8-36d7-4805-b824-44322d841e38-config-data-custom\") pod \"bf41e1e8-36d7-4805-b824-44322d841e38\" (UID: \"bf41e1e8-36d7-4805-b824-44322d841e38\") " Oct 13 17:41:37 crc kubenswrapper[4720]: I1013 17:41:37.014584 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf41e1e8-36d7-4805-b824-44322d841e38-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "bf41e1e8-36d7-4805-b824-44322d841e38" (UID: "bf41e1e8-36d7-4805-b824-44322d841e38"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 13 17:41:37 crc kubenswrapper[4720]: I1013 17:41:37.015608 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf41e1e8-36d7-4805-b824-44322d841e38-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "bf41e1e8-36d7-4805-b824-44322d841e38" (UID: "bf41e1e8-36d7-4805-b824-44322d841e38"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:41:37 crc kubenswrapper[4720]: I1013 17:41:37.016443 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf41e1e8-36d7-4805-b824-44322d841e38-scripts" (OuterVolumeSpecName: "scripts") pod "bf41e1e8-36d7-4805-b824-44322d841e38" (UID: "bf41e1e8-36d7-4805-b824-44322d841e38"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:41:37 crc kubenswrapper[4720]: I1013 17:41:37.016942 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf41e1e8-36d7-4805-b824-44322d841e38-logs" (OuterVolumeSpecName: "logs") pod "bf41e1e8-36d7-4805-b824-44322d841e38" (UID: "bf41e1e8-36d7-4805-b824-44322d841e38"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 17:41:37 crc kubenswrapper[4720]: I1013 17:41:37.032539 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf41e1e8-36d7-4805-b824-44322d841e38-kube-api-access-mhvtk" (OuterVolumeSpecName: "kube-api-access-mhvtk") pod "bf41e1e8-36d7-4805-b824-44322d841e38" (UID: "bf41e1e8-36d7-4805-b824-44322d841e38"). InnerVolumeSpecName "kube-api-access-mhvtk". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:41:37 crc kubenswrapper[4720]: I1013 17:41:37.109251 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mhvtk\" (UniqueName: \"kubernetes.io/projected/bf41e1e8-36d7-4805-b824-44322d841e38-kube-api-access-mhvtk\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:37 crc kubenswrapper[4720]: I1013 17:41:37.109286 4720 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bf41e1e8-36d7-4805-b824-44322d841e38-etc-machine-id\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:37 crc kubenswrapper[4720]: I1013 17:41:37.109294 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf41e1e8-36d7-4805-b824-44322d841e38-scripts\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:37 crc kubenswrapper[4720]: I1013 17:41:37.109302 4720 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf41e1e8-36d7-4805-b824-44322d841e38-logs\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:37 crc kubenswrapper[4720]: I1013 17:41:37.109310 4720 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bf41e1e8-36d7-4805-b824-44322d841e38-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:37 crc kubenswrapper[4720]: I1013 17:41:37.125946 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf41e1e8-36d7-4805-b824-44322d841e38-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bf41e1e8-36d7-4805-b824-44322d841e38" (UID: "bf41e1e8-36d7-4805-b824-44322d841e38"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:41:37 crc kubenswrapper[4720]: I1013 17:41:37.134679 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf41e1e8-36d7-4805-b824-44322d841e38-config-data" (OuterVolumeSpecName: "config-data") pod "bf41e1e8-36d7-4805-b824-44322d841e38" (UID: "bf41e1e8-36d7-4805-b824-44322d841e38"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:41:37 crc kubenswrapper[4720]: I1013 17:41:37.213427 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf41e1e8-36d7-4805-b824-44322d841e38-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:37 crc kubenswrapper[4720]: I1013 17:41:37.213468 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf41e1e8-36d7-4805-b824-44322d841e38-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:37 crc kubenswrapper[4720]: I1013 17:41:37.281582 4720 generic.go:334] "Generic (PLEG): container finished" podID="bf41e1e8-36d7-4805-b824-44322d841e38" containerID="7e54e5cf97d638eb9c08897a460c40d94602f2a74b52e91535e4ead95ab08e31" exitCode=137 Oct 13 17:41:37 crc kubenswrapper[4720]: I1013 17:41:37.281637 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"bf41e1e8-36d7-4805-b824-44322d841e38","Type":"ContainerDied","Data":"7e54e5cf97d638eb9c08897a460c40d94602f2a74b52e91535e4ead95ab08e31"} Oct 13 17:41:37 crc kubenswrapper[4720]: I1013 17:41:37.281662 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"bf41e1e8-36d7-4805-b824-44322d841e38","Type":"ContainerDied","Data":"23359f6ed524e81b386487f5cc7901fd277f8dc8d500e43f75e91f6555dfc6fe"} Oct 13 17:41:37 crc kubenswrapper[4720]: I1013 17:41:37.281678 4720 scope.go:117] "RemoveContainer" containerID="7e54e5cf97d638eb9c08897a460c40d94602f2a74b52e91535e4ead95ab08e31" Oct 13 17:41:37 crc kubenswrapper[4720]: I1013 17:41:37.281757 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Oct 13 17:41:37 crc kubenswrapper[4720]: I1013 17:41:37.291579 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eead66d8-ff12-453d-b005-460cc04aea06","Type":"ContainerStarted","Data":"35d58da87b7edb63dce499c24322aaf20a9d40849364f5e9360c61a5f0e7444b"} Oct 13 17:41:37 crc kubenswrapper[4720]: I1013 17:41:37.292052 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="eead66d8-ff12-453d-b005-460cc04aea06" containerName="ceilometer-central-agent" containerID="cri-o://661eb9bb4d18f43b3c9f6a0be91b6061be05a71447416de90d30500701e00c14" gracePeriod=30 Oct 13 17:41:37 crc kubenswrapper[4720]: I1013 17:41:37.292070 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 13 17:41:37 crc kubenswrapper[4720]: I1013 17:41:37.292105 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="eead66d8-ff12-453d-b005-460cc04aea06" containerName="proxy-httpd" containerID="cri-o://35d58da87b7edb63dce499c24322aaf20a9d40849364f5e9360c61a5f0e7444b" gracePeriod=30 Oct 13 17:41:37 crc kubenswrapper[4720]: I1013 17:41:37.292207 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="eead66d8-ff12-453d-b005-460cc04aea06" containerName="ceilometer-notification-agent" containerID="cri-o://5250a6d32266fb1b83716c197c24c281e38a2e28420f3efcb659d4626db0d75d" gracePeriod=30 Oct 13 17:41:37 crc kubenswrapper[4720]: I1013 17:41:37.294108 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="eead66d8-ff12-453d-b005-460cc04aea06" containerName="sg-core" 
containerID="cri-o://52f0a032ebf1c7393c10af607eda0c334ba1805b3dbef7e58836145b0b3ebdb2" gracePeriod=30 Oct 13 17:41:37 crc kubenswrapper[4720]: I1013 17:41:37.310879 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Oct 13 17:41:37 crc kubenswrapper[4720]: I1013 17:41:37.322856 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Oct 13 17:41:37 crc kubenswrapper[4720]: I1013 17:41:37.331204 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Oct 13 17:41:37 crc kubenswrapper[4720]: E1013 17:41:37.331900 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf41e1e8-36d7-4805-b824-44322d841e38" containerName="cinder-api" Oct 13 17:41:37 crc kubenswrapper[4720]: I1013 17:41:37.332105 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf41e1e8-36d7-4805-b824-44322d841e38" containerName="cinder-api" Oct 13 17:41:37 crc kubenswrapper[4720]: E1013 17:41:37.332258 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf41e1e8-36d7-4805-b824-44322d841e38" containerName="cinder-api-log" Oct 13 17:41:37 crc kubenswrapper[4720]: I1013 17:41:37.332374 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf41e1e8-36d7-4805-b824-44322d841e38" containerName="cinder-api-log" Oct 13 17:41:37 crc kubenswrapper[4720]: I1013 17:41:37.332811 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf41e1e8-36d7-4805-b824-44322d841e38" containerName="cinder-api" Oct 13 17:41:37 crc kubenswrapper[4720]: I1013 17:41:37.332920 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf41e1e8-36d7-4805-b824-44322d841e38" containerName="cinder-api-log" Oct 13 17:41:37 crc kubenswrapper[4720]: I1013 17:41:37.334719 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Oct 13 17:41:37 crc kubenswrapper[4720]: I1013 17:41:37.354654 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Oct 13 17:41:37 crc kubenswrapper[4720]: I1013 17:41:37.355256 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Oct 13 17:41:37 crc kubenswrapper[4720]: I1013 17:41:37.355296 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Oct 13 17:41:37 crc kubenswrapper[4720]: I1013 17:41:37.367304 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Oct 13 17:41:37 crc kubenswrapper[4720]: I1013 17:41:37.367610 4720 scope.go:117] "RemoveContainer" containerID="d17932eb3f1249cfefbdc41667ce1235565bba8a0c287910b108807f8759941b" Oct 13 17:41:37 crc kubenswrapper[4720]: I1013 17:41:37.375608 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.9643655230000001 podStartE2EDuration="10.375585102s" podCreationTimestamp="2025-10-13 17:41:27 +0000 UTC" firstStartedPulling="2025-10-13 17:41:28.013860113 +0000 UTC m=+1033.471110245" lastFinishedPulling="2025-10-13 17:41:36.425079652 +0000 UTC m=+1041.882329824" observedRunningTime="2025-10-13 17:41:37.335821976 +0000 UTC m=+1042.793072108" watchObservedRunningTime="2025-10-13 17:41:37.375585102 +0000 UTC m=+1042.832835234" Oct 13 17:41:37 crc kubenswrapper[4720]: I1013 17:41:37.411564 4720 scope.go:117] "RemoveContainer" containerID="7e54e5cf97d638eb9c08897a460c40d94602f2a74b52e91535e4ead95ab08e31" Oct 13 17:41:37 crc kubenswrapper[4720]: E1013 17:41:37.412051 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e54e5cf97d638eb9c08897a460c40d94602f2a74b52e91535e4ead95ab08e31\": container with ID starting with 7e54e5cf97d638eb9c08897a460c40d94602f2a74b52e91535e4ead95ab08e31 not found: ID does not exist" containerID="7e54e5cf97d638eb9c08897a460c40d94602f2a74b52e91535e4ead95ab08e31" Oct 13 17:41:37 crc kubenswrapper[4720]: I1013 17:41:37.412139 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e54e5cf97d638eb9c08897a460c40d94602f2a74b52e91535e4ead95ab08e31"} err="failed to get container status \"7e54e5cf97d638eb9c08897a460c40d94602f2a74b52e91535e4ead95ab08e31\": rpc error: code = NotFound desc = could not find container \"7e54e5cf97d638eb9c08897a460c40d94602f2a74b52e91535e4ead95ab08e31\": container with ID starting with 7e54e5cf97d638eb9c08897a460c40d94602f2a74b52e91535e4ead95ab08e31 not found: ID does not exist" Oct 13 17:41:37 crc kubenswrapper[4720]: I1013 17:41:37.412242 4720 scope.go:117] "RemoveContainer" containerID="d17932eb3f1249cfefbdc41667ce1235565bba8a0c287910b108807f8759941b" Oct 13 17:41:37 crc kubenswrapper[4720]: E1013 17:41:37.412518 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d17932eb3f1249cfefbdc41667ce1235565bba8a0c287910b108807f8759941b\": container with ID starting with d17932eb3f1249cfefbdc41667ce1235565bba8a0c287910b108807f8759941b not found: ID does not exist" containerID="d17932eb3f1249cfefbdc41667ce1235565bba8a0c287910b108807f8759941b" Oct 13 17:41:37 crc kubenswrapper[4720]: I1013 17:41:37.412593 4720 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"d17932eb3f1249cfefbdc41667ce1235565bba8a0c287910b108807f8759941b"} err="failed to get container status \"d17932eb3f1249cfefbdc41667ce1235565bba8a0c287910b108807f8759941b\": rpc error: code = NotFound desc = could not find container \"d17932eb3f1249cfefbdc41667ce1235565bba8a0c287910b108807f8759941b\": container with ID starting with d17932eb3f1249cfefbdc41667ce1235565bba8a0c287910b108807f8759941b not found: ID does not exist" Oct 13 17:41:37 crc kubenswrapper[4720]: I1013 17:41:37.424260 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ed0bf93-abdc-4a94-bfcb-6293c9e01853-public-tls-certs\") pod \"cinder-api-0\" (UID: \"9ed0bf93-abdc-4a94-bfcb-6293c9e01853\") " pod="openstack/cinder-api-0" Oct 13 17:41:37 crc kubenswrapper[4720]: I1013 17:41:37.424408 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9ed0bf93-abdc-4a94-bfcb-6293c9e01853-etc-machine-id\") pod \"cinder-api-0\" (UID: \"9ed0bf93-abdc-4a94-bfcb-6293c9e01853\") " pod="openstack/cinder-api-0" Oct 13 17:41:37 crc kubenswrapper[4720]: I1013 17:41:37.424487 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ed0bf93-abdc-4a94-bfcb-6293c9e01853-scripts\") pod \"cinder-api-0\" (UID: \"9ed0bf93-abdc-4a94-bfcb-6293c9e01853\") " pod="openstack/cinder-api-0" Oct 13 17:41:37 crc kubenswrapper[4720]: I1013 17:41:37.424588 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ed0bf93-abdc-4a94-bfcb-6293c9e01853-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"9ed0bf93-abdc-4a94-bfcb-6293c9e01853\") " pod="openstack/cinder-api-0" Oct 13 17:41:37 crc kubenswrapper[4720]: I1013 17:41:37.424656 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9ed0bf93-abdc-4a94-bfcb-6293c9e01853-config-data-custom\") pod \"cinder-api-0\" (UID: \"9ed0bf93-abdc-4a94-bfcb-6293c9e01853\") " pod="openstack/cinder-api-0" Oct 13 17:41:37 crc kubenswrapper[4720]: I1013 17:41:37.424781 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ed0bf93-abdc-4a94-bfcb-6293c9e01853-config-data\") pod \"cinder-api-0\" (UID: \"9ed0bf93-abdc-4a94-bfcb-6293c9e01853\") " pod="openstack/cinder-api-0" Oct 13 17:41:37 crc kubenswrapper[4720]: I1013 17:41:37.424821 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ed0bf93-abdc-4a94-bfcb-6293c9e01853-logs\") pod \"cinder-api-0\" (UID: \"9ed0bf93-abdc-4a94-bfcb-6293c9e01853\") " pod="openstack/cinder-api-0" Oct 13 17:41:37 crc kubenswrapper[4720]: I1013 17:41:37.424870 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ed0bf93-abdc-4a94-bfcb-6293c9e01853-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"9ed0bf93-abdc-4a94-bfcb-6293c9e01853\") " pod="openstack/cinder-api-0" Oct 13 17:41:37 crc kubenswrapper[4720]: I1013 17:41:37.424916 4720 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skbg4\" (UniqueName: \"kubernetes.io/projected/9ed0bf93-abdc-4a94-bfcb-6293c9e01853-kube-api-access-skbg4\") pod \"cinder-api-0\" (UID: \"9ed0bf93-abdc-4a94-bfcb-6293c9e01853\") " pod="openstack/cinder-api-0" Oct 13 17:41:37 crc kubenswrapper[4720]: I1013 17:41:37.526972 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ed0bf93-abdc-4a94-bfcb-6293c9e01853-config-data\") pod \"cinder-api-0\" (UID: \"9ed0bf93-abdc-4a94-bfcb-6293c9e01853\") " pod="openstack/cinder-api-0" Oct 13 17:41:37 crc kubenswrapper[4720]: I1013 17:41:37.527026 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ed0bf93-abdc-4a94-bfcb-6293c9e01853-logs\") pod \"cinder-api-0\" (UID: \"9ed0bf93-abdc-4a94-bfcb-6293c9e01853\") " pod="openstack/cinder-api-0" Oct 13 17:41:37 crc kubenswrapper[4720]: I1013 17:41:37.527063 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ed0bf93-abdc-4a94-bfcb-6293c9e01853-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"9ed0bf93-abdc-4a94-bfcb-6293c9e01853\") " pod="openstack/cinder-api-0" Oct 13 17:41:37 crc kubenswrapper[4720]: I1013 17:41:37.527090 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skbg4\" (UniqueName: \"kubernetes.io/projected/9ed0bf93-abdc-4a94-bfcb-6293c9e01853-kube-api-access-skbg4\") pod \"cinder-api-0\" (UID: \"9ed0bf93-abdc-4a94-bfcb-6293c9e01853\") " pod="openstack/cinder-api-0" Oct 13 17:41:37 crc kubenswrapper[4720]: I1013 17:41:37.527135 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ed0bf93-abdc-4a94-bfcb-6293c9e01853-public-tls-certs\") pod \"cinder-api-0\" (UID: \"9ed0bf93-abdc-4a94-bfcb-6293c9e01853\") " pod="openstack/cinder-api-0" Oct 13 17:41:37 crc kubenswrapper[4720]: I1013 17:41:37.527160 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9ed0bf93-abdc-4a94-bfcb-6293c9e01853-etc-machine-id\") pod \"cinder-api-0\" (UID: \"9ed0bf93-abdc-4a94-bfcb-6293c9e01853\") " pod="openstack/cinder-api-0" Oct 13 17:41:37 crc kubenswrapper[4720]: I1013 17:41:37.527176 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ed0bf93-abdc-4a94-bfcb-6293c9e01853-scripts\") pod \"cinder-api-0\" (UID: \"9ed0bf93-abdc-4a94-bfcb-6293c9e01853\") " pod="openstack/cinder-api-0" Oct 13 17:41:37 crc kubenswrapper[4720]: I1013 17:41:37.527230 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ed0bf93-abdc-4a94-bfcb-6293c9e01853-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"9ed0bf93-abdc-4a94-bfcb-6293c9e01853\") " pod="openstack/cinder-api-0" Oct 13 17:41:37 crc kubenswrapper[4720]: I1013 17:41:37.527248 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9ed0bf93-abdc-4a94-bfcb-6293c9e01853-config-data-custom\") pod \"cinder-api-0\" (UID: \"9ed0bf93-abdc-4a94-bfcb-6293c9e01853\") " pod="openstack/cinder-api-0" Oct 13 17:41:37 crc kubenswrapper[4720]: I1013 
17:41:37.528072 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9ed0bf93-abdc-4a94-bfcb-6293c9e01853-etc-machine-id\") pod \"cinder-api-0\" (UID: \"9ed0bf93-abdc-4a94-bfcb-6293c9e01853\") " pod="openstack/cinder-api-0" Oct 13 17:41:37 crc kubenswrapper[4720]: I1013 17:41:37.528686 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ed0bf93-abdc-4a94-bfcb-6293c9e01853-logs\") pod \"cinder-api-0\" (UID: \"9ed0bf93-abdc-4a94-bfcb-6293c9e01853\") " pod="openstack/cinder-api-0" Oct 13 17:41:37 crc kubenswrapper[4720]: I1013 17:41:37.532002 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9ed0bf93-abdc-4a94-bfcb-6293c9e01853-config-data-custom\") pod \"cinder-api-0\" (UID: \"9ed0bf93-abdc-4a94-bfcb-6293c9e01853\") " pod="openstack/cinder-api-0" Oct 13 17:41:37 crc kubenswrapper[4720]: I1013 17:41:37.532553 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ed0bf93-abdc-4a94-bfcb-6293c9e01853-public-tls-certs\") pod \"cinder-api-0\" (UID: \"9ed0bf93-abdc-4a94-bfcb-6293c9e01853\") " pod="openstack/cinder-api-0" Oct 13 17:41:37 crc kubenswrapper[4720]: I1013 17:41:37.532825 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ed0bf93-abdc-4a94-bfcb-6293c9e01853-scripts\") pod \"cinder-api-0\" (UID: \"9ed0bf93-abdc-4a94-bfcb-6293c9e01853\") " pod="openstack/cinder-api-0" Oct 13 17:41:37 crc kubenswrapper[4720]: I1013 17:41:37.532877 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ed0bf93-abdc-4a94-bfcb-6293c9e01853-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"9ed0bf93-abdc-4a94-bfcb-6293c9e01853\") " pod="openstack/cinder-api-0" Oct 13 17:41:37 crc kubenswrapper[4720]: I1013 17:41:37.533664 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ed0bf93-abdc-4a94-bfcb-6293c9e01853-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"9ed0bf93-abdc-4a94-bfcb-6293c9e01853\") " pod="openstack/cinder-api-0" Oct 13 17:41:37 crc kubenswrapper[4720]: I1013 17:41:37.536330 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ed0bf93-abdc-4a94-bfcb-6293c9e01853-config-data\") pod \"cinder-api-0\" (UID: \"9ed0bf93-abdc-4a94-bfcb-6293c9e01853\") " pod="openstack/cinder-api-0" Oct 13 17:41:37 crc kubenswrapper[4720]: I1013 17:41:37.549836 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skbg4\" (UniqueName: \"kubernetes.io/projected/9ed0bf93-abdc-4a94-bfcb-6293c9e01853-kube-api-access-skbg4\") pod \"cinder-api-0\" (UID: \"9ed0bf93-abdc-4a94-bfcb-6293c9e01853\") " pod="openstack/cinder-api-0"
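The block above is one complete volume-setup pass for cinder-api-0: the reconciler verifies controller attachment for all nine declared volumes at 17:41:37.424, starts MountVolume for each at .527, and records the last "MountVolume.SetUp succeeded" (the projected service-account token) at .549836, roughly 23 ms later. Below is a minimal sketch of pairing those records to measure per-volume mount latency; the script, its regexes, and field positions are assumptions based only on the line format visible in this excerpt, not kubelet tooling.

```python
import re
import sys
from datetime import datetime

# Pairs "operationExecutor.MountVolume started" records with their matching
# "MountVolume.SetUp succeeded" records in a kubelet journal shaped like this
# excerpt, and reports per-volume mount latency. Volume names appear inside
# literal \"...\" escapes, so the patterns match a backslash-quote delimiter.
STARTED = re.compile(
    r'[IWE]\d{4} (?P<t>\d{2}:\d{2}:\d{2}\.\d{6}) \d+ \S+\] '
    r'"operationExecutor\.MountVolume started for volume \\"(?P<vol>[^\\]+)\\"')
SUCCEEDED = re.compile(
    r'[IWE]\d{4} (?P<t>\d{2}:\d{2}:\d{2}\.\d{6}) \d+ \S+\] '
    r'"MountVolume\.SetUp succeeded for volume \\"(?P<vol>[^\\]+)\\"')

def ts(t: str) -> datetime:
    # Time of day only; every delta of interest here is sub-second.
    return datetime.strptime(t, "%H:%M:%S.%f")

def mount_latencies(text: str):
    # Walk both kinds of record in stream order, pairing each "succeeded" with
    # the most recent unconsumed "started" of the same volume name; this copes
    # with names like config-data recurring for several pods in one journal.
    events = sorted(
        [(m.start(), "start", m["vol"], ts(m["t"])) for m in STARTED.finditer(text)]
        + [(m.start(), "done", m["vol"], ts(m["t"])) for m in SUCCEEDED.finditer(text)]
    )
    pending = {}
    for _, kind, vol, t in events:
        if kind == "start":
            pending[vol] = t
        elif vol in pending:
            yield vol, (t - pending.pop(vol)).total_seconds() * 1000

if __name__ == "__main__":
    for vol, ms in mount_latencies(sys.stdin.read()):
        print(f"{vol}: {ms:.1f} ms")  # e.g. kube-api-access-skbg4: 22.7 ms
```

Fed this excerpt (or, say, `journalctl -u kubelet` output) on stdin, it reports the projected token mount at ~22.7 ms and the secret mounts in the 5-10 ms range.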
Need to start a new one" pod="openstack/cinder-api-0" Oct 13 17:41:38 crc kubenswrapper[4720]: I1013 17:41:38.240685 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-544f6df47b-z9rm6" Oct 13 17:41:38 crc kubenswrapper[4720]: I1013 17:41:38.288105 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-544f6df47b-z9rm6" Oct 13 17:41:38 crc kubenswrapper[4720]: I1013 17:41:38.320533 4720 generic.go:334] "Generic (PLEG): container finished" podID="eead66d8-ff12-453d-b005-460cc04aea06" containerID="35d58da87b7edb63dce499c24322aaf20a9d40849364f5e9360c61a5f0e7444b" exitCode=0 Oct 13 17:41:38 crc kubenswrapper[4720]: I1013 17:41:38.320563 4720 generic.go:334] "Generic (PLEG): container finished" podID="eead66d8-ff12-453d-b005-460cc04aea06" containerID="52f0a032ebf1c7393c10af607eda0c334ba1805b3dbef7e58836145b0b3ebdb2" exitCode=2 Oct 13 17:41:38 crc kubenswrapper[4720]: I1013 17:41:38.320569 4720 generic.go:334] "Generic (PLEG): container finished" podID="eead66d8-ff12-453d-b005-460cc04aea06" containerID="5250a6d32266fb1b83716c197c24c281e38a2e28420f3efcb659d4626db0d75d" exitCode=0 Oct 13 17:41:38 crc kubenswrapper[4720]: I1013 17:41:38.320576 4720 generic.go:334] "Generic (PLEG): container finished" podID="eead66d8-ff12-453d-b005-460cc04aea06" containerID="661eb9bb4d18f43b3c9f6a0be91b6061be05a71447416de90d30500701e00c14" exitCode=0 Oct 13 17:41:38 crc kubenswrapper[4720]: I1013 17:41:38.321473 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eead66d8-ff12-453d-b005-460cc04aea06","Type":"ContainerDied","Data":"35d58da87b7edb63dce499c24322aaf20a9d40849364f5e9360c61a5f0e7444b"} Oct 13 17:41:38 crc kubenswrapper[4720]: I1013 17:41:38.321501 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eead66d8-ff12-453d-b005-460cc04aea06","Type":"ContainerDied","Data":"52f0a032ebf1c7393c10af607eda0c334ba1805b3dbef7e58836145b0b3ebdb2"} Oct 13 17:41:38 crc kubenswrapper[4720]: I1013 17:41:38.321511 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eead66d8-ff12-453d-b005-460cc04aea06","Type":"ContainerDied","Data":"5250a6d32266fb1b83716c197c24c281e38a2e28420f3efcb659d4626db0d75d"} Oct 13 17:41:38 crc kubenswrapper[4720]: I1013 17:41:38.321519 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eead66d8-ff12-453d-b005-460cc04aea06","Type":"ContainerDied","Data":"661eb9bb4d18f43b3c9f6a0be91b6061be05a71447416de90d30500701e00c14"} Oct 13 17:41:38 crc kubenswrapper[4720]: I1013 17:41:38.354307 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Oct 13 17:41:38 crc kubenswrapper[4720]: I1013 17:41:38.428045 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 13 17:41:38 crc kubenswrapper[4720]: I1013 17:41:38.552239 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eead66d8-ff12-453d-b005-460cc04aea06-config-data\") pod \"eead66d8-ff12-453d-b005-460cc04aea06\" (UID: \"eead66d8-ff12-453d-b005-460cc04aea06\") " Oct 13 17:41:38 crc kubenswrapper[4720]: I1013 17:41:38.552285 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eead66d8-ff12-453d-b005-460cc04aea06-scripts\") pod \"eead66d8-ff12-453d-b005-460cc04aea06\" (UID: \"eead66d8-ff12-453d-b005-460cc04aea06\") " Oct 13 17:41:38 crc kubenswrapper[4720]: I1013 17:41:38.552306 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eead66d8-ff12-453d-b005-460cc04aea06-run-httpd\") pod \"eead66d8-ff12-453d-b005-460cc04aea06\" (UID: \"eead66d8-ff12-453d-b005-460cc04aea06\") " Oct 13 17:41:38 crc kubenswrapper[4720]: I1013 17:41:38.552546 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eead66d8-ff12-453d-b005-460cc04aea06-log-httpd\") pod \"eead66d8-ff12-453d-b005-460cc04aea06\" (UID: \"eead66d8-ff12-453d-b005-460cc04aea06\") " Oct 13 17:41:38 crc kubenswrapper[4720]: I1013 17:41:38.552630 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/eead66d8-ff12-453d-b005-460cc04aea06-sg-core-conf-yaml\") pod \"eead66d8-ff12-453d-b005-460cc04aea06\" (UID: \"eead66d8-ff12-453d-b005-460cc04aea06\") " Oct 13 17:41:38 crc kubenswrapper[4720]: I1013 17:41:38.552764 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4m9p7\" (UniqueName: \"kubernetes.io/projected/eead66d8-ff12-453d-b005-460cc04aea06-kube-api-access-4m9p7\") pod \"eead66d8-ff12-453d-b005-460cc04aea06\" (UID: \"eead66d8-ff12-453d-b005-460cc04aea06\") " Oct 13 17:41:38 crc kubenswrapper[4720]: I1013 17:41:38.552859 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eead66d8-ff12-453d-b005-460cc04aea06-combined-ca-bundle\") pod \"eead66d8-ff12-453d-b005-460cc04aea06\" (UID: \"eead66d8-ff12-453d-b005-460cc04aea06\") " Oct 13 17:41:38 crc kubenswrapper[4720]: I1013 17:41:38.553535 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eead66d8-ff12-453d-b005-460cc04aea06-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "eead66d8-ff12-453d-b005-460cc04aea06" (UID: "eead66d8-ff12-453d-b005-460cc04aea06"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 17:41:38 crc kubenswrapper[4720]: I1013 17:41:38.554926 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eead66d8-ff12-453d-b005-460cc04aea06-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "eead66d8-ff12-453d-b005-460cc04aea06" (UID: "eead66d8-ff12-453d-b005-460cc04aea06"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 17:41:38 crc kubenswrapper[4720]: I1013 17:41:38.557368 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eead66d8-ff12-453d-b005-460cc04aea06-scripts" (OuterVolumeSpecName: "scripts") pod "eead66d8-ff12-453d-b005-460cc04aea06" (UID: "eead66d8-ff12-453d-b005-460cc04aea06"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:41:38 crc kubenswrapper[4720]: I1013 17:41:38.568438 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eead66d8-ff12-453d-b005-460cc04aea06-kube-api-access-4m9p7" (OuterVolumeSpecName: "kube-api-access-4m9p7") pod "eead66d8-ff12-453d-b005-460cc04aea06" (UID: "eead66d8-ff12-453d-b005-460cc04aea06"). InnerVolumeSpecName "kube-api-access-4m9p7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:41:38 crc kubenswrapper[4720]: I1013 17:41:38.586115 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eead66d8-ff12-453d-b005-460cc04aea06-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "eead66d8-ff12-453d-b005-460cc04aea06" (UID: "eead66d8-ff12-453d-b005-460cc04aea06"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:41:38 crc kubenswrapper[4720]: I1013 17:41:38.654876 4720 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eead66d8-ff12-453d-b005-460cc04aea06-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:38 crc kubenswrapper[4720]: I1013 17:41:38.654903 4720 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/eead66d8-ff12-453d-b005-460cc04aea06-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:38 crc kubenswrapper[4720]: I1013 17:41:38.654912 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4m9p7\" (UniqueName: \"kubernetes.io/projected/eead66d8-ff12-453d-b005-460cc04aea06-kube-api-access-4m9p7\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:38 crc kubenswrapper[4720]: I1013 17:41:38.654922 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eead66d8-ff12-453d-b005-460cc04aea06-scripts\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:38 crc kubenswrapper[4720]: I1013 17:41:38.654932 4720 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eead66d8-ff12-453d-b005-460cc04aea06-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:38 crc kubenswrapper[4720]: I1013 17:41:38.688280 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eead66d8-ff12-453d-b005-460cc04aea06-config-data" (OuterVolumeSpecName: "config-data") pod "eead66d8-ff12-453d-b005-460cc04aea06" (UID: "eead66d8-ff12-453d-b005-460cc04aea06"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:41:38 crc kubenswrapper[4720]: I1013 17:41:38.698063 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eead66d8-ff12-453d-b005-460cc04aea06-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eead66d8-ff12-453d-b005-460cc04aea06" (UID: "eead66d8-ff12-453d-b005-460cc04aea06"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:41:38 crc kubenswrapper[4720]: I1013 17:41:38.756408 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eead66d8-ff12-453d-b005-460cc04aea06-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:38 crc kubenswrapper[4720]: I1013 17:41:38.756435 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eead66d8-ff12-453d-b005-460cc04aea06-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:39 crc kubenswrapper[4720]: I1013 17:41:39.181806 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf41e1e8-36d7-4805-b824-44322d841e38" path="/var/lib/kubelet/pods/bf41e1e8-36d7-4805-b824-44322d841e38/volumes" Oct 13 17:41:39 crc kubenswrapper[4720]: I1013 17:41:39.335990 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eead66d8-ff12-453d-b005-460cc04aea06","Type":"ContainerDied","Data":"0b0e02c5590d811b016f903b8f1df707acbd063b0eeb061cc2032f244ce36b3d"} Oct 13 17:41:39 crc kubenswrapper[4720]: I1013 17:41:39.336034 4720 scope.go:117] "RemoveContainer" containerID="35d58da87b7edb63dce499c24322aaf20a9d40849364f5e9360c61a5f0e7444b" Oct 13 17:41:39 crc kubenswrapper[4720]: I1013 17:41:39.336088 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 13 17:41:39 crc kubenswrapper[4720]: I1013 17:41:39.337935 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"9ed0bf93-abdc-4a94-bfcb-6293c9e01853","Type":"ContainerStarted","Data":"a6618a663fd3ca067f8988ebecd1520c5647796694e6e790366158ce7440ad70"} Oct 13 17:41:39 crc kubenswrapper[4720]: I1013 17:41:39.337980 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"9ed0bf93-abdc-4a94-bfcb-6293c9e01853","Type":"ContainerStarted","Data":"7cc5f876cea1702ab583a6daceac77ab5d88b554f8f1df3d21189eb4cf2a5b24"} Oct 13 17:41:39 crc kubenswrapper[4720]: I1013 17:41:39.364028 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 13 17:41:39 crc kubenswrapper[4720]: I1013 17:41:39.379476 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 13 17:41:39 crc kubenswrapper[4720]: I1013 17:41:39.394650 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 13 17:41:39 crc kubenswrapper[4720]: E1013 17:41:39.395072 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eead66d8-ff12-453d-b005-460cc04aea06" containerName="sg-core" Oct 13 17:41:39 crc kubenswrapper[4720]: I1013 17:41:39.395083 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="eead66d8-ff12-453d-b005-460cc04aea06" containerName="sg-core" Oct 13 17:41:39 crc kubenswrapper[4720]: E1013 17:41:39.395106 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eead66d8-ff12-453d-b005-460cc04aea06" containerName="proxy-httpd" Oct 13 17:41:39 crc kubenswrapper[4720]: I1013 17:41:39.395112 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="eead66d8-ff12-453d-b005-460cc04aea06" containerName="proxy-httpd" Oct 13 17:41:39 crc kubenswrapper[4720]: E1013 17:41:39.395125 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eead66d8-ff12-453d-b005-460cc04aea06" containerName="ceilometer-notification-agent" Oct 13 17:41:39 crc kubenswrapper[4720]: 
I1013 17:41:39.395131 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="eead66d8-ff12-453d-b005-460cc04aea06" containerName="ceilometer-notification-agent" Oct 13 17:41:39 crc kubenswrapper[4720]: E1013 17:41:39.395151 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eead66d8-ff12-453d-b005-460cc04aea06" containerName="ceilometer-central-agent" Oct 13 17:41:39 crc kubenswrapper[4720]: I1013 17:41:39.395157 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="eead66d8-ff12-453d-b005-460cc04aea06" containerName="ceilometer-central-agent" Oct 13 17:41:39 crc kubenswrapper[4720]: I1013 17:41:39.395333 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="eead66d8-ff12-453d-b005-460cc04aea06" containerName="proxy-httpd" Oct 13 17:41:39 crc kubenswrapper[4720]: I1013 17:41:39.395350 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="eead66d8-ff12-453d-b005-460cc04aea06" containerName="ceilometer-central-agent" Oct 13 17:41:39 crc kubenswrapper[4720]: I1013 17:41:39.395358 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="eead66d8-ff12-453d-b005-460cc04aea06" containerName="sg-core" Oct 13 17:41:39 crc kubenswrapper[4720]: I1013 17:41:39.395370 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="eead66d8-ff12-453d-b005-460cc04aea06" containerName="ceilometer-notification-agent" Oct 13 17:41:39 crc kubenswrapper[4720]: I1013 17:41:39.397707 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 13 17:41:39 crc kubenswrapper[4720]: I1013 17:41:39.399950 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 13 17:41:39 crc kubenswrapper[4720]: I1013 17:41:39.400826 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 13 17:41:39 crc kubenswrapper[4720]: I1013 17:41:39.401933 4720 scope.go:117] "RemoveContainer" containerID="52f0a032ebf1c7393c10af607eda0c334ba1805b3dbef7e58836145b0b3ebdb2" Oct 13 17:41:39 crc kubenswrapper[4720]: I1013 17:41:39.421218 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 13 17:41:39 crc kubenswrapper[4720]: I1013 17:41:39.437720 4720 scope.go:117] "RemoveContainer" containerID="5250a6d32266fb1b83716c197c24c281e38a2e28420f3efcb659d4626db0d75d" Oct 13 17:41:39 crc kubenswrapper[4720]: I1013 17:41:39.468411 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a82b2d7-243a-4818-b224-76f0300c42ba-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5a82b2d7-243a-4818-b224-76f0300c42ba\") " pod="openstack/ceilometer-0" Oct 13 17:41:39 crc kubenswrapper[4720]: I1013 17:41:39.468638 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5a82b2d7-243a-4818-b224-76f0300c42ba-run-httpd\") pod \"ceilometer-0\" (UID: \"5a82b2d7-243a-4818-b224-76f0300c42ba\") " pod="openstack/ceilometer-0" Oct 13 17:41:39 crc kubenswrapper[4720]: I1013 17:41:39.468800 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shsr5\" (UniqueName: \"kubernetes.io/projected/5a82b2d7-243a-4818-b224-76f0300c42ba-kube-api-access-shsr5\") pod \"ceilometer-0\" (UID: \"5a82b2d7-243a-4818-b224-76f0300c42ba\") " pod="openstack/ceilometer-0" Oct 13 
17:41:39 crc kubenswrapper[4720]: I1013 17:41:39.468878 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5a82b2d7-243a-4818-b224-76f0300c42ba-log-httpd\") pod \"ceilometer-0\" (UID: \"5a82b2d7-243a-4818-b224-76f0300c42ba\") " pod="openstack/ceilometer-0" Oct 13 17:41:39 crc kubenswrapper[4720]: I1013 17:41:39.469013 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a82b2d7-243a-4818-b224-76f0300c42ba-config-data\") pod \"ceilometer-0\" (UID: \"5a82b2d7-243a-4818-b224-76f0300c42ba\") " pod="openstack/ceilometer-0" Oct 13 17:41:39 crc kubenswrapper[4720]: I1013 17:41:39.469114 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5a82b2d7-243a-4818-b224-76f0300c42ba-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5a82b2d7-243a-4818-b224-76f0300c42ba\") " pod="openstack/ceilometer-0" Oct 13 17:41:39 crc kubenswrapper[4720]: I1013 17:41:39.469153 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a82b2d7-243a-4818-b224-76f0300c42ba-scripts\") pod \"ceilometer-0\" (UID: \"5a82b2d7-243a-4818-b224-76f0300c42ba\") " pod="openstack/ceilometer-0" Oct 13 17:41:39 crc kubenswrapper[4720]: I1013 17:41:39.494175 4720 scope.go:117] "RemoveContainer" containerID="661eb9bb4d18f43b3c9f6a0be91b6061be05a71447416de90d30500701e00c14" Oct 13 17:41:39 crc kubenswrapper[4720]: I1013 17:41:39.570658 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5a82b2d7-243a-4818-b224-76f0300c42ba-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5a82b2d7-243a-4818-b224-76f0300c42ba\") " pod="openstack/ceilometer-0" Oct 13 17:41:39 crc kubenswrapper[4720]: I1013 17:41:39.570699 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a82b2d7-243a-4818-b224-76f0300c42ba-scripts\") pod \"ceilometer-0\" (UID: \"5a82b2d7-243a-4818-b224-76f0300c42ba\") " pod="openstack/ceilometer-0" Oct 13 17:41:39 crc kubenswrapper[4720]: I1013 17:41:39.570754 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a82b2d7-243a-4818-b224-76f0300c42ba-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5a82b2d7-243a-4818-b224-76f0300c42ba\") " pod="openstack/ceilometer-0" Oct 13 17:41:39 crc kubenswrapper[4720]: I1013 17:41:39.570806 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5a82b2d7-243a-4818-b224-76f0300c42ba-run-httpd\") pod \"ceilometer-0\" (UID: \"5a82b2d7-243a-4818-b224-76f0300c42ba\") " pod="openstack/ceilometer-0" Oct 13 17:41:39 crc kubenswrapper[4720]: I1013 17:41:39.570840 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shsr5\" (UniqueName: \"kubernetes.io/projected/5a82b2d7-243a-4818-b224-76f0300c42ba-kube-api-access-shsr5\") pod \"ceilometer-0\" (UID: \"5a82b2d7-243a-4818-b224-76f0300c42ba\") " pod="openstack/ceilometer-0" Oct 13 17:41:39 crc kubenswrapper[4720]: I1013 17:41:39.570865 4720 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5a82b2d7-243a-4818-b224-76f0300c42ba-log-httpd\") pod \"ceilometer-0\" (UID: \"5a82b2d7-243a-4818-b224-76f0300c42ba\") " pod="openstack/ceilometer-0" Oct 13 17:41:39 crc kubenswrapper[4720]: I1013 17:41:39.570920 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a82b2d7-243a-4818-b224-76f0300c42ba-config-data\") pod \"ceilometer-0\" (UID: \"5a82b2d7-243a-4818-b224-76f0300c42ba\") " pod="openstack/ceilometer-0" Oct 13 17:41:39 crc kubenswrapper[4720]: I1013 17:41:39.575499 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5a82b2d7-243a-4818-b224-76f0300c42ba-log-httpd\") pod \"ceilometer-0\" (UID: \"5a82b2d7-243a-4818-b224-76f0300c42ba\") " pod="openstack/ceilometer-0" Oct 13 17:41:39 crc kubenswrapper[4720]: I1013 17:41:39.575756 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5a82b2d7-243a-4818-b224-76f0300c42ba-run-httpd\") pod \"ceilometer-0\" (UID: \"5a82b2d7-243a-4818-b224-76f0300c42ba\") " pod="openstack/ceilometer-0" Oct 13 17:41:39 crc kubenswrapper[4720]: I1013 17:41:39.579772 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5a82b2d7-243a-4818-b224-76f0300c42ba-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5a82b2d7-243a-4818-b224-76f0300c42ba\") " pod="openstack/ceilometer-0" Oct 13 17:41:39 crc kubenswrapper[4720]: I1013 17:41:39.580415 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a82b2d7-243a-4818-b224-76f0300c42ba-config-data\") pod \"ceilometer-0\" (UID: \"5a82b2d7-243a-4818-b224-76f0300c42ba\") " pod="openstack/ceilometer-0" Oct 13 17:41:39 crc kubenswrapper[4720]: I1013 17:41:39.580663 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a82b2d7-243a-4818-b224-76f0300c42ba-scripts\") pod \"ceilometer-0\" (UID: \"5a82b2d7-243a-4818-b224-76f0300c42ba\") " pod="openstack/ceilometer-0" Oct 13 17:41:39 crc kubenswrapper[4720]: I1013 17:41:39.585586 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a82b2d7-243a-4818-b224-76f0300c42ba-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5a82b2d7-243a-4818-b224-76f0300c42ba\") " pod="openstack/ceilometer-0" Oct 13 17:41:39 crc kubenswrapper[4720]: I1013 17:41:39.592529 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shsr5\" (UniqueName: \"kubernetes.io/projected/5a82b2d7-243a-4818-b224-76f0300c42ba-kube-api-access-shsr5\") pod \"ceilometer-0\" (UID: \"5a82b2d7-243a-4818-b224-76f0300c42ba\") " pod="openstack/ceilometer-0" Oct 13 17:41:39 crc kubenswrapper[4720]: I1013 17:41:39.720737 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 13 17:41:40 crc kubenswrapper[4720]: I1013 17:41:40.178608 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 13 17:41:40 crc kubenswrapper[4720]: W1013 17:41:40.180122 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a82b2d7_243a_4818_b224_76f0300c42ba.slice/crio-cd9189558accd16ddfdd8250f66f07c51a666a1b3d4fdedc66486c06b3141d63 WatchSource:0}: Error finding container cd9189558accd16ddfdd8250f66f07c51a666a1b3d4fdedc66486c06b3141d63: Status 404 returned error can't find the container with id cd9189558accd16ddfdd8250f66f07c51a666a1b3d4fdedc66486c06b3141d63 Oct 13 17:41:40 crc kubenswrapper[4720]: I1013 17:41:40.348747 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"9ed0bf93-abdc-4a94-bfcb-6293c9e01853","Type":"ContainerStarted","Data":"9ce2ea39e27de965dd11558d1c03899d067ec7f6dd8c9503567d4c8194529bd3"} Oct 13 17:41:40 crc kubenswrapper[4720]: I1013 17:41:40.348932 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Oct 13 17:41:40 crc kubenswrapper[4720]: I1013 17:41:40.353430 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5a82b2d7-243a-4818-b224-76f0300c42ba","Type":"ContainerStarted","Data":"cd9189558accd16ddfdd8250f66f07c51a666a1b3d4fdedc66486c06b3141d63"} Oct 13 17:41:40 crc kubenswrapper[4720]: I1013 17:41:40.368664 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.368639766 podStartE2EDuration="3.368639766s" podCreationTimestamp="2025-10-13 17:41:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:41:40.367586489 +0000 UTC m=+1045.824836661" watchObservedRunningTime="2025-10-13 17:41:40.368639766 +0000 UTC m=+1045.825889908" Oct 13 17:41:41 crc kubenswrapper[4720]: I1013 17:41:41.192970 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eead66d8-ff12-453d-b005-460cc04aea06" path="/var/lib/kubelet/pods/eead66d8-ff12-453d-b005-460cc04aea06/volumes" Oct 13 17:41:41 crc kubenswrapper[4720]: I1013 17:41:41.369255 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5a82b2d7-243a-4818-b224-76f0300c42ba","Type":"ContainerStarted","Data":"4633b935c5b9760435cd21725484ee711fe66257c2de65d45347215089861ab3"} Oct 13 17:41:43 crc kubenswrapper[4720]: I1013 17:41:43.392472 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5a82b2d7-243a-4818-b224-76f0300c42ba","Type":"ContainerStarted","Data":"fbc39403f7abc33cbdda6e1fb9203b73c77e09f779f05eeb2259f0e8129e05b8"} Oct 13 17:41:44 crc kubenswrapper[4720]: I1013 17:41:44.409157 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5a82b2d7-243a-4818-b224-76f0300c42ba","Type":"ContainerStarted","Data":"62d9e7d22f8b672deaf7a4bc47921fe4f6398a29c88fd07c2c16caa31a10accb"} Oct 13 17:41:45 crc kubenswrapper[4720]: I1013 17:41:45.212600 4720 patch_prober.go:28] interesting pod/machine-config-daemon-htwnl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Oct 13 17:41:45 crc kubenswrapper[4720]: I1013 17:41:45.212884 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 17:41:45 crc kubenswrapper[4720]: I1013 17:41:45.424714 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5a82b2d7-243a-4818-b224-76f0300c42ba","Type":"ContainerStarted","Data":"c12f79b8ebfb5ff6ab164a385f83a17d992a0f0ed66a28ba5dc44aaad41aa971"} Oct 13 17:41:45 crc kubenswrapper[4720]: I1013 17:41:45.425006 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 13 17:41:45 crc kubenswrapper[4720]: I1013 17:41:45.469533 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.547812141 podStartE2EDuration="6.469510659s" podCreationTimestamp="2025-10-13 17:41:39 +0000 UTC" firstStartedPulling="2025-10-13 17:41:40.18239455 +0000 UTC m=+1045.639644682" lastFinishedPulling="2025-10-13 17:41:45.104093058 +0000 UTC m=+1050.561343200" observedRunningTime="2025-10-13 17:41:45.453765013 +0000 UTC m=+1050.911015155" watchObservedRunningTime="2025-10-13 17:41:45.469510659 +0000 UTC m=+1050.926760791" Oct 13 17:41:48 crc kubenswrapper[4720]: I1013 17:41:48.458527 4720 generic.go:334] "Generic (PLEG): container finished" podID="6dabb89f-c7f4-46e9-a7e8-0722a6f73bed" containerID="8902f6642a3e92b7dfef5171683559267ebcae0d4c7ce10ff2e3a493ad439e5d" exitCode=0 Oct 13 17:41:48 crc kubenswrapper[4720]: I1013 17:41:48.458577 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-5zp5c" event={"ID":"6dabb89f-c7f4-46e9-a7e8-0722a6f73bed","Type":"ContainerDied","Data":"8902f6642a3e92b7dfef5171683559267ebcae0d4c7ce10ff2e3a493ad439e5d"} Oct 13 17:41:49 crc kubenswrapper[4720]: I1013 17:41:49.572574 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0"
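The startup-latency records above are internally consistent: for ceilometer-0, podStartE2EDuration (6.469510659 s) minus the image-pull window taken from the monotonic offsets (m=+1050.561343200 − m=+1045.639644682 = 4.921698518 s) is exactly the logged podStartSLOduration of 1.547812141 s, while cinder-api-0 pulled no images (zero-value pull timestamps) so its SLO and E2E durations coincide. A sketch that re-derives the figure from such records; the regexes and the 400-byte field window are assumptions based on this excerpt alone.

```python
import re
import sys

# Re-derives kubelet's podStartSLOduration from "Observed pod startup duration"
# records like the ones above: SLO time = end-to-end startup time minus the
# image-pull window, read here from the m=+<seconds> monotonic offsets carried
# by the pull timestamps. Pods that pulled nothing log the zero time, which has
# no m= offset, so their pull window comes out as 0.
E2E = re.compile(r'podStartE2EDuration="([0-9.]+)s"')
SLO = re.compile(r'podStartSLOduration=([0-9.]+)')
POD = re.compile(r'pod="([^"]+)"')
PULL = re.compile(r'(?:firstStartedPulling|lastFinishedPulling)="[^"]*?m=\+([0-9.]+)"')

def recheck(text: str):
    # Each record begins at the tracker's message; its key=value fields all sit
    # within the next few hundred bytes of the same journal entry.
    for chunk in text.split('"Observed pod startup duration"')[1:]:
        chunk = chunk[:400]
        pod = POD.search(chunk)[1]
        e2e = float(E2E.search(chunk)[1])
        slo = float(SLO.search(chunk)[1])
        pulls = [float(x) for x in PULL.findall(chunk)]
        pull = max(pulls) - min(pulls) if len(pulls) == 2 else 0.0
        print(f"{pod}: e2e={e2e:.9f}s pull={pull:.9f}s "
              f"e2e-pull={e2e - pull:.9f}s (logged slo={slo})")

if __name__ == "__main__":
    recheck(sys.stdin.read())
```

On this excerpt it confirms all three records (cinder-api-0, ceilometer-0, nova-cell0-conductor-0) to the nanosecond printed in the log.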
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-5zp5c" Oct 13 17:41:49 crc kubenswrapper[4720]: I1013 17:41:49.871453 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6dabb89f-c7f4-46e9-a7e8-0722a6f73bed-scripts\") pod \"6dabb89f-c7f4-46e9-a7e8-0722a6f73bed\" (UID: \"6dabb89f-c7f4-46e9-a7e8-0722a6f73bed\") " Oct 13 17:41:49 crc kubenswrapper[4720]: I1013 17:41:49.871619 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dabb89f-c7f4-46e9-a7e8-0722a6f73bed-combined-ca-bundle\") pod \"6dabb89f-c7f4-46e9-a7e8-0722a6f73bed\" (UID: \"6dabb89f-c7f4-46e9-a7e8-0722a6f73bed\") " Oct 13 17:41:49 crc kubenswrapper[4720]: I1013 17:41:49.871656 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6dabb89f-c7f4-46e9-a7e8-0722a6f73bed-config-data\") pod \"6dabb89f-c7f4-46e9-a7e8-0722a6f73bed\" (UID: \"6dabb89f-c7f4-46e9-a7e8-0722a6f73bed\") " Oct 13 17:41:49 crc kubenswrapper[4720]: I1013 17:41:49.877305 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9nqp\" (UniqueName: \"kubernetes.io/projected/6dabb89f-c7f4-46e9-a7e8-0722a6f73bed-kube-api-access-c9nqp\") pod \"6dabb89f-c7f4-46e9-a7e8-0722a6f73bed\" (UID: \"6dabb89f-c7f4-46e9-a7e8-0722a6f73bed\") " Oct 13 17:41:49 crc kubenswrapper[4720]: I1013 17:41:49.882411 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dabb89f-c7f4-46e9-a7e8-0722a6f73bed-scripts" (OuterVolumeSpecName: "scripts") pod "6dabb89f-c7f4-46e9-a7e8-0722a6f73bed" (UID: "6dabb89f-c7f4-46e9-a7e8-0722a6f73bed"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:41:49 crc kubenswrapper[4720]: I1013 17:41:49.891429 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6dabb89f-c7f4-46e9-a7e8-0722a6f73bed-kube-api-access-c9nqp" (OuterVolumeSpecName: "kube-api-access-c9nqp") pod "6dabb89f-c7f4-46e9-a7e8-0722a6f73bed" (UID: "6dabb89f-c7f4-46e9-a7e8-0722a6f73bed"). InnerVolumeSpecName "kube-api-access-c9nqp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:41:49 crc kubenswrapper[4720]: I1013 17:41:49.921212 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dabb89f-c7f4-46e9-a7e8-0722a6f73bed-config-data" (OuterVolumeSpecName: "config-data") pod "6dabb89f-c7f4-46e9-a7e8-0722a6f73bed" (UID: "6dabb89f-c7f4-46e9-a7e8-0722a6f73bed"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:41:49 crc kubenswrapper[4720]: I1013 17:41:49.929369 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dabb89f-c7f4-46e9-a7e8-0722a6f73bed-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6dabb89f-c7f4-46e9-a7e8-0722a6f73bed" (UID: "6dabb89f-c7f4-46e9-a7e8-0722a6f73bed"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:41:49 crc kubenswrapper[4720]: I1013 17:41:49.979998 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dabb89f-c7f4-46e9-a7e8-0722a6f73bed-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:49 crc kubenswrapper[4720]: I1013 17:41:49.980029 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6dabb89f-c7f4-46e9-a7e8-0722a6f73bed-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:49 crc kubenswrapper[4720]: I1013 17:41:49.980038 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9nqp\" (UniqueName: \"kubernetes.io/projected/6dabb89f-c7f4-46e9-a7e8-0722a6f73bed-kube-api-access-c9nqp\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:49 crc kubenswrapper[4720]: I1013 17:41:49.980048 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6dabb89f-c7f4-46e9-a7e8-0722a6f73bed-scripts\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:50 crc kubenswrapper[4720]: I1013 17:41:50.476571 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-5zp5c" event={"ID":"6dabb89f-c7f4-46e9-a7e8-0722a6f73bed","Type":"ContainerDied","Data":"4f2b64c7141cf4064448931b92e8543f336a7ded0f8b1272e8f3466c7de83733"} Oct 13 17:41:50 crc kubenswrapper[4720]: I1013 17:41:50.476611 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f2b64c7141cf4064448931b92e8543f336a7ded0f8b1272e8f3466c7de83733" Oct 13 17:41:50 crc kubenswrapper[4720]: I1013 17:41:50.476636 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-5zp5c" Oct 13 17:41:50 crc kubenswrapper[4720]: I1013 17:41:50.572529 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Oct 13 17:41:50 crc kubenswrapper[4720]: E1013 17:41:50.573093 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dabb89f-c7f4-46e9-a7e8-0722a6f73bed" containerName="nova-cell0-conductor-db-sync" Oct 13 17:41:50 crc kubenswrapper[4720]: I1013 17:41:50.573110 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dabb89f-c7f4-46e9-a7e8-0722a6f73bed" containerName="nova-cell0-conductor-db-sync" Oct 13 17:41:50 crc kubenswrapper[4720]: I1013 17:41:50.573276 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="6dabb89f-c7f4-46e9-a7e8-0722a6f73bed" containerName="nova-cell0-conductor-db-sync" Oct 13 17:41:50 crc kubenswrapper[4720]: I1013 17:41:50.573814 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Oct 13 17:41:50 crc kubenswrapper[4720]: I1013 17:41:50.576555 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Oct 13 17:41:50 crc kubenswrapper[4720]: I1013 17:41:50.577067 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-dctrc" Oct 13 17:41:50 crc kubenswrapper[4720]: I1013 17:41:50.585460 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Oct 13 17:41:50 crc kubenswrapper[4720]: I1013 17:41:50.589938 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b869ee69-b8f2-4318-a977-da27405dd698-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"b869ee69-b8f2-4318-a977-da27405dd698\") " pod="openstack/nova-cell0-conductor-0" Oct 13 17:41:50 crc kubenswrapper[4720]: I1013 17:41:50.590182 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bd2kk\" (UniqueName: \"kubernetes.io/projected/b869ee69-b8f2-4318-a977-da27405dd698-kube-api-access-bd2kk\") pod \"nova-cell0-conductor-0\" (UID: \"b869ee69-b8f2-4318-a977-da27405dd698\") " pod="openstack/nova-cell0-conductor-0" Oct 13 17:41:50 crc kubenswrapper[4720]: I1013 17:41:50.590384 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b869ee69-b8f2-4318-a977-da27405dd698-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"b869ee69-b8f2-4318-a977-da27405dd698\") " pod="openstack/nova-cell0-conductor-0" Oct 13 17:41:50 crc kubenswrapper[4720]: I1013 17:41:50.691958 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b869ee69-b8f2-4318-a977-da27405dd698-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"b869ee69-b8f2-4318-a977-da27405dd698\") " pod="openstack/nova-cell0-conductor-0" Oct 13 17:41:50 crc kubenswrapper[4720]: I1013 17:41:50.692061 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b869ee69-b8f2-4318-a977-da27405dd698-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"b869ee69-b8f2-4318-a977-da27405dd698\") " pod="openstack/nova-cell0-conductor-0" Oct 13 17:41:50 crc kubenswrapper[4720]: I1013 17:41:50.692106 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bd2kk\" (UniqueName: \"kubernetes.io/projected/b869ee69-b8f2-4318-a977-da27405dd698-kube-api-access-bd2kk\") pod \"nova-cell0-conductor-0\" (UID: \"b869ee69-b8f2-4318-a977-da27405dd698\") " pod="openstack/nova-cell0-conductor-0" Oct 13 17:41:50 crc kubenswrapper[4720]: I1013 17:41:50.697275 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b869ee69-b8f2-4318-a977-da27405dd698-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"b869ee69-b8f2-4318-a977-da27405dd698\") " pod="openstack/nova-cell0-conductor-0" Oct 13 17:41:50 crc kubenswrapper[4720]: I1013 17:41:50.698277 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b869ee69-b8f2-4318-a977-da27405dd698-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" 
(UID: \"b869ee69-b8f2-4318-a977-da27405dd698\") " pod="openstack/nova-cell0-conductor-0" Oct 13 17:41:50 crc kubenswrapper[4720]: I1013 17:41:50.711250 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bd2kk\" (UniqueName: \"kubernetes.io/projected/b869ee69-b8f2-4318-a977-da27405dd698-kube-api-access-bd2kk\") pod \"nova-cell0-conductor-0\" (UID: \"b869ee69-b8f2-4318-a977-da27405dd698\") " pod="openstack/nova-cell0-conductor-0" Oct 13 17:41:50 crc kubenswrapper[4720]: I1013 17:41:50.911094 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Oct 13 17:41:51 crc kubenswrapper[4720]: I1013 17:41:51.373245 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Oct 13 17:41:51 crc kubenswrapper[4720]: I1013 17:41:51.487152 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"b869ee69-b8f2-4318-a977-da27405dd698","Type":"ContainerStarted","Data":"645b5dc1955df2349a5033c3bb117f86bf141ef69ebd07763758a5d7bce91eda"} Oct 13 17:41:51 crc kubenswrapper[4720]: I1013 17:41:51.944315 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 13 17:41:51 crc kubenswrapper[4720]: I1013 17:41:51.945701 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5a82b2d7-243a-4818-b224-76f0300c42ba" containerName="ceilometer-central-agent" containerID="cri-o://4633b935c5b9760435cd21725484ee711fe66257c2de65d45347215089861ab3" gracePeriod=30 Oct 13 17:41:51 crc kubenswrapper[4720]: I1013 17:41:51.945756 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5a82b2d7-243a-4818-b224-76f0300c42ba" containerName="ceilometer-notification-agent" containerID="cri-o://fbc39403f7abc33cbdda6e1fb9203b73c77e09f779f05eeb2259f0e8129e05b8" gracePeriod=30 Oct 13 17:41:51 crc kubenswrapper[4720]: I1013 17:41:51.945783 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5a82b2d7-243a-4818-b224-76f0300c42ba" containerName="proxy-httpd" containerID="cri-o://c12f79b8ebfb5ff6ab164a385f83a17d992a0f0ed66a28ba5dc44aaad41aa971" gracePeriod=30 Oct 13 17:41:51 crc kubenswrapper[4720]: I1013 17:41:51.945796 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5a82b2d7-243a-4818-b224-76f0300c42ba" containerName="sg-core" containerID="cri-o://62d9e7d22f8b672deaf7a4bc47921fe4f6398a29c88fd07c2c16caa31a10accb" gracePeriod=30 Oct 13 17:41:52 crc kubenswrapper[4720]: I1013 17:41:52.503064 4720 generic.go:334] "Generic (PLEG): container finished" podID="5a82b2d7-243a-4818-b224-76f0300c42ba" containerID="c12f79b8ebfb5ff6ab164a385f83a17d992a0f0ed66a28ba5dc44aaad41aa971" exitCode=0 Oct 13 17:41:52 crc kubenswrapper[4720]: I1013 17:41:52.503113 4720 generic.go:334] "Generic (PLEG): container finished" podID="5a82b2d7-243a-4818-b224-76f0300c42ba" containerID="62d9e7d22f8b672deaf7a4bc47921fe4f6398a29c88fd07c2c16caa31a10accb" exitCode=2 Oct 13 17:41:52 crc kubenswrapper[4720]: I1013 17:41:52.503125 4720 generic.go:334] "Generic (PLEG): container finished" podID="5a82b2d7-243a-4818-b224-76f0300c42ba" containerID="4633b935c5b9760435cd21725484ee711fe66257c2de65d45347215089861ab3" exitCode=0 Oct 13 17:41:52 crc kubenswrapper[4720]: I1013 17:41:52.503167 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"5a82b2d7-243a-4818-b224-76f0300c42ba","Type":"ContainerDied","Data":"c12f79b8ebfb5ff6ab164a385f83a17d992a0f0ed66a28ba5dc44aaad41aa971"} Oct 13 17:41:52 crc kubenswrapper[4720]: I1013 17:41:52.503249 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5a82b2d7-243a-4818-b224-76f0300c42ba","Type":"ContainerDied","Data":"62d9e7d22f8b672deaf7a4bc47921fe4f6398a29c88fd07c2c16caa31a10accb"} Oct 13 17:41:52 crc kubenswrapper[4720]: I1013 17:41:52.503269 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5a82b2d7-243a-4818-b224-76f0300c42ba","Type":"ContainerDied","Data":"4633b935c5b9760435cd21725484ee711fe66257c2de65d45347215089861ab3"} Oct 13 17:41:52 crc kubenswrapper[4720]: I1013 17:41:52.507137 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"b869ee69-b8f2-4318-a977-da27405dd698","Type":"ContainerStarted","Data":"6a6919d1e9df66f8474682b733cfd67cc4ed469a73c8dc64d132784b1745a8a5"} Oct 13 17:41:52 crc kubenswrapper[4720]: I1013 17:41:52.507389 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Oct 13 17:41:52 crc kubenswrapper[4720]: I1013 17:41:52.539581 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.53956582 podStartE2EDuration="2.53956582s" podCreationTimestamp="2025-10-13 17:41:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:41:52.530589249 +0000 UTC m=+1057.987839381" watchObservedRunningTime="2025-10-13 17:41:52.53956582 +0000 UTC m=+1057.996815952" Oct 13 17:41:53 crc kubenswrapper[4720]: I1013 17:41:53.414969 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 13 17:41:53 crc kubenswrapper[4720]: I1013 17:41:53.452454 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shsr5\" (UniqueName: \"kubernetes.io/projected/5a82b2d7-243a-4818-b224-76f0300c42ba-kube-api-access-shsr5\") pod \"5a82b2d7-243a-4818-b224-76f0300c42ba\" (UID: \"5a82b2d7-243a-4818-b224-76f0300c42ba\") " Oct 13 17:41:53 crc kubenswrapper[4720]: I1013 17:41:53.452610 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a82b2d7-243a-4818-b224-76f0300c42ba-scripts\") pod \"5a82b2d7-243a-4818-b224-76f0300c42ba\" (UID: \"5a82b2d7-243a-4818-b224-76f0300c42ba\") " Oct 13 17:41:53 crc kubenswrapper[4720]: I1013 17:41:53.452695 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a82b2d7-243a-4818-b224-76f0300c42ba-combined-ca-bundle\") pod \"5a82b2d7-243a-4818-b224-76f0300c42ba\" (UID: \"5a82b2d7-243a-4818-b224-76f0300c42ba\") " Oct 13 17:41:53 crc kubenswrapper[4720]: I1013 17:41:53.452726 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a82b2d7-243a-4818-b224-76f0300c42ba-config-data\") pod \"5a82b2d7-243a-4818-b224-76f0300c42ba\" (UID: \"5a82b2d7-243a-4818-b224-76f0300c42ba\") " Oct 13 17:41:53 crc kubenswrapper[4720]: I1013 17:41:53.452783 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5a82b2d7-243a-4818-b224-76f0300c42ba-sg-core-conf-yaml\") pod \"5a82b2d7-243a-4818-b224-76f0300c42ba\" (UID: \"5a82b2d7-243a-4818-b224-76f0300c42ba\") " Oct 13 17:41:53 crc kubenswrapper[4720]: I1013 17:41:53.452817 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5a82b2d7-243a-4818-b224-76f0300c42ba-run-httpd\") pod \"5a82b2d7-243a-4818-b224-76f0300c42ba\" (UID: \"5a82b2d7-243a-4818-b224-76f0300c42ba\") " Oct 13 17:41:53 crc kubenswrapper[4720]: I1013 17:41:53.452929 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5a82b2d7-243a-4818-b224-76f0300c42ba-log-httpd\") pod \"5a82b2d7-243a-4818-b224-76f0300c42ba\" (UID: \"5a82b2d7-243a-4818-b224-76f0300c42ba\") " Oct 13 17:41:53 crc kubenswrapper[4720]: I1013 17:41:53.453994 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a82b2d7-243a-4818-b224-76f0300c42ba-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "5a82b2d7-243a-4818-b224-76f0300c42ba" (UID: "5a82b2d7-243a-4818-b224-76f0300c42ba"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 17:41:53 crc kubenswrapper[4720]: I1013 17:41:53.462612 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a82b2d7-243a-4818-b224-76f0300c42ba-scripts" (OuterVolumeSpecName: "scripts") pod "5a82b2d7-243a-4818-b224-76f0300c42ba" (UID: "5a82b2d7-243a-4818-b224-76f0300c42ba"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:41:53 crc kubenswrapper[4720]: I1013 17:41:53.462851 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a82b2d7-243a-4818-b224-76f0300c42ba-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "5a82b2d7-243a-4818-b224-76f0300c42ba" (UID: "5a82b2d7-243a-4818-b224-76f0300c42ba"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 17:41:53 crc kubenswrapper[4720]: I1013 17:41:53.466909 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a82b2d7-243a-4818-b224-76f0300c42ba-kube-api-access-shsr5" (OuterVolumeSpecName: "kube-api-access-shsr5") pod "5a82b2d7-243a-4818-b224-76f0300c42ba" (UID: "5a82b2d7-243a-4818-b224-76f0300c42ba"). InnerVolumeSpecName "kube-api-access-shsr5". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:41:53 crc kubenswrapper[4720]: I1013 17:41:53.484885 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a82b2d7-243a-4818-b224-76f0300c42ba-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "5a82b2d7-243a-4818-b224-76f0300c42ba" (UID: "5a82b2d7-243a-4818-b224-76f0300c42ba"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:41:53 crc kubenswrapper[4720]: I1013 17:41:53.519544 4720 generic.go:334] "Generic (PLEG): container finished" podID="5a82b2d7-243a-4818-b224-76f0300c42ba" containerID="fbc39403f7abc33cbdda6e1fb9203b73c77e09f779f05eeb2259f0e8129e05b8" exitCode=0 Oct 13 17:41:53 crc kubenswrapper[4720]: I1013 17:41:53.519656 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 13 17:41:53 crc kubenswrapper[4720]: I1013 17:41:53.520307 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5a82b2d7-243a-4818-b224-76f0300c42ba","Type":"ContainerDied","Data":"fbc39403f7abc33cbdda6e1fb9203b73c77e09f779f05eeb2259f0e8129e05b8"} Oct 13 17:41:53 crc kubenswrapper[4720]: I1013 17:41:53.520339 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5a82b2d7-243a-4818-b224-76f0300c42ba","Type":"ContainerDied","Data":"cd9189558accd16ddfdd8250f66f07c51a666a1b3d4fdedc66486c06b3141d63"} Oct 13 17:41:53 crc kubenswrapper[4720]: I1013 17:41:53.520357 4720 scope.go:117] "RemoveContainer" containerID="c12f79b8ebfb5ff6ab164a385f83a17d992a0f0ed66a28ba5dc44aaad41aa971" Oct 13 17:41:53 crc kubenswrapper[4720]: I1013 17:41:53.535060 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a82b2d7-243a-4818-b224-76f0300c42ba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5a82b2d7-243a-4818-b224-76f0300c42ba" (UID: "5a82b2d7-243a-4818-b224-76f0300c42ba"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:41:53 crc kubenswrapper[4720]: I1013 17:41:53.538548 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a82b2d7-243a-4818-b224-76f0300c42ba-config-data" (OuterVolumeSpecName: "config-data") pod "5a82b2d7-243a-4818-b224-76f0300c42ba" (UID: "5a82b2d7-243a-4818-b224-76f0300c42ba"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:41:53 crc kubenswrapper[4720]: I1013 17:41:53.555860 4720 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5a82b2d7-243a-4818-b224-76f0300c42ba-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:53 crc kubenswrapper[4720]: I1013 17:41:53.556085 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shsr5\" (UniqueName: \"kubernetes.io/projected/5a82b2d7-243a-4818-b224-76f0300c42ba-kube-api-access-shsr5\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:53 crc kubenswrapper[4720]: I1013 17:41:53.556170 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a82b2d7-243a-4818-b224-76f0300c42ba-scripts\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:53 crc kubenswrapper[4720]: I1013 17:41:53.556570 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a82b2d7-243a-4818-b224-76f0300c42ba-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:53 crc kubenswrapper[4720]: I1013 17:41:53.556641 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a82b2d7-243a-4818-b224-76f0300c42ba-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:53 crc kubenswrapper[4720]: I1013 17:41:53.556714 4720 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5a82b2d7-243a-4818-b224-76f0300c42ba-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:53 crc kubenswrapper[4720]: I1013 17:41:53.556780 4720 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5a82b2d7-243a-4818-b224-76f0300c42ba-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 13 17:41:53 crc kubenswrapper[4720]: I1013 17:41:53.559486 4720 scope.go:117] "RemoveContainer" containerID="62d9e7d22f8b672deaf7a4bc47921fe4f6398a29c88fd07c2c16caa31a10accb" Oct 13 17:41:53 crc kubenswrapper[4720]: I1013 17:41:53.577571 4720 scope.go:117] "RemoveContainer" containerID="fbc39403f7abc33cbdda6e1fb9203b73c77e09f779f05eeb2259f0e8129e05b8" Oct 13 17:41:53 crc kubenswrapper[4720]: I1013 17:41:53.596985 4720 scope.go:117] "RemoveContainer" containerID="4633b935c5b9760435cd21725484ee711fe66257c2de65d45347215089861ab3" Oct 13 17:41:53 crc kubenswrapper[4720]: I1013 17:41:53.615729 4720 scope.go:117] "RemoveContainer" containerID="c12f79b8ebfb5ff6ab164a385f83a17d992a0f0ed66a28ba5dc44aaad41aa971" Oct 13 17:41:53 crc kubenswrapper[4720]: E1013 17:41:53.616269 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c12f79b8ebfb5ff6ab164a385f83a17d992a0f0ed66a28ba5dc44aaad41aa971\": container with ID starting with c12f79b8ebfb5ff6ab164a385f83a17d992a0f0ed66a28ba5dc44aaad41aa971 not found: ID does not exist" containerID="c12f79b8ebfb5ff6ab164a385f83a17d992a0f0ed66a28ba5dc44aaad41aa971" Oct 13 17:41:53 crc kubenswrapper[4720]: I1013 17:41:53.616356 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c12f79b8ebfb5ff6ab164a385f83a17d992a0f0ed66a28ba5dc44aaad41aa971"} err="failed to get container status \"c12f79b8ebfb5ff6ab164a385f83a17d992a0f0ed66a28ba5dc44aaad41aa971\": rpc error: code = NotFound desc = could not find container \"c12f79b8ebfb5ff6ab164a385f83a17d992a0f0ed66a28ba5dc44aaad41aa971\": 
container with ID starting with c12f79b8ebfb5ff6ab164a385f83a17d992a0f0ed66a28ba5dc44aaad41aa971 not found: ID does not exist" Oct 13 17:41:53 crc kubenswrapper[4720]: I1013 17:41:53.616431 4720 scope.go:117] "RemoveContainer" containerID="62d9e7d22f8b672deaf7a4bc47921fe4f6398a29c88fd07c2c16caa31a10accb" Oct 13 17:41:53 crc kubenswrapper[4720]: E1013 17:41:53.616735 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62d9e7d22f8b672deaf7a4bc47921fe4f6398a29c88fd07c2c16caa31a10accb\": container with ID starting with 62d9e7d22f8b672deaf7a4bc47921fe4f6398a29c88fd07c2c16caa31a10accb not found: ID does not exist" containerID="62d9e7d22f8b672deaf7a4bc47921fe4f6398a29c88fd07c2c16caa31a10accb" Oct 13 17:41:53 crc kubenswrapper[4720]: I1013 17:41:53.616807 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62d9e7d22f8b672deaf7a4bc47921fe4f6398a29c88fd07c2c16caa31a10accb"} err="failed to get container status \"62d9e7d22f8b672deaf7a4bc47921fe4f6398a29c88fd07c2c16caa31a10accb\": rpc error: code = NotFound desc = could not find container \"62d9e7d22f8b672deaf7a4bc47921fe4f6398a29c88fd07c2c16caa31a10accb\": container with ID starting with 62d9e7d22f8b672deaf7a4bc47921fe4f6398a29c88fd07c2c16caa31a10accb not found: ID does not exist" Oct 13 17:41:53 crc kubenswrapper[4720]: I1013 17:41:53.616877 4720 scope.go:117] "RemoveContainer" containerID="fbc39403f7abc33cbdda6e1fb9203b73c77e09f779f05eeb2259f0e8129e05b8" Oct 13 17:41:53 crc kubenswrapper[4720]: E1013 17:41:53.617132 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbc39403f7abc33cbdda6e1fb9203b73c77e09f779f05eeb2259f0e8129e05b8\": container with ID starting with fbc39403f7abc33cbdda6e1fb9203b73c77e09f779f05eeb2259f0e8129e05b8 not found: ID does not exist" containerID="fbc39403f7abc33cbdda6e1fb9203b73c77e09f779f05eeb2259f0e8129e05b8" Oct 13 17:41:53 crc kubenswrapper[4720]: I1013 17:41:53.617227 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbc39403f7abc33cbdda6e1fb9203b73c77e09f779f05eeb2259f0e8129e05b8"} err="failed to get container status \"fbc39403f7abc33cbdda6e1fb9203b73c77e09f779f05eeb2259f0e8129e05b8\": rpc error: code = NotFound desc = could not find container \"fbc39403f7abc33cbdda6e1fb9203b73c77e09f779f05eeb2259f0e8129e05b8\": container with ID starting with fbc39403f7abc33cbdda6e1fb9203b73c77e09f779f05eeb2259f0e8129e05b8 not found: ID does not exist" Oct 13 17:41:53 crc kubenswrapper[4720]: I1013 17:41:53.617296 4720 scope.go:117] "RemoveContainer" containerID="4633b935c5b9760435cd21725484ee711fe66257c2de65d45347215089861ab3" Oct 13 17:41:53 crc kubenswrapper[4720]: E1013 17:41:53.617579 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4633b935c5b9760435cd21725484ee711fe66257c2de65d45347215089861ab3\": container with ID starting with 4633b935c5b9760435cd21725484ee711fe66257c2de65d45347215089861ab3 not found: ID does not exist" containerID="4633b935c5b9760435cd21725484ee711fe66257c2de65d45347215089861ab3" Oct 13 17:41:53 crc kubenswrapper[4720]: I1013 17:41:53.617661 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4633b935c5b9760435cd21725484ee711fe66257c2de65d45347215089861ab3"} err="failed to get container status 
\"4633b935c5b9760435cd21725484ee711fe66257c2de65d45347215089861ab3\": rpc error: code = NotFound desc = could not find container \"4633b935c5b9760435cd21725484ee711fe66257c2de65d45347215089861ab3\": container with ID starting with 4633b935c5b9760435cd21725484ee711fe66257c2de65d45347215089861ab3 not found: ID does not exist" Oct 13 17:41:53 crc kubenswrapper[4720]: I1013 17:41:53.863074 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 13 17:41:53 crc kubenswrapper[4720]: I1013 17:41:53.876731 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 13 17:41:53 crc kubenswrapper[4720]: I1013 17:41:53.883486 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 13 17:41:53 crc kubenswrapper[4720]: E1013 17:41:53.883844 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a82b2d7-243a-4818-b224-76f0300c42ba" containerName="ceilometer-central-agent" Oct 13 17:41:53 crc kubenswrapper[4720]: I1013 17:41:53.883864 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a82b2d7-243a-4818-b224-76f0300c42ba" containerName="ceilometer-central-agent" Oct 13 17:41:53 crc kubenswrapper[4720]: E1013 17:41:53.883880 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a82b2d7-243a-4818-b224-76f0300c42ba" containerName="sg-core" Oct 13 17:41:53 crc kubenswrapper[4720]: I1013 17:41:53.883886 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a82b2d7-243a-4818-b224-76f0300c42ba" containerName="sg-core" Oct 13 17:41:53 crc kubenswrapper[4720]: E1013 17:41:53.883893 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a82b2d7-243a-4818-b224-76f0300c42ba" containerName="ceilometer-notification-agent" Oct 13 17:41:53 crc kubenswrapper[4720]: I1013 17:41:53.883901 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a82b2d7-243a-4818-b224-76f0300c42ba" containerName="ceilometer-notification-agent" Oct 13 17:41:53 crc kubenswrapper[4720]: E1013 17:41:53.883909 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a82b2d7-243a-4818-b224-76f0300c42ba" containerName="proxy-httpd" Oct 13 17:41:53 crc kubenswrapper[4720]: I1013 17:41:53.883915 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a82b2d7-243a-4818-b224-76f0300c42ba" containerName="proxy-httpd" Oct 13 17:41:53 crc kubenswrapper[4720]: I1013 17:41:53.884095 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a82b2d7-243a-4818-b224-76f0300c42ba" containerName="ceilometer-notification-agent" Oct 13 17:41:53 crc kubenswrapper[4720]: I1013 17:41:53.884112 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a82b2d7-243a-4818-b224-76f0300c42ba" containerName="ceilometer-central-agent" Oct 13 17:41:53 crc kubenswrapper[4720]: I1013 17:41:53.884122 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a82b2d7-243a-4818-b224-76f0300c42ba" containerName="proxy-httpd" Oct 13 17:41:53 crc kubenswrapper[4720]: I1013 17:41:53.884138 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a82b2d7-243a-4818-b224-76f0300c42ba" containerName="sg-core" Oct 13 17:41:53 crc kubenswrapper[4720]: I1013 17:41:53.885689 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 13 17:41:53 crc kubenswrapper[4720]: I1013 17:41:53.888391 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 13 17:41:53 crc kubenswrapper[4720]: I1013 17:41:53.889230 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 13 17:41:53 crc kubenswrapper[4720]: I1013 17:41:53.898914 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 13 17:41:54 crc kubenswrapper[4720]: I1013 17:41:54.064304 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e562d7d-19f2-4b5d-82a9-129d8128f66f-run-httpd\") pod \"ceilometer-0\" (UID: \"6e562d7d-19f2-4b5d-82a9-129d8128f66f\") " pod="openstack/ceilometer-0" Oct 13 17:41:54 crc kubenswrapper[4720]: I1013 17:41:54.064852 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e562d7d-19f2-4b5d-82a9-129d8128f66f-log-httpd\") pod \"ceilometer-0\" (UID: \"6e562d7d-19f2-4b5d-82a9-129d8128f66f\") " pod="openstack/ceilometer-0" Oct 13 17:41:54 crc kubenswrapper[4720]: I1013 17:41:54.065136 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e562d7d-19f2-4b5d-82a9-129d8128f66f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6e562d7d-19f2-4b5d-82a9-129d8128f66f\") " pod="openstack/ceilometer-0" Oct 13 17:41:54 crc kubenswrapper[4720]: I1013 17:41:54.065304 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgdgc\" (UniqueName: \"kubernetes.io/projected/6e562d7d-19f2-4b5d-82a9-129d8128f66f-kube-api-access-fgdgc\") pod \"ceilometer-0\" (UID: \"6e562d7d-19f2-4b5d-82a9-129d8128f66f\") " pod="openstack/ceilometer-0" Oct 13 17:41:54 crc kubenswrapper[4720]: I1013 17:41:54.065379 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6e562d7d-19f2-4b5d-82a9-129d8128f66f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6e562d7d-19f2-4b5d-82a9-129d8128f66f\") " pod="openstack/ceilometer-0" Oct 13 17:41:54 crc kubenswrapper[4720]: I1013 17:41:54.065429 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e562d7d-19f2-4b5d-82a9-129d8128f66f-config-data\") pod \"ceilometer-0\" (UID: \"6e562d7d-19f2-4b5d-82a9-129d8128f66f\") " pod="openstack/ceilometer-0" Oct 13 17:41:54 crc kubenswrapper[4720]: I1013 17:41:54.065624 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e562d7d-19f2-4b5d-82a9-129d8128f66f-scripts\") pod \"ceilometer-0\" (UID: \"6e562d7d-19f2-4b5d-82a9-129d8128f66f\") " pod="openstack/ceilometer-0" Oct 13 17:41:54 crc kubenswrapper[4720]: I1013 17:41:54.167660 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e562d7d-19f2-4b5d-82a9-129d8128f66f-scripts\") pod \"ceilometer-0\" (UID: \"6e562d7d-19f2-4b5d-82a9-129d8128f66f\") " pod="openstack/ceilometer-0" Oct 13 17:41:54 crc kubenswrapper[4720]: I1013 17:41:54.167731 4720 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e562d7d-19f2-4b5d-82a9-129d8128f66f-run-httpd\") pod \"ceilometer-0\" (UID: \"6e562d7d-19f2-4b5d-82a9-129d8128f66f\") " pod="openstack/ceilometer-0" Oct 13 17:41:54 crc kubenswrapper[4720]: I1013 17:41:54.167785 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e562d7d-19f2-4b5d-82a9-129d8128f66f-log-httpd\") pod \"ceilometer-0\" (UID: \"6e562d7d-19f2-4b5d-82a9-129d8128f66f\") " pod="openstack/ceilometer-0" Oct 13 17:41:54 crc kubenswrapper[4720]: I1013 17:41:54.167861 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e562d7d-19f2-4b5d-82a9-129d8128f66f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6e562d7d-19f2-4b5d-82a9-129d8128f66f\") " pod="openstack/ceilometer-0" Oct 13 17:41:54 crc kubenswrapper[4720]: I1013 17:41:54.167903 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgdgc\" (UniqueName: \"kubernetes.io/projected/6e562d7d-19f2-4b5d-82a9-129d8128f66f-kube-api-access-fgdgc\") pod \"ceilometer-0\" (UID: \"6e562d7d-19f2-4b5d-82a9-129d8128f66f\") " pod="openstack/ceilometer-0" Oct 13 17:41:54 crc kubenswrapper[4720]: I1013 17:41:54.168105 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6e562d7d-19f2-4b5d-82a9-129d8128f66f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6e562d7d-19f2-4b5d-82a9-129d8128f66f\") " pod="openstack/ceilometer-0" Oct 13 17:41:54 crc kubenswrapper[4720]: I1013 17:41:54.168135 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e562d7d-19f2-4b5d-82a9-129d8128f66f-config-data\") pod \"ceilometer-0\" (UID: \"6e562d7d-19f2-4b5d-82a9-129d8128f66f\") " pod="openstack/ceilometer-0" Oct 13 17:41:54 crc kubenswrapper[4720]: I1013 17:41:54.168212 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e562d7d-19f2-4b5d-82a9-129d8128f66f-run-httpd\") pod \"ceilometer-0\" (UID: \"6e562d7d-19f2-4b5d-82a9-129d8128f66f\") " pod="openstack/ceilometer-0" Oct 13 17:41:54 crc kubenswrapper[4720]: I1013 17:41:54.168409 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e562d7d-19f2-4b5d-82a9-129d8128f66f-log-httpd\") pod \"ceilometer-0\" (UID: \"6e562d7d-19f2-4b5d-82a9-129d8128f66f\") " pod="openstack/ceilometer-0" Oct 13 17:41:54 crc kubenswrapper[4720]: I1013 17:41:54.172693 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6e562d7d-19f2-4b5d-82a9-129d8128f66f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6e562d7d-19f2-4b5d-82a9-129d8128f66f\") " pod="openstack/ceilometer-0" Oct 13 17:41:54 crc kubenswrapper[4720]: I1013 17:41:54.172937 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e562d7d-19f2-4b5d-82a9-129d8128f66f-scripts\") pod \"ceilometer-0\" (UID: \"6e562d7d-19f2-4b5d-82a9-129d8128f66f\") " pod="openstack/ceilometer-0" Oct 13 17:41:54 crc kubenswrapper[4720]: I1013 17:41:54.173306 4720 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e562d7d-19f2-4b5d-82a9-129d8128f66f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6e562d7d-19f2-4b5d-82a9-129d8128f66f\") " pod="openstack/ceilometer-0" Oct 13 17:41:54 crc kubenswrapper[4720]: I1013 17:41:54.173593 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e562d7d-19f2-4b5d-82a9-129d8128f66f-config-data\") pod \"ceilometer-0\" (UID: \"6e562d7d-19f2-4b5d-82a9-129d8128f66f\") " pod="openstack/ceilometer-0" Oct 13 17:41:54 crc kubenswrapper[4720]: I1013 17:41:54.192596 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgdgc\" (UniqueName: \"kubernetes.io/projected/6e562d7d-19f2-4b5d-82a9-129d8128f66f-kube-api-access-fgdgc\") pod \"ceilometer-0\" (UID: \"6e562d7d-19f2-4b5d-82a9-129d8128f66f\") " pod="openstack/ceilometer-0" Oct 13 17:41:54 crc kubenswrapper[4720]: I1013 17:41:54.250420 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 13 17:41:54 crc kubenswrapper[4720]: W1013 17:41:54.713773 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6e562d7d_19f2_4b5d_82a9_129d8128f66f.slice/crio-a91a9df1bc1a3a79d410437d32935c9302d5bafa0be08ec1b4abe6bca901c454 WatchSource:0}: Error finding container a91a9df1bc1a3a79d410437d32935c9302d5bafa0be08ec1b4abe6bca901c454: Status 404 returned error can't find the container with id a91a9df1bc1a3a79d410437d32935c9302d5bafa0be08ec1b4abe6bca901c454 Oct 13 17:41:54 crc kubenswrapper[4720]: I1013 17:41:54.722594 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 13 17:41:55 crc kubenswrapper[4720]: I1013 17:41:55.183347 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a82b2d7-243a-4818-b224-76f0300c42ba" path="/var/lib/kubelet/pods/5a82b2d7-243a-4818-b224-76f0300c42ba/volumes" Oct 13 17:41:55 crc kubenswrapper[4720]: I1013 17:41:55.546784 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6e562d7d-19f2-4b5d-82a9-129d8128f66f","Type":"ContainerStarted","Data":"6719bdac9c22b77bc6474d466c3f90512df9624bf9a3c652cee60f5fdd7c1d1c"} Oct 13 17:41:55 crc kubenswrapper[4720]: I1013 17:41:55.546832 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6e562d7d-19f2-4b5d-82a9-129d8128f66f","Type":"ContainerStarted","Data":"a91a9df1bc1a3a79d410437d32935c9302d5bafa0be08ec1b4abe6bca901c454"} Oct 13 17:41:56 crc kubenswrapper[4720]: I1013 17:41:56.560480 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6e562d7d-19f2-4b5d-82a9-129d8128f66f","Type":"ContainerStarted","Data":"a5c3001d1504ef7a974b2871e80e14c20eb321021e69f6c7a93a7ba98e613e9f"} Oct 13 17:41:57 crc kubenswrapper[4720]: I1013 17:41:57.574595 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6e562d7d-19f2-4b5d-82a9-129d8128f66f","Type":"ContainerStarted","Data":"52e4948cd05dd0f58932a8ee391786e5bf454f487420016896bc53b4f4164bfa"} Oct 13 17:41:58 crc kubenswrapper[4720]: I1013 17:41:58.588124 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6e562d7d-19f2-4b5d-82a9-129d8128f66f","Type":"ContainerStarted","Data":"3ebf0b71f7adf2a473991fde6ef983541ecfc4bc5fc63ef4a3e18e5746125e11"} Oct 13 17:41:58 crc 
kubenswrapper[4720]: I1013 17:41:58.588500 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 13 17:41:58 crc kubenswrapper[4720]: I1013 17:41:58.632826 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.264039513 podStartE2EDuration="5.632794753s" podCreationTimestamp="2025-10-13 17:41:53 +0000 UTC" firstStartedPulling="2025-10-13 17:41:54.716977834 +0000 UTC m=+1060.174228006" lastFinishedPulling="2025-10-13 17:41:58.085733074 +0000 UTC m=+1063.542983246" observedRunningTime="2025-10-13 17:41:58.619218642 +0000 UTC m=+1064.076468814" watchObservedRunningTime="2025-10-13 17:41:58.632794753 +0000 UTC m=+1064.090044915" Oct 13 17:42:00 crc kubenswrapper[4720]: I1013 17:42:00.957721 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Oct 13 17:42:01 crc kubenswrapper[4720]: I1013 17:42:01.468389 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-5hfwq"] Oct 13 17:42:01 crc kubenswrapper[4720]: I1013 17:42:01.469919 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-5hfwq" Oct 13 17:42:01 crc kubenswrapper[4720]: I1013 17:42:01.473114 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Oct 13 17:42:01 crc kubenswrapper[4720]: I1013 17:42:01.480383 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Oct 13 17:42:01 crc kubenswrapper[4720]: I1013 17:42:01.495879 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-5hfwq"] Oct 13 17:42:01 crc kubenswrapper[4720]: I1013 17:42:01.607615 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05ec558f-1d81-4986-a415-06281c3cff62-config-data\") pod \"nova-cell0-cell-mapping-5hfwq\" (UID: \"05ec558f-1d81-4986-a415-06281c3cff62\") " pod="openstack/nova-cell0-cell-mapping-5hfwq" Oct 13 17:42:01 crc kubenswrapper[4720]: I1013 17:42:01.607709 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05ec558f-1d81-4986-a415-06281c3cff62-scripts\") pod \"nova-cell0-cell-mapping-5hfwq\" (UID: \"05ec558f-1d81-4986-a415-06281c3cff62\") " pod="openstack/nova-cell0-cell-mapping-5hfwq" Oct 13 17:42:01 crc kubenswrapper[4720]: I1013 17:42:01.607737 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rql5j\" (UniqueName: \"kubernetes.io/projected/05ec558f-1d81-4986-a415-06281c3cff62-kube-api-access-rql5j\") pod \"nova-cell0-cell-mapping-5hfwq\" (UID: \"05ec558f-1d81-4986-a415-06281c3cff62\") " pod="openstack/nova-cell0-cell-mapping-5hfwq" Oct 13 17:42:01 crc kubenswrapper[4720]: I1013 17:42:01.607754 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05ec558f-1d81-4986-a415-06281c3cff62-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-5hfwq\" (UID: \"05ec558f-1d81-4986-a415-06281c3cff62\") " pod="openstack/nova-cell0-cell-mapping-5hfwq" Oct 13 17:42:01 crc kubenswrapper[4720]: I1013 17:42:01.643855 4720 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-api-0"] Oct 13 17:42:01 crc kubenswrapper[4720]: I1013 17:42:01.645268 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 13 17:42:01 crc kubenswrapper[4720]: I1013 17:42:01.652451 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Oct 13 17:42:01 crc kubenswrapper[4720]: I1013 17:42:01.659504 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 13 17:42:01 crc kubenswrapper[4720]: I1013 17:42:01.684370 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Oct 13 17:42:01 crc kubenswrapper[4720]: I1013 17:42:01.685420 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 13 17:42:01 crc kubenswrapper[4720]: I1013 17:42:01.685515 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 13 17:42:01 crc kubenswrapper[4720]: I1013 17:42:01.687928 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Oct 13 17:42:01 crc kubenswrapper[4720]: I1013 17:42:01.709290 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05ec558f-1d81-4986-a415-06281c3cff62-scripts\") pod \"nova-cell0-cell-mapping-5hfwq\" (UID: \"05ec558f-1d81-4986-a415-06281c3cff62\") " pod="openstack/nova-cell0-cell-mapping-5hfwq" Oct 13 17:42:01 crc kubenswrapper[4720]: I1013 17:42:01.709329 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rql5j\" (UniqueName: \"kubernetes.io/projected/05ec558f-1d81-4986-a415-06281c3cff62-kube-api-access-rql5j\") pod \"nova-cell0-cell-mapping-5hfwq\" (UID: \"05ec558f-1d81-4986-a415-06281c3cff62\") " pod="openstack/nova-cell0-cell-mapping-5hfwq" Oct 13 17:42:01 crc kubenswrapper[4720]: I1013 17:42:01.709350 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05ec558f-1d81-4986-a415-06281c3cff62-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-5hfwq\" (UID: \"05ec558f-1d81-4986-a415-06281c3cff62\") " pod="openstack/nova-cell0-cell-mapping-5hfwq" Oct 13 17:42:01 crc kubenswrapper[4720]: I1013 17:42:01.709473 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05ec558f-1d81-4986-a415-06281c3cff62-config-data\") pod \"nova-cell0-cell-mapping-5hfwq\" (UID: \"05ec558f-1d81-4986-a415-06281c3cff62\") " pod="openstack/nova-cell0-cell-mapping-5hfwq" Oct 13 17:42:01 crc kubenswrapper[4720]: I1013 17:42:01.714847 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05ec558f-1d81-4986-a415-06281c3cff62-config-data\") pod \"nova-cell0-cell-mapping-5hfwq\" (UID: \"05ec558f-1d81-4986-a415-06281c3cff62\") " pod="openstack/nova-cell0-cell-mapping-5hfwq" Oct 13 17:42:01 crc kubenswrapper[4720]: I1013 17:42:01.715132 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05ec558f-1d81-4986-a415-06281c3cff62-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-5hfwq\" (UID: \"05ec558f-1d81-4986-a415-06281c3cff62\") " pod="openstack/nova-cell0-cell-mapping-5hfwq" Oct 13 17:42:01 crc kubenswrapper[4720]: I1013 17:42:01.717590 4720 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05ec558f-1d81-4986-a415-06281c3cff62-scripts\") pod \"nova-cell0-cell-mapping-5hfwq\" (UID: \"05ec558f-1d81-4986-a415-06281c3cff62\") " pod="openstack/nova-cell0-cell-mapping-5hfwq" Oct 13 17:42:01 crc kubenswrapper[4720]: I1013 17:42:01.741728 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rql5j\" (UniqueName: \"kubernetes.io/projected/05ec558f-1d81-4986-a415-06281c3cff62-kube-api-access-rql5j\") pod \"nova-cell0-cell-mapping-5hfwq\" (UID: \"05ec558f-1d81-4986-a415-06281c3cff62\") " pod="openstack/nova-cell0-cell-mapping-5hfwq" Oct 13 17:42:01 crc kubenswrapper[4720]: I1013 17:42:01.741795 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 13 17:42:01 crc kubenswrapper[4720]: I1013 17:42:01.742967 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 13 17:42:01 crc kubenswrapper[4720]: I1013 17:42:01.752942 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Oct 13 17:42:01 crc kubenswrapper[4720]: I1013 17:42:01.764509 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Oct 13 17:42:01 crc kubenswrapper[4720]: I1013 17:42:01.766066 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 13 17:42:01 crc kubenswrapper[4720]: I1013 17:42:01.767826 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Oct 13 17:42:01 crc kubenswrapper[4720]: I1013 17:42:01.779639 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 13 17:42:01 crc kubenswrapper[4720]: I1013 17:42:01.793250 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-5hfwq" Oct 13 17:42:01 crc kubenswrapper[4720]: I1013 17:42:01.814935 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8794551-360d-4269-b42d-9133a92f05c1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f8794551-360d-4269-b42d-9133a92f05c1\") " pod="openstack/nova-api-0" Oct 13 17:42:01 crc kubenswrapper[4720]: I1013 17:42:01.814997 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbq2c\" (UniqueName: \"kubernetes.io/projected/f8794551-360d-4269-b42d-9133a92f05c1-kube-api-access-pbq2c\") pod \"nova-api-0\" (UID: \"f8794551-360d-4269-b42d-9133a92f05c1\") " pod="openstack/nova-api-0" Oct 13 17:42:01 crc kubenswrapper[4720]: I1013 17:42:01.815028 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b78e3734-742b-488b-ace9-2e0ded0c394e-config-data\") pod \"nova-scheduler-0\" (UID: \"b78e3734-742b-488b-ace9-2e0ded0c394e\") " pod="openstack/nova-scheduler-0" Oct 13 17:42:01 crc kubenswrapper[4720]: I1013 17:42:01.815049 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsrsw\" (UniqueName: \"kubernetes.io/projected/b78e3734-742b-488b-ace9-2e0ded0c394e-kube-api-access-rsrsw\") pod \"nova-scheduler-0\" (UID: \"b78e3734-742b-488b-ace9-2e0ded0c394e\") " pod="openstack/nova-scheduler-0" Oct 13 17:42:01 crc kubenswrapper[4720]: I1013 17:42:01.815105 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f8794551-360d-4269-b42d-9133a92f05c1-logs\") pod \"nova-api-0\" (UID: \"f8794551-360d-4269-b42d-9133a92f05c1\") " pod="openstack/nova-api-0" Oct 13 17:42:01 crc kubenswrapper[4720]: I1013 17:42:01.815131 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b78e3734-742b-488b-ace9-2e0ded0c394e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b78e3734-742b-488b-ace9-2e0ded0c394e\") " pod="openstack/nova-scheduler-0" Oct 13 17:42:01 crc kubenswrapper[4720]: I1013 17:42:01.815156 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8794551-360d-4269-b42d-9133a92f05c1-config-data\") pod \"nova-api-0\" (UID: \"f8794551-360d-4269-b42d-9133a92f05c1\") " pod="openstack/nova-api-0" Oct 13 17:42:01 crc kubenswrapper[4720]: I1013 17:42:01.823056 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 13 17:42:01 crc kubenswrapper[4720]: I1013 17:42:01.875558 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-trkfz"] Oct 13 17:42:01 crc kubenswrapper[4720]: I1013 17:42:01.877934 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-trkfz" Oct 13 17:42:01 crc kubenswrapper[4720]: I1013 17:42:01.897718 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-trkfz"] Oct 13 17:42:01 crc kubenswrapper[4720]: I1013 17:42:01.923430 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b11439ac-25b3-45db-8d79-401448c8ef1a-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b11439ac-25b3-45db-8d79-401448c8ef1a\") " pod="openstack/nova-cell1-novncproxy-0" Oct 13 17:42:01 crc kubenswrapper[4720]: I1013 17:42:01.923474 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rsrsw\" (UniqueName: \"kubernetes.io/projected/b78e3734-742b-488b-ace9-2e0ded0c394e-kube-api-access-rsrsw\") pod \"nova-scheduler-0\" (UID: \"b78e3734-742b-488b-ace9-2e0ded0c394e\") " pod="openstack/nova-scheduler-0" Oct 13 17:42:01 crc kubenswrapper[4720]: I1013 17:42:01.923525 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9abd7a5-ccac-4a7e-b48e-4737a32d755b-config-data\") pod \"nova-metadata-0\" (UID: \"e9abd7a5-ccac-4a7e-b48e-4737a32d755b\") " pod="openstack/nova-metadata-0" Oct 13 17:42:01 crc kubenswrapper[4720]: I1013 17:42:01.923558 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f8794551-360d-4269-b42d-9133a92f05c1-logs\") pod \"nova-api-0\" (UID: \"f8794551-360d-4269-b42d-9133a92f05c1\") " pod="openstack/nova-api-0" Oct 13 17:42:01 crc kubenswrapper[4720]: I1013 17:42:01.923584 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b78e3734-742b-488b-ace9-2e0ded0c394e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b78e3734-742b-488b-ace9-2e0ded0c394e\") " pod="openstack/nova-scheduler-0" Oct 13 17:42:01 crc kubenswrapper[4720]: I1013 17:42:01.923627 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8794551-360d-4269-b42d-9133a92f05c1-config-data\") pod \"nova-api-0\" (UID: \"f8794551-360d-4269-b42d-9133a92f05c1\") " pod="openstack/nova-api-0" Oct 13 17:42:01 crc kubenswrapper[4720]: I1013 17:42:01.923649 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b11439ac-25b3-45db-8d79-401448c8ef1a-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b11439ac-25b3-45db-8d79-401448c8ef1a\") " pod="openstack/nova-cell1-novncproxy-0" Oct 13 17:42:01 crc kubenswrapper[4720]: I1013 17:42:01.923676 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9abd7a5-ccac-4a7e-b48e-4737a32d755b-logs\") pod \"nova-metadata-0\" (UID: \"e9abd7a5-ccac-4a7e-b48e-4737a32d755b\") " pod="openstack/nova-metadata-0" Oct 13 17:42:01 crc kubenswrapper[4720]: I1013 17:42:01.923698 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vvf2\" (UniqueName: \"kubernetes.io/projected/b11439ac-25b3-45db-8d79-401448c8ef1a-kube-api-access-2vvf2\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"b11439ac-25b3-45db-8d79-401448c8ef1a\") " pod="openstack/nova-cell1-novncproxy-0" Oct 13 17:42:01 crc kubenswrapper[4720]: I1013 17:42:01.923714 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xc8sj\" (UniqueName: \"kubernetes.io/projected/e9abd7a5-ccac-4a7e-b48e-4737a32d755b-kube-api-access-xc8sj\") pod \"nova-metadata-0\" (UID: \"e9abd7a5-ccac-4a7e-b48e-4737a32d755b\") " pod="openstack/nova-metadata-0" Oct 13 17:42:01 crc kubenswrapper[4720]: I1013 17:42:01.923751 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8794551-360d-4269-b42d-9133a92f05c1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f8794551-360d-4269-b42d-9133a92f05c1\") " pod="openstack/nova-api-0" Oct 13 17:42:01 crc kubenswrapper[4720]: I1013 17:42:01.923777 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbq2c\" (UniqueName: \"kubernetes.io/projected/f8794551-360d-4269-b42d-9133a92f05c1-kube-api-access-pbq2c\") pod \"nova-api-0\" (UID: \"f8794551-360d-4269-b42d-9133a92f05c1\") " pod="openstack/nova-api-0" Oct 13 17:42:01 crc kubenswrapper[4720]: I1013 17:42:01.923799 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9abd7a5-ccac-4a7e-b48e-4737a32d755b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e9abd7a5-ccac-4a7e-b48e-4737a32d755b\") " pod="openstack/nova-metadata-0" Oct 13 17:42:01 crc kubenswrapper[4720]: I1013 17:42:01.923823 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b78e3734-742b-488b-ace9-2e0ded0c394e-config-data\") pod \"nova-scheduler-0\" (UID: \"b78e3734-742b-488b-ace9-2e0ded0c394e\") " pod="openstack/nova-scheduler-0" Oct 13 17:42:01 crc kubenswrapper[4720]: I1013 17:42:01.924703 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f8794551-360d-4269-b42d-9133a92f05c1-logs\") pod \"nova-api-0\" (UID: \"f8794551-360d-4269-b42d-9133a92f05c1\") " pod="openstack/nova-api-0" Oct 13 17:42:01 crc kubenswrapper[4720]: I1013 17:42:01.929443 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b78e3734-742b-488b-ace9-2e0ded0c394e-config-data\") pod \"nova-scheduler-0\" (UID: \"b78e3734-742b-488b-ace9-2e0ded0c394e\") " pod="openstack/nova-scheduler-0" Oct 13 17:42:01 crc kubenswrapper[4720]: I1013 17:42:01.929927 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b78e3734-742b-488b-ace9-2e0ded0c394e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b78e3734-742b-488b-ace9-2e0ded0c394e\") " pod="openstack/nova-scheduler-0" Oct 13 17:42:01 crc kubenswrapper[4720]: I1013 17:42:01.943603 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8794551-360d-4269-b42d-9133a92f05c1-config-data\") pod \"nova-api-0\" (UID: \"f8794551-360d-4269-b42d-9133a92f05c1\") " pod="openstack/nova-api-0" Oct 13 17:42:01 crc kubenswrapper[4720]: I1013 17:42:01.943819 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsrsw\" (UniqueName: 
\"kubernetes.io/projected/b78e3734-742b-488b-ace9-2e0ded0c394e-kube-api-access-rsrsw\") pod \"nova-scheduler-0\" (UID: \"b78e3734-742b-488b-ace9-2e0ded0c394e\") " pod="openstack/nova-scheduler-0" Oct 13 17:42:01 crc kubenswrapper[4720]: I1013 17:42:01.944564 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8794551-360d-4269-b42d-9133a92f05c1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f8794551-360d-4269-b42d-9133a92f05c1\") " pod="openstack/nova-api-0" Oct 13 17:42:01 crc kubenswrapper[4720]: I1013 17:42:01.947636 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbq2c\" (UniqueName: \"kubernetes.io/projected/f8794551-360d-4269-b42d-9133a92f05c1-kube-api-access-pbq2c\") pod \"nova-api-0\" (UID: \"f8794551-360d-4269-b42d-9133a92f05c1\") " pod="openstack/nova-api-0" Oct 13 17:42:01 crc kubenswrapper[4720]: I1013 17:42:01.963442 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 13 17:42:02 crc kubenswrapper[4720]: I1013 17:42:02.007813 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 13 17:42:02 crc kubenswrapper[4720]: I1013 17:42:02.025340 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2r2p\" (UniqueName: \"kubernetes.io/projected/2eefeacb-a660-43cc-8091-718f61e76f26-kube-api-access-x2r2p\") pod \"dnsmasq-dns-845d6d6f59-trkfz\" (UID: \"2eefeacb-a660-43cc-8091-718f61e76f26\") " pod="openstack/dnsmasq-dns-845d6d6f59-trkfz" Oct 13 17:42:02 crc kubenswrapper[4720]: I1013 17:42:02.025398 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b11439ac-25b3-45db-8d79-401448c8ef1a-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b11439ac-25b3-45db-8d79-401448c8ef1a\") " pod="openstack/nova-cell1-novncproxy-0" Oct 13 17:42:02 crc kubenswrapper[4720]: I1013 17:42:02.025424 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2eefeacb-a660-43cc-8091-718f61e76f26-dns-svc\") pod \"dnsmasq-dns-845d6d6f59-trkfz\" (UID: \"2eefeacb-a660-43cc-8091-718f61e76f26\") " pod="openstack/dnsmasq-dns-845d6d6f59-trkfz" Oct 13 17:42:02 crc kubenswrapper[4720]: I1013 17:42:02.025449 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9abd7a5-ccac-4a7e-b48e-4737a32d755b-logs\") pod \"nova-metadata-0\" (UID: \"e9abd7a5-ccac-4a7e-b48e-4737a32d755b\") " pod="openstack/nova-metadata-0" Oct 13 17:42:02 crc kubenswrapper[4720]: I1013 17:42:02.025471 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vvf2\" (UniqueName: \"kubernetes.io/projected/b11439ac-25b3-45db-8d79-401448c8ef1a-kube-api-access-2vvf2\") pod \"nova-cell1-novncproxy-0\" (UID: \"b11439ac-25b3-45db-8d79-401448c8ef1a\") " pod="openstack/nova-cell1-novncproxy-0" Oct 13 17:42:02 crc kubenswrapper[4720]: I1013 17:42:02.025490 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xc8sj\" (UniqueName: \"kubernetes.io/projected/e9abd7a5-ccac-4a7e-b48e-4737a32d755b-kube-api-access-xc8sj\") pod \"nova-metadata-0\" (UID: \"e9abd7a5-ccac-4a7e-b48e-4737a32d755b\") " pod="openstack/nova-metadata-0" Oct 13 
17:42:02 crc kubenswrapper[4720]: I1013 17:42:02.025535 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2eefeacb-a660-43cc-8091-718f61e76f26-config\") pod \"dnsmasq-dns-845d6d6f59-trkfz\" (UID: \"2eefeacb-a660-43cc-8091-718f61e76f26\") " pod="openstack/dnsmasq-dns-845d6d6f59-trkfz" Oct 13 17:42:02 crc kubenswrapper[4720]: I1013 17:42:02.025565 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9abd7a5-ccac-4a7e-b48e-4737a32d755b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e9abd7a5-ccac-4a7e-b48e-4737a32d755b\") " pod="openstack/nova-metadata-0" Oct 13 17:42:02 crc kubenswrapper[4720]: I1013 17:42:02.025590 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b11439ac-25b3-45db-8d79-401448c8ef1a-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b11439ac-25b3-45db-8d79-401448c8ef1a\") " pod="openstack/nova-cell1-novncproxy-0" Oct 13 17:42:02 crc kubenswrapper[4720]: I1013 17:42:02.025606 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2eefeacb-a660-43cc-8091-718f61e76f26-ovsdbserver-nb\") pod \"dnsmasq-dns-845d6d6f59-trkfz\" (UID: \"2eefeacb-a660-43cc-8091-718f61e76f26\") " pod="openstack/dnsmasq-dns-845d6d6f59-trkfz" Oct 13 17:42:02 crc kubenswrapper[4720]: I1013 17:42:02.025644 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2eefeacb-a660-43cc-8091-718f61e76f26-dns-swift-storage-0\") pod \"dnsmasq-dns-845d6d6f59-trkfz\" (UID: \"2eefeacb-a660-43cc-8091-718f61e76f26\") " pod="openstack/dnsmasq-dns-845d6d6f59-trkfz" Oct 13 17:42:02 crc kubenswrapper[4720]: I1013 17:42:02.025663 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9abd7a5-ccac-4a7e-b48e-4737a32d755b-config-data\") pod \"nova-metadata-0\" (UID: \"e9abd7a5-ccac-4a7e-b48e-4737a32d755b\") " pod="openstack/nova-metadata-0" Oct 13 17:42:02 crc kubenswrapper[4720]: I1013 17:42:02.025689 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2eefeacb-a660-43cc-8091-718f61e76f26-ovsdbserver-sb\") pod \"dnsmasq-dns-845d6d6f59-trkfz\" (UID: \"2eefeacb-a660-43cc-8091-718f61e76f26\") " pod="openstack/dnsmasq-dns-845d6d6f59-trkfz" Oct 13 17:42:02 crc kubenswrapper[4720]: I1013 17:42:02.027992 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9abd7a5-ccac-4a7e-b48e-4737a32d755b-logs\") pod \"nova-metadata-0\" (UID: \"e9abd7a5-ccac-4a7e-b48e-4737a32d755b\") " pod="openstack/nova-metadata-0" Oct 13 17:42:02 crc kubenswrapper[4720]: I1013 17:42:02.031059 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b11439ac-25b3-45db-8d79-401448c8ef1a-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b11439ac-25b3-45db-8d79-401448c8ef1a\") " pod="openstack/nova-cell1-novncproxy-0" Oct 13 17:42:02 crc kubenswrapper[4720]: I1013 17:42:02.033885 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9abd7a5-ccac-4a7e-b48e-4737a32d755b-config-data\") pod \"nova-metadata-0\" (UID: \"e9abd7a5-ccac-4a7e-b48e-4737a32d755b\") " pod="openstack/nova-metadata-0" Oct 13 17:42:02 crc kubenswrapper[4720]: I1013 17:42:02.037743 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b11439ac-25b3-45db-8d79-401448c8ef1a-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b11439ac-25b3-45db-8d79-401448c8ef1a\") " pod="openstack/nova-cell1-novncproxy-0" Oct 13 17:42:02 crc kubenswrapper[4720]: I1013 17:42:02.039000 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9abd7a5-ccac-4a7e-b48e-4737a32d755b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e9abd7a5-ccac-4a7e-b48e-4737a32d755b\") " pod="openstack/nova-metadata-0" Oct 13 17:42:02 crc kubenswrapper[4720]: I1013 17:42:02.044252 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vvf2\" (UniqueName: \"kubernetes.io/projected/b11439ac-25b3-45db-8d79-401448c8ef1a-kube-api-access-2vvf2\") pod \"nova-cell1-novncproxy-0\" (UID: \"b11439ac-25b3-45db-8d79-401448c8ef1a\") " pod="openstack/nova-cell1-novncproxy-0" Oct 13 17:42:02 crc kubenswrapper[4720]: I1013 17:42:02.047874 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xc8sj\" (UniqueName: \"kubernetes.io/projected/e9abd7a5-ccac-4a7e-b48e-4737a32d755b-kube-api-access-xc8sj\") pod \"nova-metadata-0\" (UID: \"e9abd7a5-ccac-4a7e-b48e-4737a32d755b\") " pod="openstack/nova-metadata-0" Oct 13 17:42:02 crc kubenswrapper[4720]: I1013 17:42:02.126917 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2eefeacb-a660-43cc-8091-718f61e76f26-ovsdbserver-nb\") pod \"dnsmasq-dns-845d6d6f59-trkfz\" (UID: \"2eefeacb-a660-43cc-8091-718f61e76f26\") " pod="openstack/dnsmasq-dns-845d6d6f59-trkfz" Oct 13 17:42:02 crc kubenswrapper[4720]: I1013 17:42:02.127236 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2eefeacb-a660-43cc-8091-718f61e76f26-dns-swift-storage-0\") pod \"dnsmasq-dns-845d6d6f59-trkfz\" (UID: \"2eefeacb-a660-43cc-8091-718f61e76f26\") " pod="openstack/dnsmasq-dns-845d6d6f59-trkfz" Oct 13 17:42:02 crc kubenswrapper[4720]: I1013 17:42:02.127278 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2eefeacb-a660-43cc-8091-718f61e76f26-ovsdbserver-sb\") pod \"dnsmasq-dns-845d6d6f59-trkfz\" (UID: \"2eefeacb-a660-43cc-8091-718f61e76f26\") " pod="openstack/dnsmasq-dns-845d6d6f59-trkfz" Oct 13 17:42:02 crc kubenswrapper[4720]: I1013 17:42:02.127310 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2r2p\" (UniqueName: \"kubernetes.io/projected/2eefeacb-a660-43cc-8091-718f61e76f26-kube-api-access-x2r2p\") pod \"dnsmasq-dns-845d6d6f59-trkfz\" (UID: \"2eefeacb-a660-43cc-8091-718f61e76f26\") " pod="openstack/dnsmasq-dns-845d6d6f59-trkfz" Oct 13 17:42:02 crc kubenswrapper[4720]: I1013 17:42:02.127342 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2eefeacb-a660-43cc-8091-718f61e76f26-dns-svc\") pod 
\"dnsmasq-dns-845d6d6f59-trkfz\" (UID: \"2eefeacb-a660-43cc-8091-718f61e76f26\") " pod="openstack/dnsmasq-dns-845d6d6f59-trkfz" Oct 13 17:42:02 crc kubenswrapper[4720]: I1013 17:42:02.127405 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2eefeacb-a660-43cc-8091-718f61e76f26-config\") pod \"dnsmasq-dns-845d6d6f59-trkfz\" (UID: \"2eefeacb-a660-43cc-8091-718f61e76f26\") " pod="openstack/dnsmasq-dns-845d6d6f59-trkfz" Oct 13 17:42:02 crc kubenswrapper[4720]: I1013 17:42:02.128577 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2eefeacb-a660-43cc-8091-718f61e76f26-ovsdbserver-nb\") pod \"dnsmasq-dns-845d6d6f59-trkfz\" (UID: \"2eefeacb-a660-43cc-8091-718f61e76f26\") " pod="openstack/dnsmasq-dns-845d6d6f59-trkfz" Oct 13 17:42:02 crc kubenswrapper[4720]: I1013 17:42:02.129055 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2eefeacb-a660-43cc-8091-718f61e76f26-dns-swift-storage-0\") pod \"dnsmasq-dns-845d6d6f59-trkfz\" (UID: \"2eefeacb-a660-43cc-8091-718f61e76f26\") " pod="openstack/dnsmasq-dns-845d6d6f59-trkfz" Oct 13 17:42:02 crc kubenswrapper[4720]: I1013 17:42:02.129822 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2eefeacb-a660-43cc-8091-718f61e76f26-ovsdbserver-sb\") pod \"dnsmasq-dns-845d6d6f59-trkfz\" (UID: \"2eefeacb-a660-43cc-8091-718f61e76f26\") " pod="openstack/dnsmasq-dns-845d6d6f59-trkfz" Oct 13 17:42:02 crc kubenswrapper[4720]: I1013 17:42:02.131616 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2eefeacb-a660-43cc-8091-718f61e76f26-dns-svc\") pod \"dnsmasq-dns-845d6d6f59-trkfz\" (UID: \"2eefeacb-a660-43cc-8091-718f61e76f26\") " pod="openstack/dnsmasq-dns-845d6d6f59-trkfz" Oct 13 17:42:02 crc kubenswrapper[4720]: I1013 17:42:02.131871 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2eefeacb-a660-43cc-8091-718f61e76f26-config\") pod \"dnsmasq-dns-845d6d6f59-trkfz\" (UID: \"2eefeacb-a660-43cc-8091-718f61e76f26\") " pod="openstack/dnsmasq-dns-845d6d6f59-trkfz" Oct 13 17:42:02 crc kubenswrapper[4720]: I1013 17:42:02.145401 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-5hfwq"] Oct 13 17:42:02 crc kubenswrapper[4720]: I1013 17:42:02.153072 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2r2p\" (UniqueName: \"kubernetes.io/projected/2eefeacb-a660-43cc-8091-718f61e76f26-kube-api-access-x2r2p\") pod \"dnsmasq-dns-845d6d6f59-trkfz\" (UID: \"2eefeacb-a660-43cc-8091-718f61e76f26\") " pod="openstack/dnsmasq-dns-845d6d6f59-trkfz" Oct 13 17:42:02 crc kubenswrapper[4720]: I1013 17:42:02.178624 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 13 17:42:02 crc kubenswrapper[4720]: I1013 17:42:02.276685 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 13 17:42:02 crc kubenswrapper[4720]: I1013 17:42:02.294214 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-trkfz" Oct 13 17:42:02 crc kubenswrapper[4720]: I1013 17:42:02.471027 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 13 17:42:02 crc kubenswrapper[4720]: W1013 17:42:02.572082 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb78e3734_742b_488b_ace9_2e0ded0c394e.slice/crio-43028ba84052ee7078ea04eacc096d70c109e8257abb5abb9e4d8ab2e903f67d WatchSource:0}: Error finding container 43028ba84052ee7078ea04eacc096d70c109e8257abb5abb9e4d8ab2e903f67d: Status 404 returned error can't find the container with id 43028ba84052ee7078ea04eacc096d70c109e8257abb5abb9e4d8ab2e903f67d Oct 13 17:42:02 crc kubenswrapper[4720]: I1013 17:42:02.580759 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 13 17:42:02 crc kubenswrapper[4720]: I1013 17:42:02.643143 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b78e3734-742b-488b-ace9-2e0ded0c394e","Type":"ContainerStarted","Data":"43028ba84052ee7078ea04eacc096d70c109e8257abb5abb9e4d8ab2e903f67d"} Oct 13 17:42:02 crc kubenswrapper[4720]: I1013 17:42:02.645774 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f8794551-360d-4269-b42d-9133a92f05c1","Type":"ContainerStarted","Data":"e933ee6ae760d26e4a248af21820dcabb6072808e29251e008457774ea2c2572"} Oct 13 17:42:02 crc kubenswrapper[4720]: I1013 17:42:02.649750 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-5hfwq" event={"ID":"05ec558f-1d81-4986-a415-06281c3cff62","Type":"ContainerStarted","Data":"1d7425afd8731fb7f75336af7275836c0fec787e6fa79a5dab2675615665268d"} Oct 13 17:42:02 crc kubenswrapper[4720]: I1013 17:42:02.649782 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-5hfwq" event={"ID":"05ec558f-1d81-4986-a415-06281c3cff62","Type":"ContainerStarted","Data":"fbdfb2b638297b095144f7319afba9416e9b4f27fbbb17f4610386040dd73f7c"} Oct 13 17:42:02 crc kubenswrapper[4720]: I1013 17:42:02.651405 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-4lv7c"] Oct 13 17:42:02 crc kubenswrapper[4720]: I1013 17:42:02.652549 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-4lv7c" Oct 13 17:42:02 crc kubenswrapper[4720]: I1013 17:42:02.654343 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Oct 13 17:42:02 crc kubenswrapper[4720]: I1013 17:42:02.656324 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Oct 13 17:42:02 crc kubenswrapper[4720]: I1013 17:42:02.672875 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-4lv7c"] Oct 13 17:42:02 crc kubenswrapper[4720]: I1013 17:42:02.681084 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 13 17:42:02 crc kubenswrapper[4720]: I1013 17:42:02.689630 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-5hfwq" podStartSLOduration=1.68960893 podStartE2EDuration="1.68960893s" podCreationTimestamp="2025-10-13 17:42:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:42:02.672049467 +0000 UTC m=+1068.129299599" watchObservedRunningTime="2025-10-13 17:42:02.68960893 +0000 UTC m=+1068.146859052" Oct 13 17:42:02 crc kubenswrapper[4720]: I1013 17:42:02.755268 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b0268da-5ad5-40c5-8e86-4e99da5245e8-config-data\") pod \"nova-cell1-conductor-db-sync-4lv7c\" (UID: \"2b0268da-5ad5-40c5-8e86-4e99da5245e8\") " pod="openstack/nova-cell1-conductor-db-sync-4lv7c" Oct 13 17:42:02 crc kubenswrapper[4720]: I1013 17:42:02.755321 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b0268da-5ad5-40c5-8e86-4e99da5245e8-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-4lv7c\" (UID: \"2b0268da-5ad5-40c5-8e86-4e99da5245e8\") " pod="openstack/nova-cell1-conductor-db-sync-4lv7c" Oct 13 17:42:02 crc kubenswrapper[4720]: I1013 17:42:02.755384 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwgjw\" (UniqueName: \"kubernetes.io/projected/2b0268da-5ad5-40c5-8e86-4e99da5245e8-kube-api-access-vwgjw\") pod \"nova-cell1-conductor-db-sync-4lv7c\" (UID: \"2b0268da-5ad5-40c5-8e86-4e99da5245e8\") " pod="openstack/nova-cell1-conductor-db-sync-4lv7c" Oct 13 17:42:02 crc kubenswrapper[4720]: I1013 17:42:02.755464 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b0268da-5ad5-40c5-8e86-4e99da5245e8-scripts\") pod \"nova-cell1-conductor-db-sync-4lv7c\" (UID: \"2b0268da-5ad5-40c5-8e86-4e99da5245e8\") " pod="openstack/nova-cell1-conductor-db-sync-4lv7c" Oct 13 17:42:02 crc kubenswrapper[4720]: I1013 17:42:02.824447 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-trkfz"] Oct 13 17:42:02 crc kubenswrapper[4720]: I1013 17:42:02.857075 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b0268da-5ad5-40c5-8e86-4e99da5245e8-scripts\") pod \"nova-cell1-conductor-db-sync-4lv7c\" (UID: \"2b0268da-5ad5-40c5-8e86-4e99da5245e8\") " pod="openstack/nova-cell1-conductor-db-sync-4lv7c" Oct 13 
17:42:02 crc kubenswrapper[4720]: I1013 17:42:02.857179 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b0268da-5ad5-40c5-8e86-4e99da5245e8-config-data\") pod \"nova-cell1-conductor-db-sync-4lv7c\" (UID: \"2b0268da-5ad5-40c5-8e86-4e99da5245e8\") " pod="openstack/nova-cell1-conductor-db-sync-4lv7c" Oct 13 17:42:02 crc kubenswrapper[4720]: I1013 17:42:02.857217 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b0268da-5ad5-40c5-8e86-4e99da5245e8-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-4lv7c\" (UID: \"2b0268da-5ad5-40c5-8e86-4e99da5245e8\") " pod="openstack/nova-cell1-conductor-db-sync-4lv7c" Oct 13 17:42:02 crc kubenswrapper[4720]: I1013 17:42:02.857304 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwgjw\" (UniqueName: \"kubernetes.io/projected/2b0268da-5ad5-40c5-8e86-4e99da5245e8-kube-api-access-vwgjw\") pod \"nova-cell1-conductor-db-sync-4lv7c\" (UID: \"2b0268da-5ad5-40c5-8e86-4e99da5245e8\") " pod="openstack/nova-cell1-conductor-db-sync-4lv7c" Oct 13 17:42:02 crc kubenswrapper[4720]: I1013 17:42:02.860148 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b0268da-5ad5-40c5-8e86-4e99da5245e8-scripts\") pod \"nova-cell1-conductor-db-sync-4lv7c\" (UID: \"2b0268da-5ad5-40c5-8e86-4e99da5245e8\") " pod="openstack/nova-cell1-conductor-db-sync-4lv7c" Oct 13 17:42:02 crc kubenswrapper[4720]: I1013 17:42:02.865765 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b0268da-5ad5-40c5-8e86-4e99da5245e8-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-4lv7c\" (UID: \"2b0268da-5ad5-40c5-8e86-4e99da5245e8\") " pod="openstack/nova-cell1-conductor-db-sync-4lv7c" Oct 13 17:42:02 crc kubenswrapper[4720]: I1013 17:42:02.867649 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b0268da-5ad5-40c5-8e86-4e99da5245e8-config-data\") pod \"nova-cell1-conductor-db-sync-4lv7c\" (UID: \"2b0268da-5ad5-40c5-8e86-4e99da5245e8\") " pod="openstack/nova-cell1-conductor-db-sync-4lv7c" Oct 13 17:42:02 crc kubenswrapper[4720]: I1013 17:42:02.877385 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwgjw\" (UniqueName: \"kubernetes.io/projected/2b0268da-5ad5-40c5-8e86-4e99da5245e8-kube-api-access-vwgjw\") pod \"nova-cell1-conductor-db-sync-4lv7c\" (UID: \"2b0268da-5ad5-40c5-8e86-4e99da5245e8\") " pod="openstack/nova-cell1-conductor-db-sync-4lv7c" Oct 13 17:42:02 crc kubenswrapper[4720]: I1013 17:42:02.952722 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 13 17:42:02 crc kubenswrapper[4720]: I1013 17:42:02.985687 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-4lv7c" Oct 13 17:42:03 crc kubenswrapper[4720]: I1013 17:42:03.453235 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-4lv7c"] Oct 13 17:42:03 crc kubenswrapper[4720]: I1013 17:42:03.665752 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-4lv7c" event={"ID":"2b0268da-5ad5-40c5-8e86-4e99da5245e8","Type":"ContainerStarted","Data":"afa3e18365edad36869359a60171dbdc9fccc58972e3ec35ce0c98ee42b23a9c"} Oct 13 17:42:03 crc kubenswrapper[4720]: I1013 17:42:03.668325 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b11439ac-25b3-45db-8d79-401448c8ef1a","Type":"ContainerStarted","Data":"e7184b36afdc6ace9a0a7b546fa2157a1b87919923cd7e54033c208a865bcb6a"} Oct 13 17:42:03 crc kubenswrapper[4720]: I1013 17:42:03.673643 4720 generic.go:334] "Generic (PLEG): container finished" podID="2eefeacb-a660-43cc-8091-718f61e76f26" containerID="82f7066f84536a9fa0604513e10a98f4cae63c0c63a06c95f9eb54be138df06f" exitCode=0 Oct 13 17:42:03 crc kubenswrapper[4720]: I1013 17:42:03.673705 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-trkfz" event={"ID":"2eefeacb-a660-43cc-8091-718f61e76f26","Type":"ContainerDied","Data":"82f7066f84536a9fa0604513e10a98f4cae63c0c63a06c95f9eb54be138df06f"} Oct 13 17:42:03 crc kubenswrapper[4720]: I1013 17:42:03.673734 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-trkfz" event={"ID":"2eefeacb-a660-43cc-8091-718f61e76f26","Type":"ContainerStarted","Data":"b610de700cdc246d205fa7bde4851fefc086d8cf4dc554b3bb0248e02934e940"} Oct 13 17:42:03 crc kubenswrapper[4720]: I1013 17:42:03.676261 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e9abd7a5-ccac-4a7e-b48e-4737a32d755b","Type":"ContainerStarted","Data":"36711017ba7da8c7bb628622c7c250607e0ad89116d27511a7308b30986441d8"} Oct 13 17:42:04 crc kubenswrapper[4720]: I1013 17:42:04.691742 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-trkfz" event={"ID":"2eefeacb-a660-43cc-8091-718f61e76f26","Type":"ContainerStarted","Data":"41af5150c8e788e6a22ac99aea402a479b26a387d339a3c7a5055ddb60c60453"} Oct 13 17:42:04 crc kubenswrapper[4720]: I1013 17:42:04.692114 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-845d6d6f59-trkfz" Oct 13 17:42:04 crc kubenswrapper[4720]: I1013 17:42:04.697757 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-4lv7c" event={"ID":"2b0268da-5ad5-40c5-8e86-4e99da5245e8","Type":"ContainerStarted","Data":"75f0c0b1d96c7216334aca5291634900346ca7d7c6f02397a4e800227cdaba71"} Oct 13 17:42:04 crc kubenswrapper[4720]: I1013 17:42:04.710342 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-845d6d6f59-trkfz" podStartSLOduration=3.7103240509999997 podStartE2EDuration="3.710324051s" podCreationTimestamp="2025-10-13 17:42:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:42:04.708278828 +0000 UTC m=+1070.165528980" watchObservedRunningTime="2025-10-13 17:42:04.710324051 +0000 UTC m=+1070.167574193" Oct 13 17:42:04 crc kubenswrapper[4720]: I1013 17:42:04.731661 4720 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-4lv7c" podStartSLOduration=2.7316427599999997 podStartE2EDuration="2.73164276s" podCreationTimestamp="2025-10-13 17:42:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:42:04.72817197 +0000 UTC m=+1070.185422112" watchObservedRunningTime="2025-10-13 17:42:04.73164276 +0000 UTC m=+1070.188892912" Oct 13 17:42:05 crc kubenswrapper[4720]: I1013 17:42:05.260955 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 13 17:42:05 crc kubenswrapper[4720]: I1013 17:42:05.300620 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 13 17:42:05 crc kubenswrapper[4720]: I1013 17:42:05.714878 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e9abd7a5-ccac-4a7e-b48e-4737a32d755b","Type":"ContainerStarted","Data":"39a11fa50b6df1b3689847a966cea4b41e65202bdb9a7ef535b60b791b86c2d0"} Oct 13 17:42:05 crc kubenswrapper[4720]: I1013 17:42:05.717734 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b11439ac-25b3-45db-8d79-401448c8ef1a","Type":"ContainerStarted","Data":"cc578c42821f693c85b96858485605c733349afc8d2e9926e955016ecb2b2f5e"} Oct 13 17:42:05 crc kubenswrapper[4720]: I1013 17:42:05.717850 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="b11439ac-25b3-45db-8d79-401448c8ef1a" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://cc578c42821f693c85b96858485605c733349afc8d2e9926e955016ecb2b2f5e" gracePeriod=30 Oct 13 17:42:05 crc kubenswrapper[4720]: I1013 17:42:05.724721 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b78e3734-742b-488b-ace9-2e0ded0c394e","Type":"ContainerStarted","Data":"6d6b50e99892521a7b06c8a0ed3797bbe2e31a526a345726c2840d020c98e7d0"} Oct 13 17:42:05 crc kubenswrapper[4720]: I1013 17:42:05.729595 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f8794551-360d-4269-b42d-9133a92f05c1","Type":"ContainerStarted","Data":"2b2ef4c6b88ee6795ecda16ca973f8b2f6e2ee9b41664f4d741dae2f9a9d8531"} Oct 13 17:42:05 crc kubenswrapper[4720]: I1013 17:42:05.736003 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.068771656 podStartE2EDuration="4.73597948s" podCreationTimestamp="2025-10-13 17:42:01 +0000 UTC" firstStartedPulling="2025-10-13 17:42:02.674446789 +0000 UTC m=+1068.131696921" lastFinishedPulling="2025-10-13 17:42:05.341654613 +0000 UTC m=+1070.798904745" observedRunningTime="2025-10-13 17:42:05.730989151 +0000 UTC m=+1071.188239283" watchObservedRunningTime="2025-10-13 17:42:05.73597948 +0000 UTC m=+1071.193229612" Oct 13 17:42:05 crc kubenswrapper[4720]: I1013 17:42:05.755308 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.026395382 podStartE2EDuration="4.755291028s" podCreationTimestamp="2025-10-13 17:42:01 +0000 UTC" firstStartedPulling="2025-10-13 17:42:02.579709434 +0000 UTC m=+1068.036959566" lastFinishedPulling="2025-10-13 17:42:05.30860508 +0000 UTC m=+1070.765855212" observedRunningTime="2025-10-13 17:42:05.751333706 +0000 UTC 
m=+1071.208583838" watchObservedRunningTime="2025-10-13 17:42:05.755291028 +0000 UTC m=+1071.212541160" Oct 13 17:42:06 crc kubenswrapper[4720]: I1013 17:42:06.738909 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f8794551-360d-4269-b42d-9133a92f05c1","Type":"ContainerStarted","Data":"7c4c5f1590b2f168a608050563eaf35db7e4e5e4e1dc746d19e8475f60466225"} Oct 13 17:42:06 crc kubenswrapper[4720]: I1013 17:42:06.740730 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e9abd7a5-ccac-4a7e-b48e-4737a32d755b","Type":"ContainerStarted","Data":"a9666171acb8e4fc1cb9bfed5118cba949571bd4a0ff2a008fb50b2797795430"} Oct 13 17:42:06 crc kubenswrapper[4720]: I1013 17:42:06.740913 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e9abd7a5-ccac-4a7e-b48e-4737a32d755b" containerName="nova-metadata-log" containerID="cri-o://39a11fa50b6df1b3689847a966cea4b41e65202bdb9a7ef535b60b791b86c2d0" gracePeriod=30 Oct 13 17:42:06 crc kubenswrapper[4720]: I1013 17:42:06.740963 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e9abd7a5-ccac-4a7e-b48e-4737a32d755b" containerName="nova-metadata-metadata" containerID="cri-o://a9666171acb8e4fc1cb9bfed5118cba949571bd4a0ff2a008fb50b2797795430" gracePeriod=30 Oct 13 17:42:06 crc kubenswrapper[4720]: I1013 17:42:06.766818 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.925093945 podStartE2EDuration="5.766799633s" podCreationTimestamp="2025-10-13 17:42:01 +0000 UTC" firstStartedPulling="2025-10-13 17:42:02.496299021 +0000 UTC m=+1067.953549153" lastFinishedPulling="2025-10-13 17:42:05.338004709 +0000 UTC m=+1070.795254841" observedRunningTime="2025-10-13 17:42:06.762247206 +0000 UTC m=+1072.219497378" watchObservedRunningTime="2025-10-13 17:42:06.766799633 +0000 UTC m=+1072.224049765" Oct 13 17:42:06 crc kubenswrapper[4720]: I1013 17:42:06.803245 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.4553678310000002 podStartE2EDuration="5.803226213s" podCreationTimestamp="2025-10-13 17:42:01 +0000 UTC" firstStartedPulling="2025-10-13 17:42:02.961763564 +0000 UTC m=+1068.419013696" lastFinishedPulling="2025-10-13 17:42:05.309621946 +0000 UTC m=+1070.766872078" observedRunningTime="2025-10-13 17:42:06.798474231 +0000 UTC m=+1072.255724393" watchObservedRunningTime="2025-10-13 17:42:06.803226213 +0000 UTC m=+1072.260476345" Oct 13 17:42:07 crc kubenswrapper[4720]: I1013 17:42:07.008376 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Oct 13 17:42:07 crc kubenswrapper[4720]: I1013 17:42:07.179476 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Oct 13 17:42:07 crc kubenswrapper[4720]: I1013 17:42:07.276904 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 13 17:42:07 crc kubenswrapper[4720]: I1013 17:42:07.277174 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 13 17:42:07 crc kubenswrapper[4720]: I1013 17:42:07.425410 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 13 17:42:07 crc kubenswrapper[4720]: I1013 17:42:07.554719 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9abd7a5-ccac-4a7e-b48e-4737a32d755b-logs\") pod \"e9abd7a5-ccac-4a7e-b48e-4737a32d755b\" (UID: \"e9abd7a5-ccac-4a7e-b48e-4737a32d755b\") " Oct 13 17:42:07 crc kubenswrapper[4720]: I1013 17:42:07.554822 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9abd7a5-ccac-4a7e-b48e-4737a32d755b-config-data\") pod \"e9abd7a5-ccac-4a7e-b48e-4737a32d755b\" (UID: \"e9abd7a5-ccac-4a7e-b48e-4737a32d755b\") " Oct 13 17:42:07 crc kubenswrapper[4720]: I1013 17:42:07.554935 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9abd7a5-ccac-4a7e-b48e-4737a32d755b-combined-ca-bundle\") pod \"e9abd7a5-ccac-4a7e-b48e-4737a32d755b\" (UID: \"e9abd7a5-ccac-4a7e-b48e-4737a32d755b\") " Oct 13 17:42:07 crc kubenswrapper[4720]: I1013 17:42:07.555052 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xc8sj\" (UniqueName: \"kubernetes.io/projected/e9abd7a5-ccac-4a7e-b48e-4737a32d755b-kube-api-access-xc8sj\") pod \"e9abd7a5-ccac-4a7e-b48e-4737a32d755b\" (UID: \"e9abd7a5-ccac-4a7e-b48e-4737a32d755b\") " Oct 13 17:42:07 crc kubenswrapper[4720]: I1013 17:42:07.555051 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9abd7a5-ccac-4a7e-b48e-4737a32d755b-logs" (OuterVolumeSpecName: "logs") pod "e9abd7a5-ccac-4a7e-b48e-4737a32d755b" (UID: "e9abd7a5-ccac-4a7e-b48e-4737a32d755b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 17:42:07 crc kubenswrapper[4720]: I1013 17:42:07.555597 4720 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9abd7a5-ccac-4a7e-b48e-4737a32d755b-logs\") on node \"crc\" DevicePath \"\"" Oct 13 17:42:07 crc kubenswrapper[4720]: I1013 17:42:07.564556 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9abd7a5-ccac-4a7e-b48e-4737a32d755b-kube-api-access-xc8sj" (OuterVolumeSpecName: "kube-api-access-xc8sj") pod "e9abd7a5-ccac-4a7e-b48e-4737a32d755b" (UID: "e9abd7a5-ccac-4a7e-b48e-4737a32d755b"). InnerVolumeSpecName "kube-api-access-xc8sj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:42:07 crc kubenswrapper[4720]: I1013 17:42:07.607588 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9abd7a5-ccac-4a7e-b48e-4737a32d755b-config-data" (OuterVolumeSpecName: "config-data") pod "e9abd7a5-ccac-4a7e-b48e-4737a32d755b" (UID: "e9abd7a5-ccac-4a7e-b48e-4737a32d755b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:42:07 crc kubenswrapper[4720]: I1013 17:42:07.610309 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9abd7a5-ccac-4a7e-b48e-4737a32d755b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e9abd7a5-ccac-4a7e-b48e-4737a32d755b" (UID: "e9abd7a5-ccac-4a7e-b48e-4737a32d755b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:42:07 crc kubenswrapper[4720]: I1013 17:42:07.657477 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xc8sj\" (UniqueName: \"kubernetes.io/projected/e9abd7a5-ccac-4a7e-b48e-4737a32d755b-kube-api-access-xc8sj\") on node \"crc\" DevicePath \"\"" Oct 13 17:42:07 crc kubenswrapper[4720]: I1013 17:42:07.657521 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9abd7a5-ccac-4a7e-b48e-4737a32d755b-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 17:42:07 crc kubenswrapper[4720]: I1013 17:42:07.657542 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9abd7a5-ccac-4a7e-b48e-4737a32d755b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 17:42:07 crc kubenswrapper[4720]: I1013 17:42:07.757602 4720 generic.go:334] "Generic (PLEG): container finished" podID="e9abd7a5-ccac-4a7e-b48e-4737a32d755b" containerID="a9666171acb8e4fc1cb9bfed5118cba949571bd4a0ff2a008fb50b2797795430" exitCode=0 Oct 13 17:42:07 crc kubenswrapper[4720]: I1013 17:42:07.757650 4720 generic.go:334] "Generic (PLEG): container finished" podID="e9abd7a5-ccac-4a7e-b48e-4737a32d755b" containerID="39a11fa50b6df1b3689847a966cea4b41e65202bdb9a7ef535b60b791b86c2d0" exitCode=143 Oct 13 17:42:07 crc kubenswrapper[4720]: I1013 17:42:07.757711 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 13 17:42:07 crc kubenswrapper[4720]: I1013 17:42:07.757692 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e9abd7a5-ccac-4a7e-b48e-4737a32d755b","Type":"ContainerDied","Data":"a9666171acb8e4fc1cb9bfed5118cba949571bd4a0ff2a008fb50b2797795430"} Oct 13 17:42:07 crc kubenswrapper[4720]: I1013 17:42:07.757797 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e9abd7a5-ccac-4a7e-b48e-4737a32d755b","Type":"ContainerDied","Data":"39a11fa50b6df1b3689847a966cea4b41e65202bdb9a7ef535b60b791b86c2d0"} Oct 13 17:42:07 crc kubenswrapper[4720]: I1013 17:42:07.757832 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e9abd7a5-ccac-4a7e-b48e-4737a32d755b","Type":"ContainerDied","Data":"36711017ba7da8c7bb628622c7c250607e0ad89116d27511a7308b30986441d8"} Oct 13 17:42:07 crc kubenswrapper[4720]: I1013 17:42:07.757863 4720 scope.go:117] "RemoveContainer" containerID="a9666171acb8e4fc1cb9bfed5118cba949571bd4a0ff2a008fb50b2797795430" Oct 13 17:42:07 crc kubenswrapper[4720]: I1013 17:42:07.796028 4720 scope.go:117] "RemoveContainer" containerID="39a11fa50b6df1b3689847a966cea4b41e65202bdb9a7ef535b60b791b86c2d0" Oct 13 17:42:07 crc kubenswrapper[4720]: I1013 17:42:07.830049 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 13 17:42:07 crc kubenswrapper[4720]: I1013 17:42:07.841099 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Oct 13 17:42:07 crc kubenswrapper[4720]: I1013 17:42:07.847001 4720 scope.go:117] "RemoveContainer" containerID="a9666171acb8e4fc1cb9bfed5118cba949571bd4a0ff2a008fb50b2797795430" Oct 13 17:42:07 crc kubenswrapper[4720]: E1013 17:42:07.848571 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9666171acb8e4fc1cb9bfed5118cba949571bd4a0ff2a008fb50b2797795430\": 
container with ID starting with a9666171acb8e4fc1cb9bfed5118cba949571bd4a0ff2a008fb50b2797795430 not found: ID does not exist" containerID="a9666171acb8e4fc1cb9bfed5118cba949571bd4a0ff2a008fb50b2797795430" Oct 13 17:42:07 crc kubenswrapper[4720]: I1013 17:42:07.848620 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9666171acb8e4fc1cb9bfed5118cba949571bd4a0ff2a008fb50b2797795430"} err="failed to get container status \"a9666171acb8e4fc1cb9bfed5118cba949571bd4a0ff2a008fb50b2797795430\": rpc error: code = NotFound desc = could not find container \"a9666171acb8e4fc1cb9bfed5118cba949571bd4a0ff2a008fb50b2797795430\": container with ID starting with a9666171acb8e4fc1cb9bfed5118cba949571bd4a0ff2a008fb50b2797795430 not found: ID does not exist" Oct 13 17:42:07 crc kubenswrapper[4720]: I1013 17:42:07.848655 4720 scope.go:117] "RemoveContainer" containerID="39a11fa50b6df1b3689847a966cea4b41e65202bdb9a7ef535b60b791b86c2d0" Oct 13 17:42:07 crc kubenswrapper[4720]: E1013 17:42:07.849140 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39a11fa50b6df1b3689847a966cea4b41e65202bdb9a7ef535b60b791b86c2d0\": container with ID starting with 39a11fa50b6df1b3689847a966cea4b41e65202bdb9a7ef535b60b791b86c2d0 not found: ID does not exist" containerID="39a11fa50b6df1b3689847a966cea4b41e65202bdb9a7ef535b60b791b86c2d0" Oct 13 17:42:07 crc kubenswrapper[4720]: I1013 17:42:07.849232 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39a11fa50b6df1b3689847a966cea4b41e65202bdb9a7ef535b60b791b86c2d0"} err="failed to get container status \"39a11fa50b6df1b3689847a966cea4b41e65202bdb9a7ef535b60b791b86c2d0\": rpc error: code = NotFound desc = could not find container \"39a11fa50b6df1b3689847a966cea4b41e65202bdb9a7ef535b60b791b86c2d0\": container with ID starting with 39a11fa50b6df1b3689847a966cea4b41e65202bdb9a7ef535b60b791b86c2d0 not found: ID does not exist" Oct 13 17:42:07 crc kubenswrapper[4720]: I1013 17:42:07.849260 4720 scope.go:117] "RemoveContainer" containerID="a9666171acb8e4fc1cb9bfed5118cba949571bd4a0ff2a008fb50b2797795430" Oct 13 17:42:07 crc kubenswrapper[4720]: I1013 17:42:07.849812 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9666171acb8e4fc1cb9bfed5118cba949571bd4a0ff2a008fb50b2797795430"} err="failed to get container status \"a9666171acb8e4fc1cb9bfed5118cba949571bd4a0ff2a008fb50b2797795430\": rpc error: code = NotFound desc = could not find container \"a9666171acb8e4fc1cb9bfed5118cba949571bd4a0ff2a008fb50b2797795430\": container with ID starting with a9666171acb8e4fc1cb9bfed5118cba949571bd4a0ff2a008fb50b2797795430 not found: ID does not exist" Oct 13 17:42:07 crc kubenswrapper[4720]: I1013 17:42:07.849890 4720 scope.go:117] "RemoveContainer" containerID="39a11fa50b6df1b3689847a966cea4b41e65202bdb9a7ef535b60b791b86c2d0" Oct 13 17:42:07 crc kubenswrapper[4720]: I1013 17:42:07.850679 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39a11fa50b6df1b3689847a966cea4b41e65202bdb9a7ef535b60b791b86c2d0"} err="failed to get container status \"39a11fa50b6df1b3689847a966cea4b41e65202bdb9a7ef535b60b791b86c2d0\": rpc error: code = NotFound desc = could not find container \"39a11fa50b6df1b3689847a966cea4b41e65202bdb9a7ef535b60b791b86c2d0\": container with ID starting with 39a11fa50b6df1b3689847a966cea4b41e65202bdb9a7ef535b60b791b86c2d0 not 
found: ID does not exist" Oct 13 17:42:07 crc kubenswrapper[4720]: I1013 17:42:07.854259 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Oct 13 17:42:07 crc kubenswrapper[4720]: E1013 17:42:07.854886 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9abd7a5-ccac-4a7e-b48e-4737a32d755b" containerName="nova-metadata-metadata" Oct 13 17:42:07 crc kubenswrapper[4720]: I1013 17:42:07.854907 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9abd7a5-ccac-4a7e-b48e-4737a32d755b" containerName="nova-metadata-metadata" Oct 13 17:42:07 crc kubenswrapper[4720]: E1013 17:42:07.854939 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9abd7a5-ccac-4a7e-b48e-4737a32d755b" containerName="nova-metadata-log" Oct 13 17:42:07 crc kubenswrapper[4720]: I1013 17:42:07.854948 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9abd7a5-ccac-4a7e-b48e-4737a32d755b" containerName="nova-metadata-log" Oct 13 17:42:07 crc kubenswrapper[4720]: I1013 17:42:07.855218 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9abd7a5-ccac-4a7e-b48e-4737a32d755b" containerName="nova-metadata-log" Oct 13 17:42:07 crc kubenswrapper[4720]: I1013 17:42:07.855259 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9abd7a5-ccac-4a7e-b48e-4737a32d755b" containerName="nova-metadata-metadata" Oct 13 17:42:07 crc kubenswrapper[4720]: I1013 17:42:07.856508 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 13 17:42:07 crc kubenswrapper[4720]: I1013 17:42:07.859700 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Oct 13 17:42:07 crc kubenswrapper[4720]: I1013 17:42:07.860081 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Oct 13 17:42:07 crc kubenswrapper[4720]: I1013 17:42:07.864656 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 13 17:42:07 crc kubenswrapper[4720]: I1013 17:42:07.962611 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea016dd0-f81e-4bb0-8cad-df33b5db6d35-logs\") pod \"nova-metadata-0\" (UID: \"ea016dd0-f81e-4bb0-8cad-df33b5db6d35\") " pod="openstack/nova-metadata-0" Oct 13 17:42:07 crc kubenswrapper[4720]: I1013 17:42:07.962659 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea016dd0-f81e-4bb0-8cad-df33b5db6d35-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ea016dd0-f81e-4bb0-8cad-df33b5db6d35\") " pod="openstack/nova-metadata-0" Oct 13 17:42:07 crc kubenswrapper[4720]: I1013 17:42:07.962691 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgzgb\" (UniqueName: \"kubernetes.io/projected/ea016dd0-f81e-4bb0-8cad-df33b5db6d35-kube-api-access-vgzgb\") pod \"nova-metadata-0\" (UID: \"ea016dd0-f81e-4bb0-8cad-df33b5db6d35\") " pod="openstack/nova-metadata-0" Oct 13 17:42:07 crc kubenswrapper[4720]: I1013 17:42:07.962934 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea016dd0-f81e-4bb0-8cad-df33b5db6d35-config-data\") pod \"nova-metadata-0\" (UID: \"ea016dd0-f81e-4bb0-8cad-df33b5db6d35\") " 
pod="openstack/nova-metadata-0" Oct 13 17:42:07 crc kubenswrapper[4720]: I1013 17:42:07.963043 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea016dd0-f81e-4bb0-8cad-df33b5db6d35-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ea016dd0-f81e-4bb0-8cad-df33b5db6d35\") " pod="openstack/nova-metadata-0" Oct 13 17:42:08 crc kubenswrapper[4720]: I1013 17:42:08.086410 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea016dd0-f81e-4bb0-8cad-df33b5db6d35-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ea016dd0-f81e-4bb0-8cad-df33b5db6d35\") " pod="openstack/nova-metadata-0" Oct 13 17:42:08 crc kubenswrapper[4720]: I1013 17:42:08.086479 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgzgb\" (UniqueName: \"kubernetes.io/projected/ea016dd0-f81e-4bb0-8cad-df33b5db6d35-kube-api-access-vgzgb\") pod \"nova-metadata-0\" (UID: \"ea016dd0-f81e-4bb0-8cad-df33b5db6d35\") " pod="openstack/nova-metadata-0" Oct 13 17:42:08 crc kubenswrapper[4720]: I1013 17:42:08.086568 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea016dd0-f81e-4bb0-8cad-df33b5db6d35-config-data\") pod \"nova-metadata-0\" (UID: \"ea016dd0-f81e-4bb0-8cad-df33b5db6d35\") " pod="openstack/nova-metadata-0" Oct 13 17:42:08 crc kubenswrapper[4720]: I1013 17:42:08.086629 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea016dd0-f81e-4bb0-8cad-df33b5db6d35-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ea016dd0-f81e-4bb0-8cad-df33b5db6d35\") " pod="openstack/nova-metadata-0" Oct 13 17:42:08 crc kubenswrapper[4720]: I1013 17:42:08.087239 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea016dd0-f81e-4bb0-8cad-df33b5db6d35-logs\") pod \"nova-metadata-0\" (UID: \"ea016dd0-f81e-4bb0-8cad-df33b5db6d35\") " pod="openstack/nova-metadata-0" Oct 13 17:42:08 crc kubenswrapper[4720]: I1013 17:42:08.087675 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea016dd0-f81e-4bb0-8cad-df33b5db6d35-logs\") pod \"nova-metadata-0\" (UID: \"ea016dd0-f81e-4bb0-8cad-df33b5db6d35\") " pod="openstack/nova-metadata-0" Oct 13 17:42:08 crc kubenswrapper[4720]: I1013 17:42:08.093793 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea016dd0-f81e-4bb0-8cad-df33b5db6d35-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ea016dd0-f81e-4bb0-8cad-df33b5db6d35\") " pod="openstack/nova-metadata-0" Oct 13 17:42:08 crc kubenswrapper[4720]: I1013 17:42:08.106398 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea016dd0-f81e-4bb0-8cad-df33b5db6d35-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ea016dd0-f81e-4bb0-8cad-df33b5db6d35\") " pod="openstack/nova-metadata-0" Oct 13 17:42:08 crc kubenswrapper[4720]: I1013 17:42:08.121904 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgzgb\" (UniqueName: 
\"kubernetes.io/projected/ea016dd0-f81e-4bb0-8cad-df33b5db6d35-kube-api-access-vgzgb\") pod \"nova-metadata-0\" (UID: \"ea016dd0-f81e-4bb0-8cad-df33b5db6d35\") " pod="openstack/nova-metadata-0" Oct 13 17:42:08 crc kubenswrapper[4720]: I1013 17:42:08.123310 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea016dd0-f81e-4bb0-8cad-df33b5db6d35-config-data\") pod \"nova-metadata-0\" (UID: \"ea016dd0-f81e-4bb0-8cad-df33b5db6d35\") " pod="openstack/nova-metadata-0" Oct 13 17:42:08 crc kubenswrapper[4720]: I1013 17:42:08.187840 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 13 17:42:08 crc kubenswrapper[4720]: I1013 17:42:08.668777 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 13 17:42:08 crc kubenswrapper[4720]: I1013 17:42:08.769024 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ea016dd0-f81e-4bb0-8cad-df33b5db6d35","Type":"ContainerStarted","Data":"80ccb1ad59ed6e170c34e382d01be5c3cd9ef1d13d6e999acf8db22befa09caa"} Oct 13 17:42:09 crc kubenswrapper[4720]: I1013 17:42:09.184728 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9abd7a5-ccac-4a7e-b48e-4737a32d755b" path="/var/lib/kubelet/pods/e9abd7a5-ccac-4a7e-b48e-4737a32d755b/volumes" Oct 13 17:42:09 crc kubenswrapper[4720]: I1013 17:42:09.788367 4720 generic.go:334] "Generic (PLEG): container finished" podID="05ec558f-1d81-4986-a415-06281c3cff62" containerID="1d7425afd8731fb7f75336af7275836c0fec787e6fa79a5dab2675615665268d" exitCode=0 Oct 13 17:42:09 crc kubenswrapper[4720]: I1013 17:42:09.788446 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-5hfwq" event={"ID":"05ec558f-1d81-4986-a415-06281c3cff62","Type":"ContainerDied","Data":"1d7425afd8731fb7f75336af7275836c0fec787e6fa79a5dab2675615665268d"} Oct 13 17:42:09 crc kubenswrapper[4720]: I1013 17:42:09.791061 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ea016dd0-f81e-4bb0-8cad-df33b5db6d35","Type":"ContainerStarted","Data":"34ad10a355ebf8bc6adf2465f96efcc0660a5c7a99d9ed3106b5b7b52d7fcfa8"} Oct 13 17:42:09 crc kubenswrapper[4720]: I1013 17:42:09.791095 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ea016dd0-f81e-4bb0-8cad-df33b5db6d35","Type":"ContainerStarted","Data":"28702ffd2a4b1f21581c85e6642e32335194b01feca32ce5fd6a9ebaa5ab6391"} Oct 13 17:42:09 crc kubenswrapper[4720]: I1013 17:42:09.850749 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.850726692 podStartE2EDuration="2.850726692s" podCreationTimestamp="2025-10-13 17:42:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:42:09.831338632 +0000 UTC m=+1075.288588774" watchObservedRunningTime="2025-10-13 17:42:09.850726692 +0000 UTC m=+1075.307976844" Oct 13 17:42:10 crc kubenswrapper[4720]: I1013 17:42:10.805524 4720 generic.go:334] "Generic (PLEG): container finished" podID="2b0268da-5ad5-40c5-8e86-4e99da5245e8" containerID="75f0c0b1d96c7216334aca5291634900346ca7d7c6f02397a4e800227cdaba71" exitCode=0 Oct 13 17:42:10 crc kubenswrapper[4720]: I1013 17:42:10.805614 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-conductor-db-sync-4lv7c" event={"ID":"2b0268da-5ad5-40c5-8e86-4e99da5245e8","Type":"ContainerDied","Data":"75f0c0b1d96c7216334aca5291634900346ca7d7c6f02397a4e800227cdaba71"} Oct 13 17:42:11 crc kubenswrapper[4720]: I1013 17:42:11.214798 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-5hfwq" Oct 13 17:42:11 crc kubenswrapper[4720]: I1013 17:42:11.279074 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rql5j\" (UniqueName: \"kubernetes.io/projected/05ec558f-1d81-4986-a415-06281c3cff62-kube-api-access-rql5j\") pod \"05ec558f-1d81-4986-a415-06281c3cff62\" (UID: \"05ec558f-1d81-4986-a415-06281c3cff62\") " Oct 13 17:42:11 crc kubenswrapper[4720]: I1013 17:42:11.279398 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05ec558f-1d81-4986-a415-06281c3cff62-scripts\") pod \"05ec558f-1d81-4986-a415-06281c3cff62\" (UID: \"05ec558f-1d81-4986-a415-06281c3cff62\") " Oct 13 17:42:11 crc kubenswrapper[4720]: I1013 17:42:11.280054 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05ec558f-1d81-4986-a415-06281c3cff62-config-data\") pod \"05ec558f-1d81-4986-a415-06281c3cff62\" (UID: \"05ec558f-1d81-4986-a415-06281c3cff62\") " Oct 13 17:42:11 crc kubenswrapper[4720]: I1013 17:42:11.280102 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05ec558f-1d81-4986-a415-06281c3cff62-combined-ca-bundle\") pod \"05ec558f-1d81-4986-a415-06281c3cff62\" (UID: \"05ec558f-1d81-4986-a415-06281c3cff62\") " Oct 13 17:42:11 crc kubenswrapper[4720]: I1013 17:42:11.284453 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05ec558f-1d81-4986-a415-06281c3cff62-kube-api-access-rql5j" (OuterVolumeSpecName: "kube-api-access-rql5j") pod "05ec558f-1d81-4986-a415-06281c3cff62" (UID: "05ec558f-1d81-4986-a415-06281c3cff62"). InnerVolumeSpecName "kube-api-access-rql5j". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:42:11 crc kubenswrapper[4720]: I1013 17:42:11.286467 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05ec558f-1d81-4986-a415-06281c3cff62-scripts" (OuterVolumeSpecName: "scripts") pod "05ec558f-1d81-4986-a415-06281c3cff62" (UID: "05ec558f-1d81-4986-a415-06281c3cff62"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:42:11 crc kubenswrapper[4720]: I1013 17:42:11.307694 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05ec558f-1d81-4986-a415-06281c3cff62-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "05ec558f-1d81-4986-a415-06281c3cff62" (UID: "05ec558f-1d81-4986-a415-06281c3cff62"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:42:11 crc kubenswrapper[4720]: I1013 17:42:11.310917 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05ec558f-1d81-4986-a415-06281c3cff62-config-data" (OuterVolumeSpecName: "config-data") pod "05ec558f-1d81-4986-a415-06281c3cff62" (UID: "05ec558f-1d81-4986-a415-06281c3cff62"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:42:11 crc kubenswrapper[4720]: I1013 17:42:11.388766 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05ec558f-1d81-4986-a415-06281c3cff62-scripts\") on node \"crc\" DevicePath \"\"" Oct 13 17:42:11 crc kubenswrapper[4720]: I1013 17:42:11.388794 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05ec558f-1d81-4986-a415-06281c3cff62-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 17:42:11 crc kubenswrapper[4720]: I1013 17:42:11.388804 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05ec558f-1d81-4986-a415-06281c3cff62-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 17:42:11 crc kubenswrapper[4720]: I1013 17:42:11.388815 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rql5j\" (UniqueName: \"kubernetes.io/projected/05ec558f-1d81-4986-a415-06281c3cff62-kube-api-access-rql5j\") on node \"crc\" DevicePath \"\"" Oct 13 17:42:11 crc kubenswrapper[4720]: I1013 17:42:11.864209 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-5hfwq" event={"ID":"05ec558f-1d81-4986-a415-06281c3cff62","Type":"ContainerDied","Data":"fbdfb2b638297b095144f7319afba9416e9b4f27fbbb17f4610386040dd73f7c"} Oct 13 17:42:11 crc kubenswrapper[4720]: I1013 17:42:11.864583 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fbdfb2b638297b095144f7319afba9416e9b4f27fbbb17f4610386040dd73f7c" Oct 13 17:42:11 crc kubenswrapper[4720]: I1013 17:42:11.864303 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-5hfwq" Oct 13 17:42:11 crc kubenswrapper[4720]: I1013 17:42:11.964807 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 13 17:42:11 crc kubenswrapper[4720]: I1013 17:42:11.964885 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 13 17:42:12 crc kubenswrapper[4720]: I1013 17:42:12.009420 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Oct 13 17:42:12 crc kubenswrapper[4720]: I1013 17:42:12.024844 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 13 17:42:12 crc kubenswrapper[4720]: I1013 17:42:12.063127 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Oct 13 17:42:12 crc kubenswrapper[4720]: I1013 17:42:12.063172 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Oct 13 17:42:12 crc kubenswrapper[4720]: I1013 17:42:12.093434 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 13 17:42:12 crc kubenswrapper[4720]: I1013 17:42:12.093817 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ea016dd0-f81e-4bb0-8cad-df33b5db6d35" containerName="nova-metadata-log" containerID="cri-o://28702ffd2a4b1f21581c85e6642e32335194b01feca32ce5fd6a9ebaa5ab6391" gracePeriod=30 Oct 13 17:42:12 crc kubenswrapper[4720]: I1013 17:42:12.094146 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ea016dd0-f81e-4bb0-8cad-df33b5db6d35" 
containerName="nova-metadata-metadata" containerID="cri-o://34ad10a355ebf8bc6adf2465f96efcc0660a5c7a99d9ed3106b5b7b52d7fcfa8" gracePeriod=30 Oct 13 17:42:12 crc kubenswrapper[4720]: I1013 17:42:12.297338 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-845d6d6f59-trkfz" Oct 13 17:42:12 crc kubenswrapper[4720]: I1013 17:42:12.382898 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-bm4z9"] Oct 13 17:42:12 crc kubenswrapper[4720]: I1013 17:42:12.383116 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5784cf869f-bm4z9" podUID="75f08079-c609-4da3-81c4-7c761992396a" containerName="dnsmasq-dns" containerID="cri-o://653ae5b4745a9760568b74b57428ba90af224ba8510179fb2709cf67bae317dc" gracePeriod=10 Oct 13 17:42:12 crc kubenswrapper[4720]: I1013 17:42:12.401091 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-4lv7c" Oct 13 17:42:12 crc kubenswrapper[4720]: I1013 17:42:12.522697 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b0268da-5ad5-40c5-8e86-4e99da5245e8-scripts\") pod \"2b0268da-5ad5-40c5-8e86-4e99da5245e8\" (UID: \"2b0268da-5ad5-40c5-8e86-4e99da5245e8\") " Oct 13 17:42:12 crc kubenswrapper[4720]: I1013 17:42:12.522787 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b0268da-5ad5-40c5-8e86-4e99da5245e8-combined-ca-bundle\") pod \"2b0268da-5ad5-40c5-8e86-4e99da5245e8\" (UID: \"2b0268da-5ad5-40c5-8e86-4e99da5245e8\") " Oct 13 17:42:12 crc kubenswrapper[4720]: I1013 17:42:12.522910 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwgjw\" (UniqueName: \"kubernetes.io/projected/2b0268da-5ad5-40c5-8e86-4e99da5245e8-kube-api-access-vwgjw\") pod \"2b0268da-5ad5-40c5-8e86-4e99da5245e8\" (UID: \"2b0268da-5ad5-40c5-8e86-4e99da5245e8\") " Oct 13 17:42:12 crc kubenswrapper[4720]: I1013 17:42:12.522942 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b0268da-5ad5-40c5-8e86-4e99da5245e8-config-data\") pod \"2b0268da-5ad5-40c5-8e86-4e99da5245e8\" (UID: \"2b0268da-5ad5-40c5-8e86-4e99da5245e8\") " Oct 13 17:42:12 crc kubenswrapper[4720]: I1013 17:42:12.533928 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b0268da-5ad5-40c5-8e86-4e99da5245e8-kube-api-access-vwgjw" (OuterVolumeSpecName: "kube-api-access-vwgjw") pod "2b0268da-5ad5-40c5-8e86-4e99da5245e8" (UID: "2b0268da-5ad5-40c5-8e86-4e99da5245e8"). InnerVolumeSpecName "kube-api-access-vwgjw". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:42:12 crc kubenswrapper[4720]: I1013 17:42:12.536266 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b0268da-5ad5-40c5-8e86-4e99da5245e8-scripts" (OuterVolumeSpecName: "scripts") pod "2b0268da-5ad5-40c5-8e86-4e99da5245e8" (UID: "2b0268da-5ad5-40c5-8e86-4e99da5245e8"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:42:12 crc kubenswrapper[4720]: I1013 17:42:12.560273 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b0268da-5ad5-40c5-8e86-4e99da5245e8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2b0268da-5ad5-40c5-8e86-4e99da5245e8" (UID: "2b0268da-5ad5-40c5-8e86-4e99da5245e8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:42:12 crc kubenswrapper[4720]: I1013 17:42:12.580347 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b0268da-5ad5-40c5-8e86-4e99da5245e8-config-data" (OuterVolumeSpecName: "config-data") pod "2b0268da-5ad5-40c5-8e86-4e99da5245e8" (UID: "2b0268da-5ad5-40c5-8e86-4e99da5245e8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:42:12 crc kubenswrapper[4720]: I1013 17:42:12.624510 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b0268da-5ad5-40c5-8e86-4e99da5245e8-scripts\") on node \"crc\" DevicePath \"\"" Oct 13 17:42:12 crc kubenswrapper[4720]: I1013 17:42:12.624538 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b0268da-5ad5-40c5-8e86-4e99da5245e8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 17:42:12 crc kubenswrapper[4720]: I1013 17:42:12.624549 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vwgjw\" (UniqueName: \"kubernetes.io/projected/2b0268da-5ad5-40c5-8e86-4e99da5245e8-kube-api-access-vwgjw\") on node \"crc\" DevicePath \"\"" Oct 13 17:42:12 crc kubenswrapper[4720]: I1013 17:42:12.624557 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b0268da-5ad5-40c5-8e86-4e99da5245e8-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 17:42:12 crc kubenswrapper[4720]: I1013 17:42:12.895525 4720 generic.go:334] "Generic (PLEG): container finished" podID="75f08079-c609-4da3-81c4-7c761992396a" containerID="653ae5b4745a9760568b74b57428ba90af224ba8510179fb2709cf67bae317dc" exitCode=0 Oct 13 17:42:12 crc kubenswrapper[4720]: I1013 17:42:12.895613 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-bm4z9" event={"ID":"75f08079-c609-4da3-81c4-7c761992396a","Type":"ContainerDied","Data":"653ae5b4745a9760568b74b57428ba90af224ba8510179fb2709cf67bae317dc"} Oct 13 17:42:12 crc kubenswrapper[4720]: I1013 17:42:12.916013 4720 generic.go:334] "Generic (PLEG): container finished" podID="ea016dd0-f81e-4bb0-8cad-df33b5db6d35" containerID="34ad10a355ebf8bc6adf2465f96efcc0660a5c7a99d9ed3106b5b7b52d7fcfa8" exitCode=0 Oct 13 17:42:12 crc kubenswrapper[4720]: I1013 17:42:12.916049 4720 generic.go:334] "Generic (PLEG): container finished" podID="ea016dd0-f81e-4bb0-8cad-df33b5db6d35" containerID="28702ffd2a4b1f21581c85e6642e32335194b01feca32ce5fd6a9ebaa5ab6391" exitCode=143 Oct 13 17:42:12 crc kubenswrapper[4720]: I1013 17:42:12.916126 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ea016dd0-f81e-4bb0-8cad-df33b5db6d35","Type":"ContainerDied","Data":"34ad10a355ebf8bc6adf2465f96efcc0660a5c7a99d9ed3106b5b7b52d7fcfa8"} Oct 13 17:42:12 crc kubenswrapper[4720]: I1013 17:42:12.916158 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"ea016dd0-f81e-4bb0-8cad-df33b5db6d35","Type":"ContainerDied","Data":"28702ffd2a4b1f21581c85e6642e32335194b01feca32ce5fd6a9ebaa5ab6391"} Oct 13 17:42:12 crc kubenswrapper[4720]: I1013 17:42:12.921826 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Oct 13 17:42:12 crc kubenswrapper[4720]: E1013 17:42:12.922294 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b0268da-5ad5-40c5-8e86-4e99da5245e8" containerName="nova-cell1-conductor-db-sync" Oct 13 17:42:12 crc kubenswrapper[4720]: I1013 17:42:12.922308 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b0268da-5ad5-40c5-8e86-4e99da5245e8" containerName="nova-cell1-conductor-db-sync" Oct 13 17:42:12 crc kubenswrapper[4720]: E1013 17:42:12.922332 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05ec558f-1d81-4986-a415-06281c3cff62" containerName="nova-manage" Oct 13 17:42:12 crc kubenswrapper[4720]: I1013 17:42:12.922338 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="05ec558f-1d81-4986-a415-06281c3cff62" containerName="nova-manage" Oct 13 17:42:12 crc kubenswrapper[4720]: I1013 17:42:12.922524 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b0268da-5ad5-40c5-8e86-4e99da5245e8" containerName="nova-cell1-conductor-db-sync" Oct 13 17:42:12 crc kubenswrapper[4720]: I1013 17:42:12.922536 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="05ec558f-1d81-4986-a415-06281c3cff62" containerName="nova-manage" Oct 13 17:42:12 crc kubenswrapper[4720]: I1013 17:42:12.922929 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-bm4z9" Oct 13 17:42:12 crc kubenswrapper[4720]: I1013 17:42:12.923150 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Oct 13 17:42:12 crc kubenswrapper[4720]: I1013 17:42:12.927581 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-4lv7c" Oct 13 17:42:12 crc kubenswrapper[4720]: I1013 17:42:12.930197 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f8794551-360d-4269-b42d-9133a92f05c1" containerName="nova-api-log" containerID="cri-o://2b2ef4c6b88ee6795ecda16ca973f8b2f6e2ee9b41664f4d741dae2f9a9d8531" gracePeriod=30 Oct 13 17:42:12 crc kubenswrapper[4720]: I1013 17:42:12.930322 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-4lv7c" event={"ID":"2b0268da-5ad5-40c5-8e86-4e99da5245e8","Type":"ContainerDied","Data":"afa3e18365edad36869359a60171dbdc9fccc58972e3ec35ce0c98ee42b23a9c"} Oct 13 17:42:12 crc kubenswrapper[4720]: I1013 17:42:12.930344 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="afa3e18365edad36869359a60171dbdc9fccc58972e3ec35ce0c98ee42b23a9c" Oct 13 17:42:12 crc kubenswrapper[4720]: I1013 17:42:12.930379 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f8794551-360d-4269-b42d-9133a92f05c1" containerName="nova-api-api" containerID="cri-o://7c4c5f1590b2f168a608050563eaf35db7e4e5e4e1dc746d19e8475f60466225" gracePeriod=30 Oct 13 17:42:12 crc kubenswrapper[4720]: I1013 17:42:12.941601 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f8794551-360d-4269-b42d-9133a92f05c1" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.189:8774/\": EOF" Oct 13 17:42:12 crc kubenswrapper[4720]: I1013 17:42:12.941634 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f8794551-360d-4269-b42d-9133a92f05c1" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.189:8774/\": EOF" Oct 13 17:42:12 crc kubenswrapper[4720]: I1013 17:42:12.979248 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Oct 13 17:42:12 crc kubenswrapper[4720]: I1013 17:42:12.980634 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Oct 13 17:42:13 crc kubenswrapper[4720]: I1013 17:42:13.006371 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 13 17:42:13 crc kubenswrapper[4720]: I1013 17:42:13.039724 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea016dd0-f81e-4bb0-8cad-df33b5db6d35-logs\") pod \"ea016dd0-f81e-4bb0-8cad-df33b5db6d35\" (UID: \"ea016dd0-f81e-4bb0-8cad-df33b5db6d35\") " Oct 13 17:42:13 crc kubenswrapper[4720]: I1013 17:42:13.039804 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6s4b\" (UniqueName: \"kubernetes.io/projected/75f08079-c609-4da3-81c4-7c761992396a-kube-api-access-g6s4b\") pod \"75f08079-c609-4da3-81c4-7c761992396a\" (UID: \"75f08079-c609-4da3-81c4-7c761992396a\") " Oct 13 17:42:13 crc kubenswrapper[4720]: I1013 17:42:13.039835 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/75f08079-c609-4da3-81c4-7c761992396a-ovsdbserver-nb\") pod \"75f08079-c609-4da3-81c4-7c761992396a\" (UID: \"75f08079-c609-4da3-81c4-7c761992396a\") " Oct 13 17:42:13 crc kubenswrapper[4720]: I1013 17:42:13.039948 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/75f08079-c609-4da3-81c4-7c761992396a-ovsdbserver-sb\") pod \"75f08079-c609-4da3-81c4-7c761992396a\" (UID: \"75f08079-c609-4da3-81c4-7c761992396a\") " Oct 13 17:42:13 crc kubenswrapper[4720]: I1013 17:42:13.039984 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/75f08079-c609-4da3-81c4-7c761992396a-dns-swift-storage-0\") pod \"75f08079-c609-4da3-81c4-7c761992396a\" (UID: \"75f08079-c609-4da3-81c4-7c761992396a\") " Oct 13 17:42:13 crc kubenswrapper[4720]: I1013 17:42:13.040021 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75f08079-c609-4da3-81c4-7c761992396a-config\") pod \"75f08079-c609-4da3-81c4-7c761992396a\" (UID: \"75f08079-c609-4da3-81c4-7c761992396a\") " Oct 13 17:42:13 crc kubenswrapper[4720]: I1013 17:42:13.040038 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/75f08079-c609-4da3-81c4-7c761992396a-dns-svc\") pod \"75f08079-c609-4da3-81c4-7c761992396a\" (UID: \"75f08079-c609-4da3-81c4-7c761992396a\") " Oct 13 17:42:13 crc kubenswrapper[4720]: I1013 17:42:13.040071 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea016dd0-f81e-4bb0-8cad-df33b5db6d35-nova-metadata-tls-certs\") pod \"ea016dd0-f81e-4bb0-8cad-df33b5db6d35\" (UID: \"ea016dd0-f81e-4bb0-8cad-df33b5db6d35\") " Oct 13 17:42:13 crc kubenswrapper[4720]: I1013 17:42:13.040119 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea016dd0-f81e-4bb0-8cad-df33b5db6d35-combined-ca-bundle\") pod \"ea016dd0-f81e-4bb0-8cad-df33b5db6d35\" (UID: \"ea016dd0-f81e-4bb0-8cad-df33b5db6d35\") " Oct 13 17:42:13 crc kubenswrapper[4720]: I1013 17:42:13.040152 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea016dd0-f81e-4bb0-8cad-df33b5db6d35-config-data\") pod \"ea016dd0-f81e-4bb0-8cad-df33b5db6d35\" (UID: 
\"ea016dd0-f81e-4bb0-8cad-df33b5db6d35\") " Oct 13 17:42:13 crc kubenswrapper[4720]: I1013 17:42:13.040177 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vgzgb\" (UniqueName: \"kubernetes.io/projected/ea016dd0-f81e-4bb0-8cad-df33b5db6d35-kube-api-access-vgzgb\") pod \"ea016dd0-f81e-4bb0-8cad-df33b5db6d35\" (UID: \"ea016dd0-f81e-4bb0-8cad-df33b5db6d35\") " Oct 13 17:42:13 crc kubenswrapper[4720]: I1013 17:42:13.040528 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcwb9\" (UniqueName: \"kubernetes.io/projected/4b19f8c7-2583-4a2a-89e0-6a036d0e63a5-kube-api-access-xcwb9\") pod \"nova-cell1-conductor-0\" (UID: \"4b19f8c7-2583-4a2a-89e0-6a036d0e63a5\") " pod="openstack/nova-cell1-conductor-0" Oct 13 17:42:13 crc kubenswrapper[4720]: I1013 17:42:13.040560 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b19f8c7-2583-4a2a-89e0-6a036d0e63a5-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"4b19f8c7-2583-4a2a-89e0-6a036d0e63a5\") " pod="openstack/nova-cell1-conductor-0" Oct 13 17:42:13 crc kubenswrapper[4720]: I1013 17:42:13.040589 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b19f8c7-2583-4a2a-89e0-6a036d0e63a5-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"4b19f8c7-2583-4a2a-89e0-6a036d0e63a5\") " pod="openstack/nova-cell1-conductor-0" Oct 13 17:42:13 crc kubenswrapper[4720]: I1013 17:42:13.044661 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea016dd0-f81e-4bb0-8cad-df33b5db6d35-logs" (OuterVolumeSpecName: "logs") pod "ea016dd0-f81e-4bb0-8cad-df33b5db6d35" (UID: "ea016dd0-f81e-4bb0-8cad-df33b5db6d35"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 17:42:13 crc kubenswrapper[4720]: I1013 17:42:13.055770 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea016dd0-f81e-4bb0-8cad-df33b5db6d35-kube-api-access-vgzgb" (OuterVolumeSpecName: "kube-api-access-vgzgb") pod "ea016dd0-f81e-4bb0-8cad-df33b5db6d35" (UID: "ea016dd0-f81e-4bb0-8cad-df33b5db6d35"). InnerVolumeSpecName "kube-api-access-vgzgb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:42:13 crc kubenswrapper[4720]: I1013 17:42:13.060953 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75f08079-c609-4da3-81c4-7c761992396a-kube-api-access-g6s4b" (OuterVolumeSpecName: "kube-api-access-g6s4b") pod "75f08079-c609-4da3-81c4-7c761992396a" (UID: "75f08079-c609-4da3-81c4-7c761992396a"). InnerVolumeSpecName "kube-api-access-g6s4b". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:42:13 crc kubenswrapper[4720]: I1013 17:42:13.096578 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea016dd0-f81e-4bb0-8cad-df33b5db6d35-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ea016dd0-f81e-4bb0-8cad-df33b5db6d35" (UID: "ea016dd0-f81e-4bb0-8cad-df33b5db6d35"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:42:13 crc kubenswrapper[4720]: I1013 17:42:13.132399 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea016dd0-f81e-4bb0-8cad-df33b5db6d35-config-data" (OuterVolumeSpecName: "config-data") pod "ea016dd0-f81e-4bb0-8cad-df33b5db6d35" (UID: "ea016dd0-f81e-4bb0-8cad-df33b5db6d35"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:42:13 crc kubenswrapper[4720]: I1013 17:42:13.139854 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75f08079-c609-4da3-81c4-7c761992396a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "75f08079-c609-4da3-81c4-7c761992396a" (UID: "75f08079-c609-4da3-81c4-7c761992396a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:42:13 crc kubenswrapper[4720]: I1013 17:42:13.142321 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcwb9\" (UniqueName: \"kubernetes.io/projected/4b19f8c7-2583-4a2a-89e0-6a036d0e63a5-kube-api-access-xcwb9\") pod \"nova-cell1-conductor-0\" (UID: \"4b19f8c7-2583-4a2a-89e0-6a036d0e63a5\") " pod="openstack/nova-cell1-conductor-0" Oct 13 17:42:13 crc kubenswrapper[4720]: I1013 17:42:13.142381 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b19f8c7-2583-4a2a-89e0-6a036d0e63a5-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"4b19f8c7-2583-4a2a-89e0-6a036d0e63a5\") " pod="openstack/nova-cell1-conductor-0" Oct 13 17:42:13 crc kubenswrapper[4720]: I1013 17:42:13.142414 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b19f8c7-2583-4a2a-89e0-6a036d0e63a5-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"4b19f8c7-2583-4a2a-89e0-6a036d0e63a5\") " pod="openstack/nova-cell1-conductor-0" Oct 13 17:42:13 crc kubenswrapper[4720]: I1013 17:42:13.142562 4720 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/75f08079-c609-4da3-81c4-7c761992396a-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 13 17:42:13 crc kubenswrapper[4720]: I1013 17:42:13.142577 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea016dd0-f81e-4bb0-8cad-df33b5db6d35-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 17:42:13 crc kubenswrapper[4720]: I1013 17:42:13.142587 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea016dd0-f81e-4bb0-8cad-df33b5db6d35-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 17:42:13 crc kubenswrapper[4720]: I1013 17:42:13.142596 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vgzgb\" (UniqueName: \"kubernetes.io/projected/ea016dd0-f81e-4bb0-8cad-df33b5db6d35-kube-api-access-vgzgb\") on node \"crc\" DevicePath \"\"" Oct 13 17:42:13 crc kubenswrapper[4720]: I1013 17:42:13.142603 4720 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea016dd0-f81e-4bb0-8cad-df33b5db6d35-logs\") on node \"crc\" DevicePath \"\"" Oct 13 17:42:13 crc kubenswrapper[4720]: I1013 17:42:13.142612 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g6s4b\" (UniqueName: 
\"kubernetes.io/projected/75f08079-c609-4da3-81c4-7c761992396a-kube-api-access-g6s4b\") on node \"crc\" DevicePath \"\"" Oct 13 17:42:13 crc kubenswrapper[4720]: I1013 17:42:13.147693 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75f08079-c609-4da3-81c4-7c761992396a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "75f08079-c609-4da3-81c4-7c761992396a" (UID: "75f08079-c609-4da3-81c4-7c761992396a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:42:13 crc kubenswrapper[4720]: I1013 17:42:13.147725 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b19f8c7-2583-4a2a-89e0-6a036d0e63a5-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"4b19f8c7-2583-4a2a-89e0-6a036d0e63a5\") " pod="openstack/nova-cell1-conductor-0" Oct 13 17:42:13 crc kubenswrapper[4720]: I1013 17:42:13.148326 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b19f8c7-2583-4a2a-89e0-6a036d0e63a5-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"4b19f8c7-2583-4a2a-89e0-6a036d0e63a5\") " pod="openstack/nova-cell1-conductor-0" Oct 13 17:42:13 crc kubenswrapper[4720]: I1013 17:42:13.156723 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75f08079-c609-4da3-81c4-7c761992396a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "75f08079-c609-4da3-81c4-7c761992396a" (UID: "75f08079-c609-4da3-81c4-7c761992396a"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:42:13 crc kubenswrapper[4720]: I1013 17:42:13.158079 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75f08079-c609-4da3-81c4-7c761992396a-config" (OuterVolumeSpecName: "config") pod "75f08079-c609-4da3-81c4-7c761992396a" (UID: "75f08079-c609-4da3-81c4-7c761992396a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:42:13 crc kubenswrapper[4720]: I1013 17:42:13.160133 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcwb9\" (UniqueName: \"kubernetes.io/projected/4b19f8c7-2583-4a2a-89e0-6a036d0e63a5-kube-api-access-xcwb9\") pod \"nova-cell1-conductor-0\" (UID: \"4b19f8c7-2583-4a2a-89e0-6a036d0e63a5\") " pod="openstack/nova-cell1-conductor-0" Oct 13 17:42:13 crc kubenswrapper[4720]: I1013 17:42:13.182709 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea016dd0-f81e-4bb0-8cad-df33b5db6d35-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "ea016dd0-f81e-4bb0-8cad-df33b5db6d35" (UID: "ea016dd0-f81e-4bb0-8cad-df33b5db6d35"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:42:13 crc kubenswrapper[4720]: I1013 17:42:13.183549 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75f08079-c609-4da3-81c4-7c761992396a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "75f08079-c609-4da3-81c4-7c761992396a" (UID: "75f08079-c609-4da3-81c4-7c761992396a"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:42:13 crc kubenswrapper[4720]: I1013 17:42:13.244300 4720 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/75f08079-c609-4da3-81c4-7c761992396a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 13 17:42:13 crc kubenswrapper[4720]: I1013 17:42:13.244324 4720 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/75f08079-c609-4da3-81c4-7c761992396a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 13 17:42:13 crc kubenswrapper[4720]: I1013 17:42:13.244334 4720 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75f08079-c609-4da3-81c4-7c761992396a-config\") on node \"crc\" DevicePath \"\"" Oct 13 17:42:13 crc kubenswrapper[4720]: I1013 17:42:13.244343 4720 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea016dd0-f81e-4bb0-8cad-df33b5db6d35-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 13 17:42:13 crc kubenswrapper[4720]: I1013 17:42:13.244351 4720 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/75f08079-c609-4da3-81c4-7c761992396a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 13 17:42:13 crc kubenswrapper[4720]: I1013 17:42:13.282430 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Oct 13 17:42:13 crc kubenswrapper[4720]: I1013 17:42:13.742161 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Oct 13 17:42:13 crc kubenswrapper[4720]: W1013 17:42:13.747751 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4b19f8c7_2583_4a2a_89e0_6a036d0e63a5.slice/crio-b09f56848fdebe68e6bfbdb688b493ea86011fb9eb7d1a15c5c068dee98bd9c3 WatchSource:0}: Error finding container b09f56848fdebe68e6bfbdb688b493ea86011fb9eb7d1a15c5c068dee98bd9c3: Status 404 returned error can't find the container with id b09f56848fdebe68e6bfbdb688b493ea86011fb9eb7d1a15c5c068dee98bd9c3 Oct 13 17:42:13 crc kubenswrapper[4720]: I1013 17:42:13.937547 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ea016dd0-f81e-4bb0-8cad-df33b5db6d35","Type":"ContainerDied","Data":"80ccb1ad59ed6e170c34e382d01be5c3cd9ef1d13d6e999acf8db22befa09caa"} Oct 13 17:42:13 crc kubenswrapper[4720]: I1013 17:42:13.937635 4720 scope.go:117] "RemoveContainer" containerID="34ad10a355ebf8bc6adf2465f96efcc0660a5c7a99d9ed3106b5b7b52d7fcfa8" Oct 13 17:42:13 crc kubenswrapper[4720]: I1013 17:42:13.937560 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 13 17:42:13 crc kubenswrapper[4720]: I1013 17:42:13.939283 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"4b19f8c7-2583-4a2a-89e0-6a036d0e63a5","Type":"ContainerStarted","Data":"b09f56848fdebe68e6bfbdb688b493ea86011fb9eb7d1a15c5c068dee98bd9c3"} Oct 13 17:42:13 crc kubenswrapper[4720]: I1013 17:42:13.942853 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-bm4z9" event={"ID":"75f08079-c609-4da3-81c4-7c761992396a","Type":"ContainerDied","Data":"dca69628a2fe1a524262d9cbaf96351dd208a8851b9c26e9e1b172eb46a40a09"} Oct 13 17:42:13 crc kubenswrapper[4720]: I1013 17:42:13.942922 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-bm4z9" Oct 13 17:42:13 crc kubenswrapper[4720]: I1013 17:42:13.944971 4720 generic.go:334] "Generic (PLEG): container finished" podID="f8794551-360d-4269-b42d-9133a92f05c1" containerID="2b2ef4c6b88ee6795ecda16ca973f8b2f6e2ee9b41664f4d741dae2f9a9d8531" exitCode=143 Oct 13 17:42:13 crc kubenswrapper[4720]: I1013 17:42:13.948002 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f8794551-360d-4269-b42d-9133a92f05c1","Type":"ContainerDied","Data":"2b2ef4c6b88ee6795ecda16ca973f8b2f6e2ee9b41664f4d741dae2f9a9d8531"} Oct 13 17:42:13 crc kubenswrapper[4720]: I1013 17:42:13.948008 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="b78e3734-742b-488b-ace9-2e0ded0c394e" containerName="nova-scheduler-scheduler" containerID="cri-o://6d6b50e99892521a7b06c8a0ed3797bbe2e31a526a345726c2840d020c98e7d0" gracePeriod=30 Oct 13 17:42:13 crc kubenswrapper[4720]: I1013 17:42:13.970965 4720 scope.go:117] "RemoveContainer" containerID="28702ffd2a4b1f21581c85e6642e32335194b01feca32ce5fd6a9ebaa5ab6391" Oct 13 17:42:14 crc kubenswrapper[4720]: I1013 17:42:14.020238 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-bm4z9"] Oct 13 17:42:14 crc kubenswrapper[4720]: I1013 17:42:14.029733 4720 scope.go:117] "RemoveContainer" containerID="653ae5b4745a9760568b74b57428ba90af224ba8510179fb2709cf67bae317dc" Oct 13 17:42:14 crc kubenswrapper[4720]: I1013 17:42:14.034370 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-bm4z9"] Oct 13 17:42:14 crc kubenswrapper[4720]: I1013 17:42:14.044239 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 13 17:42:14 crc kubenswrapper[4720]: I1013 17:42:14.051488 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Oct 13 17:42:14 crc kubenswrapper[4720]: I1013 17:42:14.055090 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Oct 13 17:42:14 crc kubenswrapper[4720]: E1013 17:42:14.057808 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75f08079-c609-4da3-81c4-7c761992396a" containerName="dnsmasq-dns" Oct 13 17:42:14 crc kubenswrapper[4720]: I1013 17:42:14.057867 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="75f08079-c609-4da3-81c4-7c761992396a" containerName="dnsmasq-dns" Oct 13 17:42:14 crc kubenswrapper[4720]: E1013 17:42:14.057881 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea016dd0-f81e-4bb0-8cad-df33b5db6d35" containerName="nova-metadata-log" Oct 13 17:42:14 crc kubenswrapper[4720]: 
I1013 17:42:14.057888 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea016dd0-f81e-4bb0-8cad-df33b5db6d35" containerName="nova-metadata-log" Oct 13 17:42:14 crc kubenswrapper[4720]: E1013 17:42:14.057899 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75f08079-c609-4da3-81c4-7c761992396a" containerName="init" Oct 13 17:42:14 crc kubenswrapper[4720]: I1013 17:42:14.057904 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="75f08079-c609-4da3-81c4-7c761992396a" containerName="init" Oct 13 17:42:14 crc kubenswrapper[4720]: E1013 17:42:14.057917 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea016dd0-f81e-4bb0-8cad-df33b5db6d35" containerName="nova-metadata-metadata" Oct 13 17:42:14 crc kubenswrapper[4720]: I1013 17:42:14.057923 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea016dd0-f81e-4bb0-8cad-df33b5db6d35" containerName="nova-metadata-metadata" Oct 13 17:42:14 crc kubenswrapper[4720]: I1013 17:42:14.058243 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="75f08079-c609-4da3-81c4-7c761992396a" containerName="dnsmasq-dns" Oct 13 17:42:14 crc kubenswrapper[4720]: I1013 17:42:14.058266 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea016dd0-f81e-4bb0-8cad-df33b5db6d35" containerName="nova-metadata-log" Oct 13 17:42:14 crc kubenswrapper[4720]: I1013 17:42:14.058277 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea016dd0-f81e-4bb0-8cad-df33b5db6d35" containerName="nova-metadata-metadata" Oct 13 17:42:14 crc kubenswrapper[4720]: I1013 17:42:14.062693 4720 scope.go:117] "RemoveContainer" containerID="a386a7d73c6e81551b63d0bd8b5919279512925e8d91b58685e26b42e405b745" Oct 13 17:42:14 crc kubenswrapper[4720]: I1013 17:42:14.062759 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 13 17:42:14 crc kubenswrapper[4720]: I1013 17:42:14.062857 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 13 17:42:14 crc kubenswrapper[4720]: I1013 17:42:14.064935 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Oct 13 17:42:14 crc kubenswrapper[4720]: I1013 17:42:14.066584 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Oct 13 17:42:14 crc kubenswrapper[4720]: I1013 17:42:14.181457 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxp47\" (UniqueName: \"kubernetes.io/projected/4c54bcbf-ffd9-4595-b51d-0efcfcafd52a-kube-api-access-zxp47\") pod \"nova-metadata-0\" (UID: \"4c54bcbf-ffd9-4595-b51d-0efcfcafd52a\") " pod="openstack/nova-metadata-0" Oct 13 17:42:14 crc kubenswrapper[4720]: I1013 17:42:14.181893 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c54bcbf-ffd9-4595-b51d-0efcfcafd52a-logs\") pod \"nova-metadata-0\" (UID: \"4c54bcbf-ffd9-4595-b51d-0efcfcafd52a\") " pod="openstack/nova-metadata-0" Oct 13 17:42:14 crc kubenswrapper[4720]: I1013 17:42:14.182037 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c54bcbf-ffd9-4595-b51d-0efcfcafd52a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4c54bcbf-ffd9-4595-b51d-0efcfcafd52a\") " pod="openstack/nova-metadata-0" Oct 13 17:42:14 crc kubenswrapper[4720]: I1013 17:42:14.182140 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c54bcbf-ffd9-4595-b51d-0efcfcafd52a-config-data\") pod \"nova-metadata-0\" (UID: \"4c54bcbf-ffd9-4595-b51d-0efcfcafd52a\") " pod="openstack/nova-metadata-0" Oct 13 17:42:14 crc kubenswrapper[4720]: I1013 17:42:14.182318 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c54bcbf-ffd9-4595-b51d-0efcfcafd52a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4c54bcbf-ffd9-4595-b51d-0efcfcafd52a\") " pod="openstack/nova-metadata-0" Oct 13 17:42:14 crc kubenswrapper[4720]: I1013 17:42:14.284599 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c54bcbf-ffd9-4595-b51d-0efcfcafd52a-config-data\") pod \"nova-metadata-0\" (UID: \"4c54bcbf-ffd9-4595-b51d-0efcfcafd52a\") " pod="openstack/nova-metadata-0" Oct 13 17:42:14 crc kubenswrapper[4720]: I1013 17:42:14.284722 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c54bcbf-ffd9-4595-b51d-0efcfcafd52a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4c54bcbf-ffd9-4595-b51d-0efcfcafd52a\") " pod="openstack/nova-metadata-0" Oct 13 17:42:14 crc kubenswrapper[4720]: I1013 17:42:14.284791 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxp47\" (UniqueName: \"kubernetes.io/projected/4c54bcbf-ffd9-4595-b51d-0efcfcafd52a-kube-api-access-zxp47\") pod \"nova-metadata-0\" (UID: \"4c54bcbf-ffd9-4595-b51d-0efcfcafd52a\") " pod="openstack/nova-metadata-0" Oct 13 17:42:14 crc kubenswrapper[4720]: I1013 17:42:14.284873 4720 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c54bcbf-ffd9-4595-b51d-0efcfcafd52a-logs\") pod \"nova-metadata-0\" (UID: \"4c54bcbf-ffd9-4595-b51d-0efcfcafd52a\") " pod="openstack/nova-metadata-0" Oct 13 17:42:14 crc kubenswrapper[4720]: I1013 17:42:14.284949 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c54bcbf-ffd9-4595-b51d-0efcfcafd52a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4c54bcbf-ffd9-4595-b51d-0efcfcafd52a\") " pod="openstack/nova-metadata-0" Oct 13 17:42:14 crc kubenswrapper[4720]: I1013 17:42:14.286055 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c54bcbf-ffd9-4595-b51d-0efcfcafd52a-logs\") pod \"nova-metadata-0\" (UID: \"4c54bcbf-ffd9-4595-b51d-0efcfcafd52a\") " pod="openstack/nova-metadata-0" Oct 13 17:42:14 crc kubenswrapper[4720]: I1013 17:42:14.290024 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c54bcbf-ffd9-4595-b51d-0efcfcafd52a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4c54bcbf-ffd9-4595-b51d-0efcfcafd52a\") " pod="openstack/nova-metadata-0" Oct 13 17:42:14 crc kubenswrapper[4720]: I1013 17:42:14.290234 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c54bcbf-ffd9-4595-b51d-0efcfcafd52a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4c54bcbf-ffd9-4595-b51d-0efcfcafd52a\") " pod="openstack/nova-metadata-0" Oct 13 17:42:14 crc kubenswrapper[4720]: I1013 17:42:14.295249 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c54bcbf-ffd9-4595-b51d-0efcfcafd52a-config-data\") pod \"nova-metadata-0\" (UID: \"4c54bcbf-ffd9-4595-b51d-0efcfcafd52a\") " pod="openstack/nova-metadata-0" Oct 13 17:42:14 crc kubenswrapper[4720]: I1013 17:42:14.304939 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxp47\" (UniqueName: \"kubernetes.io/projected/4c54bcbf-ffd9-4595-b51d-0efcfcafd52a-kube-api-access-zxp47\") pod \"nova-metadata-0\" (UID: \"4c54bcbf-ffd9-4595-b51d-0efcfcafd52a\") " pod="openstack/nova-metadata-0" Oct 13 17:42:14 crc kubenswrapper[4720]: I1013 17:42:14.415742 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 13 17:42:14 crc kubenswrapper[4720]: I1013 17:42:14.858226 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 13 17:42:14 crc kubenswrapper[4720]: I1013 17:42:14.968493 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4c54bcbf-ffd9-4595-b51d-0efcfcafd52a","Type":"ContainerStarted","Data":"8373b2354c0dbedb8f44bae289007fe32f0939e8db7b6f4cde3c0e0b5812d5f6"} Oct 13 17:42:14 crc kubenswrapper[4720]: I1013 17:42:14.974703 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"4b19f8c7-2583-4a2a-89e0-6a036d0e63a5","Type":"ContainerStarted","Data":"ac06ec1dfae97830839734a25a38e6da6acacaecf9b2cc97f66633a381677670"} Oct 13 17:42:14 crc kubenswrapper[4720]: I1013 17:42:14.974949 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Oct 13 17:42:14 crc kubenswrapper[4720]: I1013 17:42:14.993470 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.993453745 podStartE2EDuration="2.993453745s" podCreationTimestamp="2025-10-13 17:42:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:42:14.992738346 +0000 UTC m=+1080.449988508" watchObservedRunningTime="2025-10-13 17:42:14.993453745 +0000 UTC m=+1080.450703877" Oct 13 17:42:15 crc kubenswrapper[4720]: I1013 17:42:15.190578 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75f08079-c609-4da3-81c4-7c761992396a" path="/var/lib/kubelet/pods/75f08079-c609-4da3-81c4-7c761992396a/volumes" Oct 13 17:42:15 crc kubenswrapper[4720]: I1013 17:42:15.203001 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea016dd0-f81e-4bb0-8cad-df33b5db6d35" path="/var/lib/kubelet/pods/ea016dd0-f81e-4bb0-8cad-df33b5db6d35/volumes" Oct 13 17:42:15 crc kubenswrapper[4720]: I1013 17:42:15.213018 4720 patch_prober.go:28] interesting pod/machine-config-daemon-htwnl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 17:42:15 crc kubenswrapper[4720]: I1013 17:42:15.213087 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 17:42:15 crc kubenswrapper[4720]: I1013 17:42:15.986861 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4c54bcbf-ffd9-4595-b51d-0efcfcafd52a","Type":"ContainerStarted","Data":"a848c4d10f2b62cf6c973bdde1aa6d0cb1b4a57ef6f5e204223976e823fcb3cf"} Oct 13 17:42:15 crc kubenswrapper[4720]: I1013 17:42:15.986896 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4c54bcbf-ffd9-4595-b51d-0efcfcafd52a","Type":"ContainerStarted","Data":"c50a054551174514909d400930146f247c90ce7b9ec89a0aaf7a3bf9c6732252"} Oct 13 17:42:16 crc kubenswrapper[4720]: I1013 17:42:16.024426 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-metadata-0" podStartSLOduration=2.02439872 podStartE2EDuration="2.02439872s" podCreationTimestamp="2025-10-13 17:42:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:42:16.010764608 +0000 UTC m=+1081.468014740" watchObservedRunningTime="2025-10-13 17:42:16.02439872 +0000 UTC m=+1081.481648892" Oct 13 17:42:17 crc kubenswrapper[4720]: E1013 17:42:17.010653 4720 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6d6b50e99892521a7b06c8a0ed3797bbe2e31a526a345726c2840d020c98e7d0" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 13 17:42:17 crc kubenswrapper[4720]: E1013 17:42:17.012150 4720 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6d6b50e99892521a7b06c8a0ed3797bbe2e31a526a345726c2840d020c98e7d0" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 13 17:42:17 crc kubenswrapper[4720]: E1013 17:42:17.014167 4720 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6d6b50e99892521a7b06c8a0ed3797bbe2e31a526a345726c2840d020c98e7d0" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 13 17:42:17 crc kubenswrapper[4720]: E1013 17:42:17.014252 4720 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="b78e3734-742b-488b-ace9-2e0ded0c394e" containerName="nova-scheduler-scheduler" Oct 13 17:42:17 crc kubenswrapper[4720]: W1013 17:42:17.160427 4720 helpers.go:245] readString: Failed to read "/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb78e3734_742b_488b_ace9_2e0ded0c394e.slice/crio-43028ba84052ee7078ea04eacc096d70c109e8257abb5abb9e4d8ab2e903f67d/memory.swap.max": read /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb78e3734_742b_488b_ace9_2e0ded0c394e.slice/crio-43028ba84052ee7078ea04eacc096d70c109e8257abb5abb9e4d8ab2e903f67d/memory.swap.max: no such device Oct 13 17:42:17 crc kubenswrapper[4720]: I1013 17:42:17.666049 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 13 17:42:17 crc kubenswrapper[4720]: I1013 17:42:17.760011 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b78e3734-742b-488b-ace9-2e0ded0c394e-config-data\") pod \"b78e3734-742b-488b-ace9-2e0ded0c394e\" (UID: \"b78e3734-742b-488b-ace9-2e0ded0c394e\") " Oct 13 17:42:17 crc kubenswrapper[4720]: I1013 17:42:17.760134 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rsrsw\" (UniqueName: \"kubernetes.io/projected/b78e3734-742b-488b-ace9-2e0ded0c394e-kube-api-access-rsrsw\") pod \"b78e3734-742b-488b-ace9-2e0ded0c394e\" (UID: \"b78e3734-742b-488b-ace9-2e0ded0c394e\") " Oct 13 17:42:17 crc kubenswrapper[4720]: I1013 17:42:17.760293 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b78e3734-742b-488b-ace9-2e0ded0c394e-combined-ca-bundle\") pod \"b78e3734-742b-488b-ace9-2e0ded0c394e\" (UID: \"b78e3734-742b-488b-ace9-2e0ded0c394e\") " Oct 13 17:42:17 crc kubenswrapper[4720]: I1013 17:42:17.775527 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b78e3734-742b-488b-ace9-2e0ded0c394e-kube-api-access-rsrsw" (OuterVolumeSpecName: "kube-api-access-rsrsw") pod "b78e3734-742b-488b-ace9-2e0ded0c394e" (UID: "b78e3734-742b-488b-ace9-2e0ded0c394e"). InnerVolumeSpecName "kube-api-access-rsrsw". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:42:17 crc kubenswrapper[4720]: I1013 17:42:17.804158 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b78e3734-742b-488b-ace9-2e0ded0c394e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b78e3734-742b-488b-ace9-2e0ded0c394e" (UID: "b78e3734-742b-488b-ace9-2e0ded0c394e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:42:17 crc kubenswrapper[4720]: I1013 17:42:17.811421 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b78e3734-742b-488b-ace9-2e0ded0c394e-config-data" (OuterVolumeSpecName: "config-data") pod "b78e3734-742b-488b-ace9-2e0ded0c394e" (UID: "b78e3734-742b-488b-ace9-2e0ded0c394e"). InnerVolumeSpecName "config-data". 
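The three UnmountVolume started lines above for nova-scheduler-0 (config-data, kube-api-access-rsrsw, combined-ca-bundle) are each answered by a TearDown succeeded line, and the matching Volume detached confirmations follow just below. A sketch of a completeness check pairing the two message kinds, assuming the escaped-quote shape of the structured klog lines stays as shown:

// unmount_check.go: editor's sketch, not kubelet code. Confirms every volume
// whose unmount started was eventually reported detached. The regexps are
// fitted to the \"-escaped structured log shapes quoted above.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

var (
	started  = regexp.MustCompile(`UnmountVolume started for volume \\"([^"\\]+)\\"`)
	detached = regexp.MustCompile(`Volume detached for volume \\"([^"\\]+)\\"`)
)

func main() {
	pending := map[string]bool{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1<<20), 1<<20)
	for sc.Scan() {
		line := sc.Text()
		if m := started.FindStringSubmatch(line); m != nil {
			pending[m[1]] = true
		}
		if m := detached.FindStringSubmatch(line); m != nil {
			delete(pending, m[1])
		}
	}
	for v := range pending {
		fmt.Println("unmount started but never detached:", v)
	}
}

Any volume left in the pending map when the stream ends would indicate an unmount that never completed.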
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:42:17 crc kubenswrapper[4720]: I1013 17:42:17.862693 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b78e3734-742b-488b-ace9-2e0ded0c394e-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 17:42:17 crc kubenswrapper[4720]: I1013 17:42:17.862739 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rsrsw\" (UniqueName: \"kubernetes.io/projected/b78e3734-742b-488b-ace9-2e0ded0c394e-kube-api-access-rsrsw\") on node \"crc\" DevicePath \"\"" Oct 13 17:42:17 crc kubenswrapper[4720]: I1013 17:42:17.862751 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b78e3734-742b-488b-ace9-2e0ded0c394e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 17:42:18 crc kubenswrapper[4720]: I1013 17:42:18.028808 4720 generic.go:334] "Generic (PLEG): container finished" podID="b78e3734-742b-488b-ace9-2e0ded0c394e" containerID="6d6b50e99892521a7b06c8a0ed3797bbe2e31a526a345726c2840d020c98e7d0" exitCode=0 Oct 13 17:42:18 crc kubenswrapper[4720]: I1013 17:42:18.028921 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 13 17:42:18 crc kubenswrapper[4720]: I1013 17:42:18.029996 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b78e3734-742b-488b-ace9-2e0ded0c394e","Type":"ContainerDied","Data":"6d6b50e99892521a7b06c8a0ed3797bbe2e31a526a345726c2840d020c98e7d0"} Oct 13 17:42:18 crc kubenswrapper[4720]: I1013 17:42:18.030175 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b78e3734-742b-488b-ace9-2e0ded0c394e","Type":"ContainerDied","Data":"43028ba84052ee7078ea04eacc096d70c109e8257abb5abb9e4d8ab2e903f67d"} Oct 13 17:42:18 crc kubenswrapper[4720]: I1013 17:42:18.030254 4720 scope.go:117] "RemoveContainer" containerID="6d6b50e99892521a7b06c8a0ed3797bbe2e31a526a345726c2840d020c98e7d0" Oct 13 17:42:18 crc kubenswrapper[4720]: I1013 17:42:18.072330 4720 scope.go:117] "RemoveContainer" containerID="6d6b50e99892521a7b06c8a0ed3797bbe2e31a526a345726c2840d020c98e7d0" Oct 13 17:42:18 crc kubenswrapper[4720]: E1013 17:42:18.076161 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d6b50e99892521a7b06c8a0ed3797bbe2e31a526a345726c2840d020c98e7d0\": container with ID starting with 6d6b50e99892521a7b06c8a0ed3797bbe2e31a526a345726c2840d020c98e7d0 not found: ID does not exist" containerID="6d6b50e99892521a7b06c8a0ed3797bbe2e31a526a345726c2840d020c98e7d0" Oct 13 17:42:18 crc kubenswrapper[4720]: I1013 17:42:18.076231 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d6b50e99892521a7b06c8a0ed3797bbe2e31a526a345726c2840d020c98e7d0"} err="failed to get container status \"6d6b50e99892521a7b06c8a0ed3797bbe2e31a526a345726c2840d020c98e7d0\": rpc error: code = NotFound desc = could not find container \"6d6b50e99892521a7b06c8a0ed3797bbe2e31a526a345726c2840d020c98e7d0\": container with ID starting with 6d6b50e99892521a7b06c8a0ed3797bbe2e31a526a345726c2840d020c98e7d0 not found: ID does not exist" Oct 13 17:42:18 crc kubenswrapper[4720]: I1013 17:42:18.084229 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Oct 13 17:42:18 crc kubenswrapper[4720]: I1013 17:42:18.115774 4720 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Oct 13 17:42:18 crc kubenswrapper[4720]: I1013 17:42:18.146276 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Oct 13 17:42:18 crc kubenswrapper[4720]: E1013 17:42:18.146728 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b78e3734-742b-488b-ace9-2e0ded0c394e" containerName="nova-scheduler-scheduler" Oct 13 17:42:18 crc kubenswrapper[4720]: I1013 17:42:18.146746 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="b78e3734-742b-488b-ace9-2e0ded0c394e" containerName="nova-scheduler-scheduler" Oct 13 17:42:18 crc kubenswrapper[4720]: I1013 17:42:18.146945 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="b78e3734-742b-488b-ace9-2e0ded0c394e" containerName="nova-scheduler-scheduler" Oct 13 17:42:18 crc kubenswrapper[4720]: I1013 17:42:18.147761 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 13 17:42:18 crc kubenswrapper[4720]: I1013 17:42:18.151352 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Oct 13 17:42:18 crc kubenswrapper[4720]: I1013 17:42:18.157248 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 13 17:42:18 crc kubenswrapper[4720]: I1013 17:42:18.269682 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f65be040-a3da-4f04-a883-995351ba908b-config-data\") pod \"nova-scheduler-0\" (UID: \"f65be040-a3da-4f04-a883-995351ba908b\") " pod="openstack/nova-scheduler-0" Oct 13 17:42:18 crc kubenswrapper[4720]: I1013 17:42:18.269787 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f65be040-a3da-4f04-a883-995351ba908b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f65be040-a3da-4f04-a883-995351ba908b\") " pod="openstack/nova-scheduler-0" Oct 13 17:42:18 crc kubenswrapper[4720]: I1013 17:42:18.269948 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6t8z\" (UniqueName: \"kubernetes.io/projected/f65be040-a3da-4f04-a883-995351ba908b-kube-api-access-t6t8z\") pod \"nova-scheduler-0\" (UID: \"f65be040-a3da-4f04-a883-995351ba908b\") " pod="openstack/nova-scheduler-0" Oct 13 17:42:18 crc kubenswrapper[4720]: I1013 17:42:18.371898 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6t8z\" (UniqueName: \"kubernetes.io/projected/f65be040-a3da-4f04-a883-995351ba908b-kube-api-access-t6t8z\") pod \"nova-scheduler-0\" (UID: \"f65be040-a3da-4f04-a883-995351ba908b\") " pod="openstack/nova-scheduler-0" Oct 13 17:42:18 crc kubenswrapper[4720]: I1013 17:42:18.372034 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f65be040-a3da-4f04-a883-995351ba908b-config-data\") pod \"nova-scheduler-0\" (UID: \"f65be040-a3da-4f04-a883-995351ba908b\") " pod="openstack/nova-scheduler-0" Oct 13 17:42:18 crc kubenswrapper[4720]: I1013 17:42:18.372127 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f65be040-a3da-4f04-a883-995351ba908b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: 
\"f65be040-a3da-4f04-a883-995351ba908b\") " pod="openstack/nova-scheduler-0" Oct 13 17:42:18 crc kubenswrapper[4720]: I1013 17:42:18.378364 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f65be040-a3da-4f04-a883-995351ba908b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f65be040-a3da-4f04-a883-995351ba908b\") " pod="openstack/nova-scheduler-0" Oct 13 17:42:18 crc kubenswrapper[4720]: I1013 17:42:18.378959 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f65be040-a3da-4f04-a883-995351ba908b-config-data\") pod \"nova-scheduler-0\" (UID: \"f65be040-a3da-4f04-a883-995351ba908b\") " pod="openstack/nova-scheduler-0" Oct 13 17:42:18 crc kubenswrapper[4720]: I1013 17:42:18.392289 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6t8z\" (UniqueName: \"kubernetes.io/projected/f65be040-a3da-4f04-a883-995351ba908b-kube-api-access-t6t8z\") pod \"nova-scheduler-0\" (UID: \"f65be040-a3da-4f04-a883-995351ba908b\") " pod="openstack/nova-scheduler-0" Oct 13 17:42:18 crc kubenswrapper[4720]: I1013 17:42:18.477952 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 13 17:42:18 crc kubenswrapper[4720]: W1013 17:42:18.981989 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf65be040_a3da_4f04_a883_995351ba908b.slice/crio-638d4f7cb1983b23a71064cb84445508a948c1e4fad1703a10208865c534aaed WatchSource:0}: Error finding container 638d4f7cb1983b23a71064cb84445508a948c1e4fad1703a10208865c534aaed: Status 404 returned error can't find the container with id 638d4f7cb1983b23a71064cb84445508a948c1e4fad1703a10208865c534aaed Oct 13 17:42:18 crc kubenswrapper[4720]: I1013 17:42:18.988682 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 13 17:42:19 crc kubenswrapper[4720]: I1013 17:42:19.045523 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f65be040-a3da-4f04-a883-995351ba908b","Type":"ContainerStarted","Data":"638d4f7cb1983b23a71064cb84445508a948c1e4fad1703a10208865c534aaed"} Oct 13 17:42:19 crc kubenswrapper[4720]: I1013 17:42:19.183882 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b78e3734-742b-488b-ace9-2e0ded0c394e" path="/var/lib/kubelet/pods/b78e3734-742b-488b-ace9-2e0ded0c394e/volumes" Oct 13 17:42:19 crc kubenswrapper[4720]: I1013 17:42:19.416640 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 13 17:42:19 crc kubenswrapper[4720]: I1013 17:42:19.416698 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 13 17:42:19 crc kubenswrapper[4720]: I1013 17:42:19.905114 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 13 17:42:20 crc kubenswrapper[4720]: I1013 17:42:20.008717 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8794551-360d-4269-b42d-9133a92f05c1-config-data\") pod \"f8794551-360d-4269-b42d-9133a92f05c1\" (UID: \"f8794551-360d-4269-b42d-9133a92f05c1\") " Oct 13 17:42:20 crc kubenswrapper[4720]: I1013 17:42:20.009289 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f8794551-360d-4269-b42d-9133a92f05c1-logs\") pod \"f8794551-360d-4269-b42d-9133a92f05c1\" (UID: \"f8794551-360d-4269-b42d-9133a92f05c1\") " Oct 13 17:42:20 crc kubenswrapper[4720]: I1013 17:42:20.009474 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pbq2c\" (UniqueName: \"kubernetes.io/projected/f8794551-360d-4269-b42d-9133a92f05c1-kube-api-access-pbq2c\") pod \"f8794551-360d-4269-b42d-9133a92f05c1\" (UID: \"f8794551-360d-4269-b42d-9133a92f05c1\") " Oct 13 17:42:20 crc kubenswrapper[4720]: I1013 17:42:20.009522 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8794551-360d-4269-b42d-9133a92f05c1-combined-ca-bundle\") pod \"f8794551-360d-4269-b42d-9133a92f05c1\" (UID: \"f8794551-360d-4269-b42d-9133a92f05c1\") " Oct 13 17:42:20 crc kubenswrapper[4720]: I1013 17:42:20.010089 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8794551-360d-4269-b42d-9133a92f05c1-logs" (OuterVolumeSpecName: "logs") pod "f8794551-360d-4269-b42d-9133a92f05c1" (UID: "f8794551-360d-4269-b42d-9133a92f05c1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 17:42:20 crc kubenswrapper[4720]: I1013 17:42:20.015103 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8794551-360d-4269-b42d-9133a92f05c1-kube-api-access-pbq2c" (OuterVolumeSpecName: "kube-api-access-pbq2c") pod "f8794551-360d-4269-b42d-9133a92f05c1" (UID: "f8794551-360d-4269-b42d-9133a92f05c1"). InnerVolumeSpecName "kube-api-access-pbq2c". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:42:20 crc kubenswrapper[4720]: E1013 17:42:20.049154 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f8794551-360d-4269-b42d-9133a92f05c1-combined-ca-bundle podName:f8794551-360d-4269-b42d-9133a92f05c1 nodeName:}" failed. No retries permitted until 2025-10-13 17:42:20.5491253 +0000 UTC m=+1086.006375442 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "combined-ca-bundle" (UniqueName: "kubernetes.io/secret/f8794551-360d-4269-b42d-9133a92f05c1-combined-ca-bundle") pod "f8794551-360d-4269-b42d-9133a92f05c1" (UID: "f8794551-360d-4269-b42d-9133a92f05c1") : error deleting /var/lib/kubelet/pods/f8794551-360d-4269-b42d-9133a92f05c1/volume-subpaths: remove /var/lib/kubelet/pods/f8794551-360d-4269-b42d-9133a92f05c1/volume-subpaths: no such file or directory Oct 13 17:42:20 crc kubenswrapper[4720]: I1013 17:42:20.053989 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8794551-360d-4269-b42d-9133a92f05c1-config-data" (OuterVolumeSpecName: "config-data") pod "f8794551-360d-4269-b42d-9133a92f05c1" (UID: "f8794551-360d-4269-b42d-9133a92f05c1"). InnerVolumeSpecName "config-data". 
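The nestedpendingoperations.go error above is the reconciler's backoff in action: after the subPath cleanup for combined-ca-bundle failed with "no such file or directory", no retries are permitted for 500ms (the durationBeforeRetry in the message), and the retried unmount at 17:42:20.619942 below succeeds. The pattern reduces to retry-with-delay; a minimal sketch, with the 500ms figure taken from the message and everything else assumed:

package main

import (
	"errors"
	"fmt"
	"time"
)

// retryWithDelay runs op and, on failure, waits delay before permitting the
// next attempt, the behaviour the "No retries permitted until ..." message
// above reports for the combined-ca-bundle volume. Editor's sketch only.
func retryWithDelay(attempts int, delay time.Duration, op func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		fmt.Printf("attempt %d failed: %v; no retries permitted for %v\n", i+1, err, delay)
		time.Sleep(delay)
	}
	return err
}

func main() {
	calls := 0
	err := retryWithDelay(3, 500*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return errors.New("volume-subpaths: no such file or directory")
		}
		return nil
	})
	fmt.Println("final:", err) // nil after the third attempt succeeds
}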
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:42:20 crc kubenswrapper[4720]: I1013 17:42:20.065318 4720 generic.go:334] "Generic (PLEG): container finished" podID="f8794551-360d-4269-b42d-9133a92f05c1" containerID="7c4c5f1590b2f168a608050563eaf35db7e4e5e4e1dc746d19e8475f60466225" exitCode=0 Oct 13 17:42:20 crc kubenswrapper[4720]: I1013 17:42:20.065398 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f8794551-360d-4269-b42d-9133a92f05c1","Type":"ContainerDied","Data":"7c4c5f1590b2f168a608050563eaf35db7e4e5e4e1dc746d19e8475f60466225"} Oct 13 17:42:20 crc kubenswrapper[4720]: I1013 17:42:20.065429 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f8794551-360d-4269-b42d-9133a92f05c1","Type":"ContainerDied","Data":"e933ee6ae760d26e4a248af21820dcabb6072808e29251e008457774ea2c2572"} Oct 13 17:42:20 crc kubenswrapper[4720]: I1013 17:42:20.065449 4720 scope.go:117] "RemoveContainer" containerID="7c4c5f1590b2f168a608050563eaf35db7e4e5e4e1dc746d19e8475f60466225" Oct 13 17:42:20 crc kubenswrapper[4720]: I1013 17:42:20.065536 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 13 17:42:20 crc kubenswrapper[4720]: I1013 17:42:20.070407 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f65be040-a3da-4f04-a883-995351ba908b","Type":"ContainerStarted","Data":"babf2748262064788db22e65543d4efd9f77b93e063718a0c45a9e5ba1e36402"} Oct 13 17:42:20 crc kubenswrapper[4720]: I1013 17:42:20.104527 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.104503109 podStartE2EDuration="2.104503109s" podCreationTimestamp="2025-10-13 17:42:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:42:20.085863088 +0000 UTC m=+1085.543113230" watchObservedRunningTime="2025-10-13 17:42:20.104503109 +0000 UTC m=+1085.561753241" Oct 13 17:42:20 crc kubenswrapper[4720]: I1013 17:42:20.105634 4720 scope.go:117] "RemoveContainer" containerID="2b2ef4c6b88ee6795ecda16ca973f8b2f6e2ee9b41664f4d741dae2f9a9d8531" Oct 13 17:42:20 crc kubenswrapper[4720]: I1013 17:42:20.111391 4720 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f8794551-360d-4269-b42d-9133a92f05c1-logs\") on node \"crc\" DevicePath \"\"" Oct 13 17:42:20 crc kubenswrapper[4720]: I1013 17:42:20.111494 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pbq2c\" (UniqueName: \"kubernetes.io/projected/f8794551-360d-4269-b42d-9133a92f05c1-kube-api-access-pbq2c\") on node \"crc\" DevicePath \"\"" Oct 13 17:42:20 crc kubenswrapper[4720]: I1013 17:42:20.111563 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8794551-360d-4269-b42d-9133a92f05c1-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 17:42:20 crc kubenswrapper[4720]: I1013 17:42:20.152087 4720 scope.go:117] "RemoveContainer" containerID="7c4c5f1590b2f168a608050563eaf35db7e4e5e4e1dc746d19e8475f60466225" Oct 13 17:42:20 crc kubenswrapper[4720]: E1013 17:42:20.152706 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c4c5f1590b2f168a608050563eaf35db7e4e5e4e1dc746d19e8475f60466225\": container with ID starting with 
7c4c5f1590b2f168a608050563eaf35db7e4e5e4e1dc746d19e8475f60466225 not found: ID does not exist" containerID="7c4c5f1590b2f168a608050563eaf35db7e4e5e4e1dc746d19e8475f60466225" Oct 13 17:42:20 crc kubenswrapper[4720]: I1013 17:42:20.152749 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c4c5f1590b2f168a608050563eaf35db7e4e5e4e1dc746d19e8475f60466225"} err="failed to get container status \"7c4c5f1590b2f168a608050563eaf35db7e4e5e4e1dc746d19e8475f60466225\": rpc error: code = NotFound desc = could not find container \"7c4c5f1590b2f168a608050563eaf35db7e4e5e4e1dc746d19e8475f60466225\": container with ID starting with 7c4c5f1590b2f168a608050563eaf35db7e4e5e4e1dc746d19e8475f60466225 not found: ID does not exist" Oct 13 17:42:20 crc kubenswrapper[4720]: I1013 17:42:20.152775 4720 scope.go:117] "RemoveContainer" containerID="2b2ef4c6b88ee6795ecda16ca973f8b2f6e2ee9b41664f4d741dae2f9a9d8531" Oct 13 17:42:20 crc kubenswrapper[4720]: E1013 17:42:20.155802 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b2ef4c6b88ee6795ecda16ca973f8b2f6e2ee9b41664f4d741dae2f9a9d8531\": container with ID starting with 2b2ef4c6b88ee6795ecda16ca973f8b2f6e2ee9b41664f4d741dae2f9a9d8531 not found: ID does not exist" containerID="2b2ef4c6b88ee6795ecda16ca973f8b2f6e2ee9b41664f4d741dae2f9a9d8531" Oct 13 17:42:20 crc kubenswrapper[4720]: I1013 17:42:20.155829 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b2ef4c6b88ee6795ecda16ca973f8b2f6e2ee9b41664f4d741dae2f9a9d8531"} err="failed to get container status \"2b2ef4c6b88ee6795ecda16ca973f8b2f6e2ee9b41664f4d741dae2f9a9d8531\": rpc error: code = NotFound desc = could not find container \"2b2ef4c6b88ee6795ecda16ca973f8b2f6e2ee9b41664f4d741dae2f9a9d8531\": container with ID starting with 2b2ef4c6b88ee6795ecda16ca973f8b2f6e2ee9b41664f4d741dae2f9a9d8531 not found: ID does not exist" Oct 13 17:42:20 crc kubenswrapper[4720]: I1013 17:42:20.619942 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8794551-360d-4269-b42d-9133a92f05c1-combined-ca-bundle\") pod \"f8794551-360d-4269-b42d-9133a92f05c1\" (UID: \"f8794551-360d-4269-b42d-9133a92f05c1\") " Oct 13 17:42:20 crc kubenswrapper[4720]: I1013 17:42:20.627639 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8794551-360d-4269-b42d-9133a92f05c1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f8794551-360d-4269-b42d-9133a92f05c1" (UID: "f8794551-360d-4269-b42d-9133a92f05c1"). InnerVolumeSpecName "combined-ca-bundle". 
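The ContainerStatus NotFound errors above are benign races: the scope.go RemoveContainer call runs after CRI-O has already removed the container, so the follow-up status lookup fails and the kubelet downgrades it to an informational "DeleteContainer returned error" line rather than failing the cleanup. One common way to write such cleanup is to treat NotFound as success; a sketch, assuming the google.golang.org/grpc module is available (it supplies the status codes the runtime errors above carry):

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeContainer treats NotFound from the runtime as success: the container
// is already gone, which is the terminal state deletion wants. Editor's
// sketch of the pattern, not the kubelet's actual CRI code.
func removeContainer(id string, deleteFn func(string) error) error {
	if err := deleteFn(id); err != nil {
		if status.Code(err) == codes.NotFound {
			fmt.Printf("container %s already gone, nothing to do\n", id)
			return nil
		}
		return fmt.Errorf("failed to delete container %s: %w", id, err)
	}
	return nil
}

func main() {
	notFound := func(id string) error {
		return status.Error(codes.NotFound, "could not find container "+id)
	}
	fmt.Println(removeContainer("7c4c5f15", notFound)) // prints the notice, then <nil>
}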
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:42:20 crc kubenswrapper[4720]: I1013 17:42:20.722224 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8794551-360d-4269-b42d-9133a92f05c1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 17:42:20 crc kubenswrapper[4720]: I1013 17:42:20.739243 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 13 17:42:20 crc kubenswrapper[4720]: I1013 17:42:20.749053 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Oct 13 17:42:20 crc kubenswrapper[4720]: I1013 17:42:20.771643 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Oct 13 17:42:20 crc kubenswrapper[4720]: E1013 17:42:20.772098 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8794551-360d-4269-b42d-9133a92f05c1" containerName="nova-api-log" Oct 13 17:42:20 crc kubenswrapper[4720]: I1013 17:42:20.772122 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8794551-360d-4269-b42d-9133a92f05c1" containerName="nova-api-log" Oct 13 17:42:20 crc kubenswrapper[4720]: E1013 17:42:20.772156 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8794551-360d-4269-b42d-9133a92f05c1" containerName="nova-api-api" Oct 13 17:42:20 crc kubenswrapper[4720]: I1013 17:42:20.772163 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8794551-360d-4269-b42d-9133a92f05c1" containerName="nova-api-api" Oct 13 17:42:20 crc kubenswrapper[4720]: I1013 17:42:20.772342 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8794551-360d-4269-b42d-9133a92f05c1" containerName="nova-api-log" Oct 13 17:42:20 crc kubenswrapper[4720]: I1013 17:42:20.772367 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8794551-360d-4269-b42d-9133a92f05c1" containerName="nova-api-api" Oct 13 17:42:20 crc kubenswrapper[4720]: I1013 17:42:20.773436 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 13 17:42:20 crc kubenswrapper[4720]: I1013 17:42:20.775362 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Oct 13 17:42:20 crc kubenswrapper[4720]: I1013 17:42:20.786769 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 13 17:42:20 crc kubenswrapper[4720]: I1013 17:42:20.823492 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4d97deb-0f09-435e-9ade-51e87b0ded99-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e4d97deb-0f09-435e-9ade-51e87b0ded99\") " pod="openstack/nova-api-0" Oct 13 17:42:20 crc kubenswrapper[4720]: I1013 17:42:20.823585 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4d97deb-0f09-435e-9ade-51e87b0ded99-logs\") pod \"nova-api-0\" (UID: \"e4d97deb-0f09-435e-9ade-51e87b0ded99\") " pod="openstack/nova-api-0" Oct 13 17:42:20 crc kubenswrapper[4720]: I1013 17:42:20.823607 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4d97deb-0f09-435e-9ade-51e87b0ded99-config-data\") pod \"nova-api-0\" (UID: \"e4d97deb-0f09-435e-9ade-51e87b0ded99\") " pod="openstack/nova-api-0" Oct 13 17:42:20 crc kubenswrapper[4720]: I1013 17:42:20.823664 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccvr2\" (UniqueName: \"kubernetes.io/projected/e4d97deb-0f09-435e-9ade-51e87b0ded99-kube-api-access-ccvr2\") pod \"nova-api-0\" (UID: \"e4d97deb-0f09-435e-9ade-51e87b0ded99\") " pod="openstack/nova-api-0" Oct 13 17:42:20 crc kubenswrapper[4720]: I1013 17:42:20.925225 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4d97deb-0f09-435e-9ade-51e87b0ded99-logs\") pod \"nova-api-0\" (UID: \"e4d97deb-0f09-435e-9ade-51e87b0ded99\") " pod="openstack/nova-api-0" Oct 13 17:42:20 crc kubenswrapper[4720]: I1013 17:42:20.925300 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4d97deb-0f09-435e-9ade-51e87b0ded99-config-data\") pod \"nova-api-0\" (UID: \"e4d97deb-0f09-435e-9ade-51e87b0ded99\") " pod="openstack/nova-api-0" Oct 13 17:42:20 crc kubenswrapper[4720]: I1013 17:42:20.925385 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ccvr2\" (UniqueName: \"kubernetes.io/projected/e4d97deb-0f09-435e-9ade-51e87b0ded99-kube-api-access-ccvr2\") pod \"nova-api-0\" (UID: \"e4d97deb-0f09-435e-9ade-51e87b0ded99\") " pod="openstack/nova-api-0" Oct 13 17:42:20 crc kubenswrapper[4720]: I1013 17:42:20.925567 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4d97deb-0f09-435e-9ade-51e87b0ded99-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e4d97deb-0f09-435e-9ade-51e87b0ded99\") " pod="openstack/nova-api-0" Oct 13 17:42:20 crc kubenswrapper[4720]: I1013 17:42:20.925966 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4d97deb-0f09-435e-9ade-51e87b0ded99-logs\") pod \"nova-api-0\" (UID: \"e4d97deb-0f09-435e-9ade-51e87b0ded99\") " 
pod="openstack/nova-api-0" Oct 13 17:42:20 crc kubenswrapper[4720]: I1013 17:42:20.938432 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4d97deb-0f09-435e-9ade-51e87b0ded99-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e4d97deb-0f09-435e-9ade-51e87b0ded99\") " pod="openstack/nova-api-0" Oct 13 17:42:20 crc kubenswrapper[4720]: I1013 17:42:20.938447 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4d97deb-0f09-435e-9ade-51e87b0ded99-config-data\") pod \"nova-api-0\" (UID: \"e4d97deb-0f09-435e-9ade-51e87b0ded99\") " pod="openstack/nova-api-0" Oct 13 17:42:20 crc kubenswrapper[4720]: I1013 17:42:20.959107 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccvr2\" (UniqueName: \"kubernetes.io/projected/e4d97deb-0f09-435e-9ade-51e87b0ded99-kube-api-access-ccvr2\") pod \"nova-api-0\" (UID: \"e4d97deb-0f09-435e-9ade-51e87b0ded99\") " pod="openstack/nova-api-0" Oct 13 17:42:21 crc kubenswrapper[4720]: I1013 17:42:21.094089 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 13 17:42:21 crc kubenswrapper[4720]: I1013 17:42:21.193603 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8794551-360d-4269-b42d-9133a92f05c1" path="/var/lib/kubelet/pods/f8794551-360d-4269-b42d-9133a92f05c1/volumes" Oct 13 17:42:21 crc kubenswrapper[4720]: W1013 17:42:21.577827 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode4d97deb_0f09_435e_9ade_51e87b0ded99.slice/crio-767acbfadc9636c298430fbdc876c480cd6bf791dacc569e12b1e84dfe0aa2a4 WatchSource:0}: Error finding container 767acbfadc9636c298430fbdc876c480cd6bf791dacc569e12b1e84dfe0aa2a4: Status 404 returned error can't find the container with id 767acbfadc9636c298430fbdc876c480cd6bf791dacc569e12b1e84dfe0aa2a4 Oct 13 17:42:21 crc kubenswrapper[4720]: I1013 17:42:21.583408 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 13 17:42:22 crc kubenswrapper[4720]: I1013 17:42:22.097376 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e4d97deb-0f09-435e-9ade-51e87b0ded99","Type":"ContainerStarted","Data":"36037d780cae7a1d82bd53e92e053a1fa3fa934796419c1153acfe6edf3a365a"} Oct 13 17:42:22 crc kubenswrapper[4720]: I1013 17:42:22.097852 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e4d97deb-0f09-435e-9ade-51e87b0ded99","Type":"ContainerStarted","Data":"e95f4f7bc093f23ec5e46ae6b78b68ef94974c8be0fb00bedd7b7993e9599a32"} Oct 13 17:42:22 crc kubenswrapper[4720]: I1013 17:42:22.097924 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e4d97deb-0f09-435e-9ade-51e87b0ded99","Type":"ContainerStarted","Data":"767acbfadc9636c298430fbdc876c480cd6bf791dacc569e12b1e84dfe0aa2a4"} Oct 13 17:42:22 crc kubenswrapper[4720]: I1013 17:42:22.125125 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.125097146 podStartE2EDuration="2.125097146s" podCreationTimestamp="2025-10-13 17:42:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:42:22.119322017 +0000 UTC m=+1087.576572149" 
watchObservedRunningTime="2025-10-13 17:42:22.125097146 +0000 UTC m=+1087.582347288" Oct 13 17:42:23 crc kubenswrapper[4720]: I1013 17:42:23.318070 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Oct 13 17:42:23 crc kubenswrapper[4720]: I1013 17:42:23.478648 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Oct 13 17:42:24 crc kubenswrapper[4720]: I1013 17:42:24.256388 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Oct 13 17:42:24 crc kubenswrapper[4720]: I1013 17:42:24.416600 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Oct 13 17:42:24 crc kubenswrapper[4720]: I1013 17:42:24.416948 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Oct 13 17:42:25 crc kubenswrapper[4720]: I1013 17:42:25.430812 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="4c54bcbf-ffd9-4595-b51d-0efcfcafd52a" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.197:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 13 17:42:25 crc kubenswrapper[4720]: I1013 17:42:25.430841 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="4c54bcbf-ffd9-4595-b51d-0efcfcafd52a" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.197:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 13 17:42:28 crc kubenswrapper[4720]: I1013 17:42:28.184381 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 13 17:42:28 crc kubenswrapper[4720]: I1013 17:42:28.184568 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="0b352587-867c-4276-93b5-89c6f922a2cc" containerName="kube-state-metrics" containerID="cri-o://efa665659a38bb5c901ba2e3ceb63eebfea09bf62d86dbbfe61157bad0e61c90" gracePeriod=30 Oct 13 17:42:28 crc kubenswrapper[4720]: I1013 17:42:28.479048 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Oct 13 17:42:28 crc kubenswrapper[4720]: I1013 17:42:28.513922 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Oct 13 17:42:28 crc kubenswrapper[4720]: I1013 17:42:28.675008 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 13 17:42:28 crc kubenswrapper[4720]: I1013 17:42:28.689442 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c6rg8\" (UniqueName: \"kubernetes.io/projected/0b352587-867c-4276-93b5-89c6f922a2cc-kube-api-access-c6rg8\") pod \"0b352587-867c-4276-93b5-89c6f922a2cc\" (UID: \"0b352587-867c-4276-93b5-89c6f922a2cc\") " Oct 13 17:42:28 crc kubenswrapper[4720]: I1013 17:42:28.698506 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b352587-867c-4276-93b5-89c6f922a2cc-kube-api-access-c6rg8" (OuterVolumeSpecName: "kube-api-access-c6rg8") pod "0b352587-867c-4276-93b5-89c6f922a2cc" (UID: "0b352587-867c-4276-93b5-89c6f922a2cc"). InnerVolumeSpecName "kube-api-access-c6rg8". 
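The startup probe failures logged above for nova-metadata-0 are plain HTTP timeouts: the kubelet's GET to https://10.217.0.197:8775/ was cancelled because no response headers arrived within the probe's deadline. The failing call reduces to a client with a hard timeout; a sketch, with the 1s value assumed (the real timeout comes from the pod spec, which this log does not show):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// probeOnce issues one GET with a hard deadline, the shape of the startup
// probe whose failures are quoted above ("Client.Timeout exceeded while
// awaiting headers"). Skipping certificate verification mirrors how probes
// reach pods serving internal certs; the timeout value is an assumption.
func probeOnce(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: timeout,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err // a slow endpoint surfaces as the timeout error seen above
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 400 {
		return fmt.Errorf("unhealthy: %s", resp.Status)
	}
	return nil
}

func main() {
	// 10.217.0.197:8775 is the nova-metadata-0 pod IP quoted in the log.
	if err := probeOnce("https://10.217.0.197:8775/", time.Second); err != nil {
		fmt.Println("startup probe failed:", err)
	}
}

Run outside the cluster the request fails immediately for a different reason (the pod IP is unreachable), but inside the pod network a slow server produces exactly the Client.Timeout error quoted above.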
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:42:28 crc kubenswrapper[4720]: I1013 17:42:28.792114 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c6rg8\" (UniqueName: \"kubernetes.io/projected/0b352587-867c-4276-93b5-89c6f922a2cc-kube-api-access-c6rg8\") on node \"crc\" DevicePath \"\"" Oct 13 17:42:29 crc kubenswrapper[4720]: I1013 17:42:29.186484 4720 generic.go:334] "Generic (PLEG): container finished" podID="0b352587-867c-4276-93b5-89c6f922a2cc" containerID="efa665659a38bb5c901ba2e3ceb63eebfea09bf62d86dbbfe61157bad0e61c90" exitCode=2 Oct 13 17:42:29 crc kubenswrapper[4720]: I1013 17:42:29.186566 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 13 17:42:29 crc kubenswrapper[4720]: I1013 17:42:29.186557 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0b352587-867c-4276-93b5-89c6f922a2cc","Type":"ContainerDied","Data":"efa665659a38bb5c901ba2e3ceb63eebfea09bf62d86dbbfe61157bad0e61c90"} Oct 13 17:42:29 crc kubenswrapper[4720]: I1013 17:42:29.186766 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0b352587-867c-4276-93b5-89c6f922a2cc","Type":"ContainerDied","Data":"a0ec76a6b4375e14c758f893bba2c358bd5dad3c5f4766ea38354e96252993a9"} Oct 13 17:42:29 crc kubenswrapper[4720]: I1013 17:42:29.186803 4720 scope.go:117] "RemoveContainer" containerID="efa665659a38bb5c901ba2e3ceb63eebfea09bf62d86dbbfe61157bad0e61c90" Oct 13 17:42:29 crc kubenswrapper[4720]: I1013 17:42:29.225808 4720 scope.go:117] "RemoveContainer" containerID="efa665659a38bb5c901ba2e3ceb63eebfea09bf62d86dbbfe61157bad0e61c90" Oct 13 17:42:29 crc kubenswrapper[4720]: E1013 17:42:29.233467 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"efa665659a38bb5c901ba2e3ceb63eebfea09bf62d86dbbfe61157bad0e61c90\": container with ID starting with efa665659a38bb5c901ba2e3ceb63eebfea09bf62d86dbbfe61157bad0e61c90 not found: ID does not exist" containerID="efa665659a38bb5c901ba2e3ceb63eebfea09bf62d86dbbfe61157bad0e61c90" Oct 13 17:42:29 crc kubenswrapper[4720]: I1013 17:42:29.233516 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efa665659a38bb5c901ba2e3ceb63eebfea09bf62d86dbbfe61157bad0e61c90"} err="failed to get container status \"efa665659a38bb5c901ba2e3ceb63eebfea09bf62d86dbbfe61157bad0e61c90\": rpc error: code = NotFound desc = could not find container \"efa665659a38bb5c901ba2e3ceb63eebfea09bf62d86dbbfe61157bad0e61c90\": container with ID starting with efa665659a38bb5c901ba2e3ceb63eebfea09bf62d86dbbfe61157bad0e61c90 not found: ID does not exist" Oct 13 17:42:29 crc kubenswrapper[4720]: I1013 17:42:29.241400 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Oct 13 17:42:29 crc kubenswrapper[4720]: I1013 17:42:29.247249 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 13 17:42:29 crc kubenswrapper[4720]: I1013 17:42:29.266113 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 13 17:42:29 crc kubenswrapper[4720]: I1013 17:42:29.293438 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Oct 13 17:42:29 crc kubenswrapper[4720]: E1013 17:42:29.293901 4720 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="0b352587-867c-4276-93b5-89c6f922a2cc" containerName="kube-state-metrics" Oct 13 17:42:29 crc kubenswrapper[4720]: I1013 17:42:29.293920 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b352587-867c-4276-93b5-89c6f922a2cc" containerName="kube-state-metrics" Oct 13 17:42:29 crc kubenswrapper[4720]: I1013 17:42:29.294230 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b352587-867c-4276-93b5-89c6f922a2cc" containerName="kube-state-metrics" Oct 13 17:42:29 crc kubenswrapper[4720]: I1013 17:42:29.295018 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 13 17:42:29 crc kubenswrapper[4720]: I1013 17:42:29.297355 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Oct 13 17:42:29 crc kubenswrapper[4720]: I1013 17:42:29.297870 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Oct 13 17:42:29 crc kubenswrapper[4720]: I1013 17:42:29.304927 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 13 17:42:29 crc kubenswrapper[4720]: I1013 17:42:29.400579 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvsww\" (UniqueName: \"kubernetes.io/projected/3ce4e7b1-7ffc-4444-8e0c-2bc9779d4ef1-kube-api-access-jvsww\") pod \"kube-state-metrics-0\" (UID: \"3ce4e7b1-7ffc-4444-8e0c-2bc9779d4ef1\") " pod="openstack/kube-state-metrics-0" Oct 13 17:42:29 crc kubenswrapper[4720]: I1013 17:42:29.400680 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ce4e7b1-7ffc-4444-8e0c-2bc9779d4ef1-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"3ce4e7b1-7ffc-4444-8e0c-2bc9779d4ef1\") " pod="openstack/kube-state-metrics-0" Oct 13 17:42:29 crc kubenswrapper[4720]: I1013 17:42:29.400722 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ce4e7b1-7ffc-4444-8e0c-2bc9779d4ef1-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"3ce4e7b1-7ffc-4444-8e0c-2bc9779d4ef1\") " pod="openstack/kube-state-metrics-0" Oct 13 17:42:29 crc kubenswrapper[4720]: I1013 17:42:29.400977 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/3ce4e7b1-7ffc-4444-8e0c-2bc9779d4ef1-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"3ce4e7b1-7ffc-4444-8e0c-2bc9779d4ef1\") " pod="openstack/kube-state-metrics-0" Oct 13 17:42:29 crc kubenswrapper[4720]: I1013 17:42:29.502532 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/3ce4e7b1-7ffc-4444-8e0c-2bc9779d4ef1-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"3ce4e7b1-7ffc-4444-8e0c-2bc9779d4ef1\") " pod="openstack/kube-state-metrics-0" Oct 13 17:42:29 crc kubenswrapper[4720]: I1013 17:42:29.502575 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvsww\" (UniqueName: \"kubernetes.io/projected/3ce4e7b1-7ffc-4444-8e0c-2bc9779d4ef1-kube-api-access-jvsww\") pod \"kube-state-metrics-0\" (UID: 
\"3ce4e7b1-7ffc-4444-8e0c-2bc9779d4ef1\") " pod="openstack/kube-state-metrics-0" Oct 13 17:42:29 crc kubenswrapper[4720]: I1013 17:42:29.502650 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ce4e7b1-7ffc-4444-8e0c-2bc9779d4ef1-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"3ce4e7b1-7ffc-4444-8e0c-2bc9779d4ef1\") " pod="openstack/kube-state-metrics-0" Oct 13 17:42:29 crc kubenswrapper[4720]: I1013 17:42:29.503017 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ce4e7b1-7ffc-4444-8e0c-2bc9779d4ef1-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"3ce4e7b1-7ffc-4444-8e0c-2bc9779d4ef1\") " pod="openstack/kube-state-metrics-0" Oct 13 17:42:29 crc kubenswrapper[4720]: I1013 17:42:29.506822 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ce4e7b1-7ffc-4444-8e0c-2bc9779d4ef1-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"3ce4e7b1-7ffc-4444-8e0c-2bc9779d4ef1\") " pod="openstack/kube-state-metrics-0" Oct 13 17:42:29 crc kubenswrapper[4720]: I1013 17:42:29.507098 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/3ce4e7b1-7ffc-4444-8e0c-2bc9779d4ef1-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"3ce4e7b1-7ffc-4444-8e0c-2bc9779d4ef1\") " pod="openstack/kube-state-metrics-0" Oct 13 17:42:29 crc kubenswrapper[4720]: I1013 17:42:29.517966 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ce4e7b1-7ffc-4444-8e0c-2bc9779d4ef1-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"3ce4e7b1-7ffc-4444-8e0c-2bc9779d4ef1\") " pod="openstack/kube-state-metrics-0" Oct 13 17:42:29 crc kubenswrapper[4720]: I1013 17:42:29.521688 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvsww\" (UniqueName: \"kubernetes.io/projected/3ce4e7b1-7ffc-4444-8e0c-2bc9779d4ef1-kube-api-access-jvsww\") pod \"kube-state-metrics-0\" (UID: \"3ce4e7b1-7ffc-4444-8e0c-2bc9779d4ef1\") " pod="openstack/kube-state-metrics-0" Oct 13 17:42:29 crc kubenswrapper[4720]: I1013 17:42:29.620207 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 13 17:42:29 crc kubenswrapper[4720]: I1013 17:42:29.951780 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 13 17:42:29 crc kubenswrapper[4720]: I1013 17:42:29.952508 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6e562d7d-19f2-4b5d-82a9-129d8128f66f" containerName="sg-core" containerID="cri-o://52e4948cd05dd0f58932a8ee391786e5bf454f487420016896bc53b4f4164bfa" gracePeriod=30 Oct 13 17:42:29 crc kubenswrapper[4720]: I1013 17:42:29.952559 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6e562d7d-19f2-4b5d-82a9-129d8128f66f" containerName="ceilometer-notification-agent" containerID="cri-o://a5c3001d1504ef7a974b2871e80e14c20eb321021e69f6c7a93a7ba98e613e9f" gracePeriod=30 Oct 13 17:42:29 crc kubenswrapper[4720]: I1013 17:42:29.952519 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6e562d7d-19f2-4b5d-82a9-129d8128f66f" containerName="proxy-httpd" containerID="cri-o://3ebf0b71f7adf2a473991fde6ef983541ecfc4bc5fc63ef4a3e18e5746125e11" gracePeriod=30 Oct 13 17:42:29 crc kubenswrapper[4720]: I1013 17:42:29.953031 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6e562d7d-19f2-4b5d-82a9-129d8128f66f" containerName="ceilometer-central-agent" containerID="cri-o://6719bdac9c22b77bc6474d466c3f90512df9624bf9a3c652cee60f5fdd7c1d1c" gracePeriod=30 Oct 13 17:42:30 crc kubenswrapper[4720]: I1013 17:42:30.095876 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 13 17:42:30 crc kubenswrapper[4720]: W1013 17:42:30.106299 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3ce4e7b1_7ffc_4444_8e0c_2bc9779d4ef1.slice/crio-e183788bc2b69046c40a8053374fe2623743b39a7c150e2829f127a4cbaf9785 WatchSource:0}: Error finding container e183788bc2b69046c40a8053374fe2623743b39a7c150e2829f127a4cbaf9785: Status 404 returned error can't find the container with id e183788bc2b69046c40a8053374fe2623743b39a7c150e2829f127a4cbaf9785 Oct 13 17:42:30 crc kubenswrapper[4720]: I1013 17:42:30.202099 4720 generic.go:334] "Generic (PLEG): container finished" podID="6e562d7d-19f2-4b5d-82a9-129d8128f66f" containerID="3ebf0b71f7adf2a473991fde6ef983541ecfc4bc5fc63ef4a3e18e5746125e11" exitCode=0 Oct 13 17:42:30 crc kubenswrapper[4720]: I1013 17:42:30.202137 4720 generic.go:334] "Generic (PLEG): container finished" podID="6e562d7d-19f2-4b5d-82a9-129d8128f66f" containerID="52e4948cd05dd0f58932a8ee391786e5bf454f487420016896bc53b4f4164bfa" exitCode=2 Oct 13 17:42:30 crc kubenswrapper[4720]: I1013 17:42:30.202185 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6e562d7d-19f2-4b5d-82a9-129d8128f66f","Type":"ContainerDied","Data":"3ebf0b71f7adf2a473991fde6ef983541ecfc4bc5fc63ef4a3e18e5746125e11"} Oct 13 17:42:30 crc kubenswrapper[4720]: I1013 17:42:30.202264 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6e562d7d-19f2-4b5d-82a9-129d8128f66f","Type":"ContainerDied","Data":"52e4948cd05dd0f58932a8ee391786e5bf454f487420016896bc53b4f4164bfa"} Oct 13 17:42:30 crc kubenswrapper[4720]: I1013 17:42:30.204001 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/kube-state-metrics-0" event={"ID":"3ce4e7b1-7ffc-4444-8e0c-2bc9779d4ef1","Type":"ContainerStarted","Data":"e183788bc2b69046c40a8053374fe2623743b39a7c150e2829f127a4cbaf9785"} Oct 13 17:42:31 crc kubenswrapper[4720]: I1013 17:42:31.094863 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 13 17:42:31 crc kubenswrapper[4720]: I1013 17:42:31.096665 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 13 17:42:31 crc kubenswrapper[4720]: I1013 17:42:31.178877 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b352587-867c-4276-93b5-89c6f922a2cc" path="/var/lib/kubelet/pods/0b352587-867c-4276-93b5-89c6f922a2cc/volumes" Oct 13 17:42:31 crc kubenswrapper[4720]: I1013 17:42:31.215728 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"3ce4e7b1-7ffc-4444-8e0c-2bc9779d4ef1","Type":"ContainerStarted","Data":"7b1126d1a1dc7e1f4e3c61b38e3b319353108d11e0841f28a1bb00ee842faa33"} Oct 13 17:42:31 crc kubenswrapper[4720]: I1013 17:42:31.215882 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Oct 13 17:42:31 crc kubenswrapper[4720]: I1013 17:42:31.220045 4720 generic.go:334] "Generic (PLEG): container finished" podID="6e562d7d-19f2-4b5d-82a9-129d8128f66f" containerID="6719bdac9c22b77bc6474d466c3f90512df9624bf9a3c652cee60f5fdd7c1d1c" exitCode=0 Oct 13 17:42:31 crc kubenswrapper[4720]: I1013 17:42:31.220088 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6e562d7d-19f2-4b5d-82a9-129d8128f66f","Type":"ContainerDied","Data":"6719bdac9c22b77bc6474d466c3f90512df9624bf9a3c652cee60f5fdd7c1d1c"} Oct 13 17:42:31 crc kubenswrapper[4720]: I1013 17:42:31.243915 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.88776034 podStartE2EDuration="2.243893101s" podCreationTimestamp="2025-10-13 17:42:29 +0000 UTC" firstStartedPulling="2025-10-13 17:42:30.109169666 +0000 UTC m=+1095.566419798" lastFinishedPulling="2025-10-13 17:42:30.465302427 +0000 UTC m=+1095.922552559" observedRunningTime="2025-10-13 17:42:31.230965898 +0000 UTC m=+1096.688216030" watchObservedRunningTime="2025-10-13 17:42:31.243893101 +0000 UTC m=+1096.701143233" Oct 13 17:42:32 crc kubenswrapper[4720]: I1013 17:42:32.135502 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e4d97deb-0f09-435e-9ade-51e87b0ded99" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.199:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 13 17:42:32 crc kubenswrapper[4720]: I1013 17:42:32.176420 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e4d97deb-0f09-435e-9ade-51e87b0ded99" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.199:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 13 17:42:32 crc kubenswrapper[4720]: I1013 17:42:32.682796 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 13 17:42:32 crc kubenswrapper[4720]: I1013 17:42:32.756914 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fgdgc\" (UniqueName: \"kubernetes.io/projected/6e562d7d-19f2-4b5d-82a9-129d8128f66f-kube-api-access-fgdgc\") pod \"6e562d7d-19f2-4b5d-82a9-129d8128f66f\" (UID: \"6e562d7d-19f2-4b5d-82a9-129d8128f66f\") " Oct 13 17:42:32 crc kubenswrapper[4720]: I1013 17:42:32.756980 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e562d7d-19f2-4b5d-82a9-129d8128f66f-run-httpd\") pod \"6e562d7d-19f2-4b5d-82a9-129d8128f66f\" (UID: \"6e562d7d-19f2-4b5d-82a9-129d8128f66f\") " Oct 13 17:42:32 crc kubenswrapper[4720]: I1013 17:42:32.757033 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e562d7d-19f2-4b5d-82a9-129d8128f66f-combined-ca-bundle\") pod \"6e562d7d-19f2-4b5d-82a9-129d8128f66f\" (UID: \"6e562d7d-19f2-4b5d-82a9-129d8128f66f\") " Oct 13 17:42:32 crc kubenswrapper[4720]: I1013 17:42:32.757137 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e562d7d-19f2-4b5d-82a9-129d8128f66f-config-data\") pod \"6e562d7d-19f2-4b5d-82a9-129d8128f66f\" (UID: \"6e562d7d-19f2-4b5d-82a9-129d8128f66f\") " Oct 13 17:42:32 crc kubenswrapper[4720]: I1013 17:42:32.757269 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e562d7d-19f2-4b5d-82a9-129d8128f66f-scripts\") pod \"6e562d7d-19f2-4b5d-82a9-129d8128f66f\" (UID: \"6e562d7d-19f2-4b5d-82a9-129d8128f66f\") " Oct 13 17:42:32 crc kubenswrapper[4720]: I1013 17:42:32.757309 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6e562d7d-19f2-4b5d-82a9-129d8128f66f-sg-core-conf-yaml\") pod \"6e562d7d-19f2-4b5d-82a9-129d8128f66f\" (UID: \"6e562d7d-19f2-4b5d-82a9-129d8128f66f\") " Oct 13 17:42:32 crc kubenswrapper[4720]: I1013 17:42:32.757400 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e562d7d-19f2-4b5d-82a9-129d8128f66f-log-httpd\") pod \"6e562d7d-19f2-4b5d-82a9-129d8128f66f\" (UID: \"6e562d7d-19f2-4b5d-82a9-129d8128f66f\") " Oct 13 17:42:32 crc kubenswrapper[4720]: I1013 17:42:32.759808 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e562d7d-19f2-4b5d-82a9-129d8128f66f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "6e562d7d-19f2-4b5d-82a9-129d8128f66f" (UID: "6e562d7d-19f2-4b5d-82a9-129d8128f66f"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 17:42:32 crc kubenswrapper[4720]: I1013 17:42:32.762457 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e562d7d-19f2-4b5d-82a9-129d8128f66f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "6e562d7d-19f2-4b5d-82a9-129d8128f66f" (UID: "6e562d7d-19f2-4b5d-82a9-129d8128f66f"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 17:42:32 crc kubenswrapper[4720]: I1013 17:42:32.777587 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e562d7d-19f2-4b5d-82a9-129d8128f66f-scripts" (OuterVolumeSpecName: "scripts") pod "6e562d7d-19f2-4b5d-82a9-129d8128f66f" (UID: "6e562d7d-19f2-4b5d-82a9-129d8128f66f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:42:32 crc kubenswrapper[4720]: I1013 17:42:32.779634 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e562d7d-19f2-4b5d-82a9-129d8128f66f-kube-api-access-fgdgc" (OuterVolumeSpecName: "kube-api-access-fgdgc") pod "6e562d7d-19f2-4b5d-82a9-129d8128f66f" (UID: "6e562d7d-19f2-4b5d-82a9-129d8128f66f"). InnerVolumeSpecName "kube-api-access-fgdgc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:42:32 crc kubenswrapper[4720]: I1013 17:42:32.817393 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e562d7d-19f2-4b5d-82a9-129d8128f66f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "6e562d7d-19f2-4b5d-82a9-129d8128f66f" (UID: "6e562d7d-19f2-4b5d-82a9-129d8128f66f"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:42:32 crc kubenswrapper[4720]: I1013 17:42:32.847558 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e562d7d-19f2-4b5d-82a9-129d8128f66f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6e562d7d-19f2-4b5d-82a9-129d8128f66f" (UID: "6e562d7d-19f2-4b5d-82a9-129d8128f66f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:42:32 crc kubenswrapper[4720]: I1013 17:42:32.860620 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e562d7d-19f2-4b5d-82a9-129d8128f66f-scripts\") on node \"crc\" DevicePath \"\"" Oct 13 17:42:32 crc kubenswrapper[4720]: I1013 17:42:32.860658 4720 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6e562d7d-19f2-4b5d-82a9-129d8128f66f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 13 17:42:32 crc kubenswrapper[4720]: I1013 17:42:32.860671 4720 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e562d7d-19f2-4b5d-82a9-129d8128f66f-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 13 17:42:32 crc kubenswrapper[4720]: I1013 17:42:32.860684 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fgdgc\" (UniqueName: \"kubernetes.io/projected/6e562d7d-19f2-4b5d-82a9-129d8128f66f-kube-api-access-fgdgc\") on node \"crc\" DevicePath \"\"" Oct 13 17:42:32 crc kubenswrapper[4720]: I1013 17:42:32.860695 4720 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e562d7d-19f2-4b5d-82a9-129d8128f66f-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 13 17:42:32 crc kubenswrapper[4720]: I1013 17:42:32.860705 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e562d7d-19f2-4b5d-82a9-129d8128f66f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 17:42:32 crc kubenswrapper[4720]: I1013 17:42:32.892384 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/secret/6e562d7d-19f2-4b5d-82a9-129d8128f66f-config-data" (OuterVolumeSpecName: "config-data") pod "6e562d7d-19f2-4b5d-82a9-129d8128f66f" (UID: "6e562d7d-19f2-4b5d-82a9-129d8128f66f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:42:32 crc kubenswrapper[4720]: I1013 17:42:32.963569 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e562d7d-19f2-4b5d-82a9-129d8128f66f-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 17:42:33 crc kubenswrapper[4720]: I1013 17:42:33.243231 4720 generic.go:334] "Generic (PLEG): container finished" podID="6e562d7d-19f2-4b5d-82a9-129d8128f66f" containerID="a5c3001d1504ef7a974b2871e80e14c20eb321021e69f6c7a93a7ba98e613e9f" exitCode=0 Oct 13 17:42:33 crc kubenswrapper[4720]: I1013 17:42:33.243328 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 13 17:42:33 crc kubenswrapper[4720]: I1013 17:42:33.243366 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6e562d7d-19f2-4b5d-82a9-129d8128f66f","Type":"ContainerDied","Data":"a5c3001d1504ef7a974b2871e80e14c20eb321021e69f6c7a93a7ba98e613e9f"} Oct 13 17:42:33 crc kubenswrapper[4720]: I1013 17:42:33.243657 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6e562d7d-19f2-4b5d-82a9-129d8128f66f","Type":"ContainerDied","Data":"a91a9df1bc1a3a79d410437d32935c9302d5bafa0be08ec1b4abe6bca901c454"} Oct 13 17:42:33 crc kubenswrapper[4720]: I1013 17:42:33.243682 4720 scope.go:117] "RemoveContainer" containerID="3ebf0b71f7adf2a473991fde6ef983541ecfc4bc5fc63ef4a3e18e5746125e11" Oct 13 17:42:33 crc kubenswrapper[4720]: I1013 17:42:33.266180 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 13 17:42:33 crc kubenswrapper[4720]: I1013 17:42:33.276080 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 13 17:42:33 crc kubenswrapper[4720]: I1013 17:42:33.295608 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 13 17:42:33 crc kubenswrapper[4720]: E1013 17:42:33.295968 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e562d7d-19f2-4b5d-82a9-129d8128f66f" containerName="proxy-httpd" Oct 13 17:42:33 crc kubenswrapper[4720]: I1013 17:42:33.295980 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e562d7d-19f2-4b5d-82a9-129d8128f66f" containerName="proxy-httpd" Oct 13 17:42:33 crc kubenswrapper[4720]: E1013 17:42:33.296001 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e562d7d-19f2-4b5d-82a9-129d8128f66f" containerName="ceilometer-notification-agent" Oct 13 17:42:33 crc kubenswrapper[4720]: I1013 17:42:33.296007 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e562d7d-19f2-4b5d-82a9-129d8128f66f" containerName="ceilometer-notification-agent" Oct 13 17:42:33 crc kubenswrapper[4720]: E1013 17:42:33.296017 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e562d7d-19f2-4b5d-82a9-129d8128f66f" containerName="ceilometer-central-agent" Oct 13 17:42:33 crc kubenswrapper[4720]: I1013 17:42:33.296022 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e562d7d-19f2-4b5d-82a9-129d8128f66f" containerName="ceilometer-central-agent" Oct 13 17:42:33 crc kubenswrapper[4720]: E1013 17:42:33.296040 4720 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="6e562d7d-19f2-4b5d-82a9-129d8128f66f" containerName="sg-core" Oct 13 17:42:33 crc kubenswrapper[4720]: I1013 17:42:33.296047 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e562d7d-19f2-4b5d-82a9-129d8128f66f" containerName="sg-core" Oct 13 17:42:33 crc kubenswrapper[4720]: I1013 17:42:33.296228 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e562d7d-19f2-4b5d-82a9-129d8128f66f" containerName="ceilometer-notification-agent" Oct 13 17:42:33 crc kubenswrapper[4720]: I1013 17:42:33.296240 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e562d7d-19f2-4b5d-82a9-129d8128f66f" containerName="proxy-httpd" Oct 13 17:42:33 crc kubenswrapper[4720]: I1013 17:42:33.296255 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e562d7d-19f2-4b5d-82a9-129d8128f66f" containerName="sg-core" Oct 13 17:42:33 crc kubenswrapper[4720]: I1013 17:42:33.296270 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e562d7d-19f2-4b5d-82a9-129d8128f66f" containerName="ceilometer-central-agent" Oct 13 17:42:33 crc kubenswrapper[4720]: I1013 17:42:33.297781 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 13 17:42:33 crc kubenswrapper[4720]: I1013 17:42:33.301544 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 13 17:42:33 crc kubenswrapper[4720]: I1013 17:42:33.301806 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 13 17:42:33 crc kubenswrapper[4720]: I1013 17:42:33.302211 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Oct 13 17:42:33 crc kubenswrapper[4720]: I1013 17:42:33.302367 4720 scope.go:117] "RemoveContainer" containerID="52e4948cd05dd0f58932a8ee391786e5bf454f487420016896bc53b4f4164bfa" Oct 13 17:42:33 crc kubenswrapper[4720]: I1013 17:42:33.314353 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 13 17:42:33 crc kubenswrapper[4720]: I1013 17:42:33.344485 4720 scope.go:117] "RemoveContainer" containerID="a5c3001d1504ef7a974b2871e80e14c20eb321021e69f6c7a93a7ba98e613e9f" Oct 13 17:42:33 crc kubenswrapper[4720]: I1013 17:42:33.367878 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/04262368-d4ca-4f43-9314-ebca3d3d1c2c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"04262368-d4ca-4f43-9314-ebca3d3d1c2c\") " pod="openstack/ceilometer-0" Oct 13 17:42:33 crc kubenswrapper[4720]: I1013 17:42:33.367926 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-km844\" (UniqueName: \"kubernetes.io/projected/04262368-d4ca-4f43-9314-ebca3d3d1c2c-kube-api-access-km844\") pod \"ceilometer-0\" (UID: \"04262368-d4ca-4f43-9314-ebca3d3d1c2c\") " pod="openstack/ceilometer-0" Oct 13 17:42:33 crc kubenswrapper[4720]: I1013 17:42:33.367950 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/04262368-d4ca-4f43-9314-ebca3d3d1c2c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"04262368-d4ca-4f43-9314-ebca3d3d1c2c\") " pod="openstack/ceilometer-0" Oct 13 17:42:33 crc kubenswrapper[4720]: I1013 17:42:33.367974 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04262368-d4ca-4f43-9314-ebca3d3d1c2c-run-httpd\") pod \"ceilometer-0\" (UID: \"04262368-d4ca-4f43-9314-ebca3d3d1c2c\") " pod="openstack/ceilometer-0" Oct 13 17:42:33 crc kubenswrapper[4720]: I1013 17:42:33.368047 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04262368-d4ca-4f43-9314-ebca3d3d1c2c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"04262368-d4ca-4f43-9314-ebca3d3d1c2c\") " pod="openstack/ceilometer-0" Oct 13 17:42:33 crc kubenswrapper[4720]: I1013 17:42:33.368100 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04262368-d4ca-4f43-9314-ebca3d3d1c2c-config-data\") pod \"ceilometer-0\" (UID: \"04262368-d4ca-4f43-9314-ebca3d3d1c2c\") " pod="openstack/ceilometer-0" Oct 13 17:42:33 crc kubenswrapper[4720]: I1013 17:42:33.368125 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04262368-d4ca-4f43-9314-ebca3d3d1c2c-scripts\") pod \"ceilometer-0\" (UID: \"04262368-d4ca-4f43-9314-ebca3d3d1c2c\") " pod="openstack/ceilometer-0" Oct 13 17:42:33 crc kubenswrapper[4720]: I1013 17:42:33.368150 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04262368-d4ca-4f43-9314-ebca3d3d1c2c-log-httpd\") pod \"ceilometer-0\" (UID: \"04262368-d4ca-4f43-9314-ebca3d3d1c2c\") " pod="openstack/ceilometer-0" Oct 13 17:42:33 crc kubenswrapper[4720]: I1013 17:42:33.370328 4720 scope.go:117] "RemoveContainer" containerID="6719bdac9c22b77bc6474d466c3f90512df9624bf9a3c652cee60f5fdd7c1d1c" Oct 13 17:42:33 crc kubenswrapper[4720]: I1013 17:42:33.388934 4720 scope.go:117] "RemoveContainer" containerID="3ebf0b71f7adf2a473991fde6ef983541ecfc4bc5fc63ef4a3e18e5746125e11" Oct 13 17:42:33 crc kubenswrapper[4720]: E1013 17:42:33.389370 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ebf0b71f7adf2a473991fde6ef983541ecfc4bc5fc63ef4a3e18e5746125e11\": container with ID starting with 3ebf0b71f7adf2a473991fde6ef983541ecfc4bc5fc63ef4a3e18e5746125e11 not found: ID does not exist" containerID="3ebf0b71f7adf2a473991fde6ef983541ecfc4bc5fc63ef4a3e18e5746125e11" Oct 13 17:42:33 crc kubenswrapper[4720]: I1013 17:42:33.389416 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ebf0b71f7adf2a473991fde6ef983541ecfc4bc5fc63ef4a3e18e5746125e11"} err="failed to get container status \"3ebf0b71f7adf2a473991fde6ef983541ecfc4bc5fc63ef4a3e18e5746125e11\": rpc error: code = NotFound desc = could not find container \"3ebf0b71f7adf2a473991fde6ef983541ecfc4bc5fc63ef4a3e18e5746125e11\": container with ID starting with 3ebf0b71f7adf2a473991fde6ef983541ecfc4bc5fc63ef4a3e18e5746125e11 not found: ID does not exist" Oct 13 17:42:33 crc kubenswrapper[4720]: I1013 17:42:33.389444 4720 scope.go:117] "RemoveContainer" containerID="52e4948cd05dd0f58932a8ee391786e5bf454f487420016896bc53b4f4164bfa" Oct 13 17:42:33 crc kubenswrapper[4720]: E1013 17:42:33.389831 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52e4948cd05dd0f58932a8ee391786e5bf454f487420016896bc53b4f4164bfa\": 
container with ID starting with 52e4948cd05dd0f58932a8ee391786e5bf454f487420016896bc53b4f4164bfa not found: ID does not exist" containerID="52e4948cd05dd0f58932a8ee391786e5bf454f487420016896bc53b4f4164bfa" Oct 13 17:42:33 crc kubenswrapper[4720]: I1013 17:42:33.389867 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52e4948cd05dd0f58932a8ee391786e5bf454f487420016896bc53b4f4164bfa"} err="failed to get container status \"52e4948cd05dd0f58932a8ee391786e5bf454f487420016896bc53b4f4164bfa\": rpc error: code = NotFound desc = could not find container \"52e4948cd05dd0f58932a8ee391786e5bf454f487420016896bc53b4f4164bfa\": container with ID starting with 52e4948cd05dd0f58932a8ee391786e5bf454f487420016896bc53b4f4164bfa not found: ID does not exist" Oct 13 17:42:33 crc kubenswrapper[4720]: I1013 17:42:33.389887 4720 scope.go:117] "RemoveContainer" containerID="a5c3001d1504ef7a974b2871e80e14c20eb321021e69f6c7a93a7ba98e613e9f" Oct 13 17:42:33 crc kubenswrapper[4720]: E1013 17:42:33.390120 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5c3001d1504ef7a974b2871e80e14c20eb321021e69f6c7a93a7ba98e613e9f\": container with ID starting with a5c3001d1504ef7a974b2871e80e14c20eb321021e69f6c7a93a7ba98e613e9f not found: ID does not exist" containerID="a5c3001d1504ef7a974b2871e80e14c20eb321021e69f6c7a93a7ba98e613e9f" Oct 13 17:42:33 crc kubenswrapper[4720]: I1013 17:42:33.390145 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5c3001d1504ef7a974b2871e80e14c20eb321021e69f6c7a93a7ba98e613e9f"} err="failed to get container status \"a5c3001d1504ef7a974b2871e80e14c20eb321021e69f6c7a93a7ba98e613e9f\": rpc error: code = NotFound desc = could not find container \"a5c3001d1504ef7a974b2871e80e14c20eb321021e69f6c7a93a7ba98e613e9f\": container with ID starting with a5c3001d1504ef7a974b2871e80e14c20eb321021e69f6c7a93a7ba98e613e9f not found: ID does not exist" Oct 13 17:42:33 crc kubenswrapper[4720]: I1013 17:42:33.390161 4720 scope.go:117] "RemoveContainer" containerID="6719bdac9c22b77bc6474d466c3f90512df9624bf9a3c652cee60f5fdd7c1d1c" Oct 13 17:42:33 crc kubenswrapper[4720]: E1013 17:42:33.390789 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6719bdac9c22b77bc6474d466c3f90512df9624bf9a3c652cee60f5fdd7c1d1c\": container with ID starting with 6719bdac9c22b77bc6474d466c3f90512df9624bf9a3c652cee60f5fdd7c1d1c not found: ID does not exist" containerID="6719bdac9c22b77bc6474d466c3f90512df9624bf9a3c652cee60f5fdd7c1d1c" Oct 13 17:42:33 crc kubenswrapper[4720]: I1013 17:42:33.390816 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6719bdac9c22b77bc6474d466c3f90512df9624bf9a3c652cee60f5fdd7c1d1c"} err="failed to get container status \"6719bdac9c22b77bc6474d466c3f90512df9624bf9a3c652cee60f5fdd7c1d1c\": rpc error: code = NotFound desc = could not find container \"6719bdac9c22b77bc6474d466c3f90512df9624bf9a3c652cee60f5fdd7c1d1c\": container with ID starting with 6719bdac9c22b77bc6474d466c3f90512df9624bf9a3c652cee60f5fdd7c1d1c not found: ID does not exist" Oct 13 17:42:33 crc kubenswrapper[4720]: I1013 17:42:33.469256 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04262368-d4ca-4f43-9314-ebca3d3d1c2c-config-data\") pod \"ceilometer-0\" (UID: 
\"04262368-d4ca-4f43-9314-ebca3d3d1c2c\") " pod="openstack/ceilometer-0" Oct 13 17:42:33 crc kubenswrapper[4720]: I1013 17:42:33.469335 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04262368-d4ca-4f43-9314-ebca3d3d1c2c-scripts\") pod \"ceilometer-0\" (UID: \"04262368-d4ca-4f43-9314-ebca3d3d1c2c\") " pod="openstack/ceilometer-0" Oct 13 17:42:33 crc kubenswrapper[4720]: I1013 17:42:33.469388 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04262368-d4ca-4f43-9314-ebca3d3d1c2c-log-httpd\") pod \"ceilometer-0\" (UID: \"04262368-d4ca-4f43-9314-ebca3d3d1c2c\") " pod="openstack/ceilometer-0" Oct 13 17:42:33 crc kubenswrapper[4720]: I1013 17:42:33.469494 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/04262368-d4ca-4f43-9314-ebca3d3d1c2c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"04262368-d4ca-4f43-9314-ebca3d3d1c2c\") " pod="openstack/ceilometer-0" Oct 13 17:42:33 crc kubenswrapper[4720]: I1013 17:42:33.469534 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-km844\" (UniqueName: \"kubernetes.io/projected/04262368-d4ca-4f43-9314-ebca3d3d1c2c-kube-api-access-km844\") pod \"ceilometer-0\" (UID: \"04262368-d4ca-4f43-9314-ebca3d3d1c2c\") " pod="openstack/ceilometer-0" Oct 13 17:42:33 crc kubenswrapper[4720]: I1013 17:42:33.469574 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/04262368-d4ca-4f43-9314-ebca3d3d1c2c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"04262368-d4ca-4f43-9314-ebca3d3d1c2c\") " pod="openstack/ceilometer-0" Oct 13 17:42:33 crc kubenswrapper[4720]: I1013 17:42:33.469612 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04262368-d4ca-4f43-9314-ebca3d3d1c2c-run-httpd\") pod \"ceilometer-0\" (UID: \"04262368-d4ca-4f43-9314-ebca3d3d1c2c\") " pod="openstack/ceilometer-0" Oct 13 17:42:33 crc kubenswrapper[4720]: I1013 17:42:33.469678 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04262368-d4ca-4f43-9314-ebca3d3d1c2c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"04262368-d4ca-4f43-9314-ebca3d3d1c2c\") " pod="openstack/ceilometer-0" Oct 13 17:42:33 crc kubenswrapper[4720]: I1013 17:42:33.469983 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04262368-d4ca-4f43-9314-ebca3d3d1c2c-log-httpd\") pod \"ceilometer-0\" (UID: \"04262368-d4ca-4f43-9314-ebca3d3d1c2c\") " pod="openstack/ceilometer-0" Oct 13 17:42:33 crc kubenswrapper[4720]: I1013 17:42:33.470254 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04262368-d4ca-4f43-9314-ebca3d3d1c2c-run-httpd\") pod \"ceilometer-0\" (UID: \"04262368-d4ca-4f43-9314-ebca3d3d1c2c\") " pod="openstack/ceilometer-0" Oct 13 17:42:33 crc kubenswrapper[4720]: I1013 17:42:33.475084 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/04262368-d4ca-4f43-9314-ebca3d3d1c2c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"04262368-d4ca-4f43-9314-ebca3d3d1c2c\") " pod="openstack/ceilometer-0" Oct 13 17:42:33 crc kubenswrapper[4720]: I1013 17:42:33.475467 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/04262368-d4ca-4f43-9314-ebca3d3d1c2c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"04262368-d4ca-4f43-9314-ebca3d3d1c2c\") " pod="openstack/ceilometer-0" Oct 13 17:42:33 crc kubenswrapper[4720]: I1013 17:42:33.477301 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04262368-d4ca-4f43-9314-ebca3d3d1c2c-config-data\") pod \"ceilometer-0\" (UID: \"04262368-d4ca-4f43-9314-ebca3d3d1c2c\") " pod="openstack/ceilometer-0" Oct 13 17:42:33 crc kubenswrapper[4720]: I1013 17:42:33.477703 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04262368-d4ca-4f43-9314-ebca3d3d1c2c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"04262368-d4ca-4f43-9314-ebca3d3d1c2c\") " pod="openstack/ceilometer-0" Oct 13 17:42:33 crc kubenswrapper[4720]: I1013 17:42:33.480129 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04262368-d4ca-4f43-9314-ebca3d3d1c2c-scripts\") pod \"ceilometer-0\" (UID: \"04262368-d4ca-4f43-9314-ebca3d3d1c2c\") " pod="openstack/ceilometer-0" Oct 13 17:42:33 crc kubenswrapper[4720]: I1013 17:42:33.498908 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-km844\" (UniqueName: \"kubernetes.io/projected/04262368-d4ca-4f43-9314-ebca3d3d1c2c-kube-api-access-km844\") pod \"ceilometer-0\" (UID: \"04262368-d4ca-4f43-9314-ebca3d3d1c2c\") " pod="openstack/ceilometer-0" Oct 13 17:42:33 crc kubenswrapper[4720]: I1013 17:42:33.638652 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 13 17:42:34 crc kubenswrapper[4720]: I1013 17:42:34.135261 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 13 17:42:34 crc kubenswrapper[4720]: I1013 17:42:34.256715 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"04262368-d4ca-4f43-9314-ebca3d3d1c2c","Type":"ContainerStarted","Data":"200e96e22bc36631ca64a5bf13b3ac91a40325a18ecf9f5f1f8a1bd0d60980f4"} Oct 13 17:42:34 crc kubenswrapper[4720]: I1013 17:42:34.427775 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Oct 13 17:42:34 crc kubenswrapper[4720]: I1013 17:42:34.430316 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Oct 13 17:42:34 crc kubenswrapper[4720]: I1013 17:42:34.437468 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Oct 13 17:42:35 crc kubenswrapper[4720]: I1013 17:42:35.185425 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e562d7d-19f2-4b5d-82a9-129d8128f66f" path="/var/lib/kubelet/pods/6e562d7d-19f2-4b5d-82a9-129d8128f66f/volumes" Oct 13 17:42:35 crc kubenswrapper[4720]: I1013 17:42:35.272400 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"04262368-d4ca-4f43-9314-ebca3d3d1c2c","Type":"ContainerStarted","Data":"bc3042ab103905f3364870ccee11241170f26f67e99474d66367b901f2092858"} Oct 13 17:42:35 crc kubenswrapper[4720]: I1013 17:42:35.279934 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Oct 13 17:42:36 crc kubenswrapper[4720]: I1013 17:42:36.172991 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 13 17:42:36 crc kubenswrapper[4720]: I1013 17:42:36.225468 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b11439ac-25b3-45db-8d79-401448c8ef1a-config-data\") pod \"b11439ac-25b3-45db-8d79-401448c8ef1a\" (UID: \"b11439ac-25b3-45db-8d79-401448c8ef1a\") " Oct 13 17:42:36 crc kubenswrapper[4720]: I1013 17:42:36.225526 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b11439ac-25b3-45db-8d79-401448c8ef1a-combined-ca-bundle\") pod \"b11439ac-25b3-45db-8d79-401448c8ef1a\" (UID: \"b11439ac-25b3-45db-8d79-401448c8ef1a\") " Oct 13 17:42:36 crc kubenswrapper[4720]: I1013 17:42:36.225586 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2vvf2\" (UniqueName: \"kubernetes.io/projected/b11439ac-25b3-45db-8d79-401448c8ef1a-kube-api-access-2vvf2\") pod \"b11439ac-25b3-45db-8d79-401448c8ef1a\" (UID: \"b11439ac-25b3-45db-8d79-401448c8ef1a\") " Oct 13 17:42:36 crc kubenswrapper[4720]: I1013 17:42:36.229939 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11439ac-25b3-45db-8d79-401448c8ef1a-kube-api-access-2vvf2" (OuterVolumeSpecName: "kube-api-access-2vvf2") pod "b11439ac-25b3-45db-8d79-401448c8ef1a" (UID: "b11439ac-25b3-45db-8d79-401448c8ef1a"). InnerVolumeSpecName "kube-api-access-2vvf2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:42:36 crc kubenswrapper[4720]: I1013 17:42:36.251769 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b11439ac-25b3-45db-8d79-401448c8ef1a-config-data" (OuterVolumeSpecName: "config-data") pod "b11439ac-25b3-45db-8d79-401448c8ef1a" (UID: "b11439ac-25b3-45db-8d79-401448c8ef1a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:42:36 crc kubenswrapper[4720]: I1013 17:42:36.252796 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b11439ac-25b3-45db-8d79-401448c8ef1a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b11439ac-25b3-45db-8d79-401448c8ef1a" (UID: "b11439ac-25b3-45db-8d79-401448c8ef1a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:42:36 crc kubenswrapper[4720]: I1013 17:42:36.282430 4720 generic.go:334] "Generic (PLEG): container finished" podID="b11439ac-25b3-45db-8d79-401448c8ef1a" containerID="cc578c42821f693c85b96858485605c733349afc8d2e9926e955016ecb2b2f5e" exitCode=137 Oct 13 17:42:36 crc kubenswrapper[4720]: I1013 17:42:36.282471 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b11439ac-25b3-45db-8d79-401448c8ef1a","Type":"ContainerDied","Data":"cc578c42821f693c85b96858485605c733349afc8d2e9926e955016ecb2b2f5e"} Oct 13 17:42:36 crc kubenswrapper[4720]: I1013 17:42:36.282477 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 13 17:42:36 crc kubenswrapper[4720]: I1013 17:42:36.282503 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b11439ac-25b3-45db-8d79-401448c8ef1a","Type":"ContainerDied","Data":"e7184b36afdc6ace9a0a7b546fa2157a1b87919923cd7e54033c208a865bcb6a"} Oct 13 17:42:36 crc kubenswrapper[4720]: I1013 17:42:36.282538 4720 scope.go:117] "RemoveContainer" containerID="cc578c42821f693c85b96858485605c733349afc8d2e9926e955016ecb2b2f5e" Oct 13 17:42:36 crc kubenswrapper[4720]: I1013 17:42:36.286232 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"04262368-d4ca-4f43-9314-ebca3d3d1c2c","Type":"ContainerStarted","Data":"bf91cfeb690458f450f7edfd433a4c0a172b7186713b4b1f2aebd21250136ae1"} Oct 13 17:42:36 crc kubenswrapper[4720]: I1013 17:42:36.286264 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"04262368-d4ca-4f43-9314-ebca3d3d1c2c","Type":"ContainerStarted","Data":"88e6b377a0a605d9700b8ff6219c87ab6f52d77ac2e2744d11a141e335dfe0e7"} Oct 13 17:42:36 crc kubenswrapper[4720]: I1013 17:42:36.304967 4720 scope.go:117] "RemoveContainer" containerID="cc578c42821f693c85b96858485605c733349afc8d2e9926e955016ecb2b2f5e" Oct 13 17:42:36 crc kubenswrapper[4720]: E1013 17:42:36.305493 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc578c42821f693c85b96858485605c733349afc8d2e9926e955016ecb2b2f5e\": container with ID starting with cc578c42821f693c85b96858485605c733349afc8d2e9926e955016ecb2b2f5e not found: ID does not exist" containerID="cc578c42821f693c85b96858485605c733349afc8d2e9926e955016ecb2b2f5e" Oct 13 17:42:36 crc kubenswrapper[4720]: I1013 17:42:36.305542 4720 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"cc578c42821f693c85b96858485605c733349afc8d2e9926e955016ecb2b2f5e"} err="failed to get container status \"cc578c42821f693c85b96858485605c733349afc8d2e9926e955016ecb2b2f5e\": rpc error: code = NotFound desc = could not find container \"cc578c42821f693c85b96858485605c733349afc8d2e9926e955016ecb2b2f5e\": container with ID starting with cc578c42821f693c85b96858485605c733349afc8d2e9926e955016ecb2b2f5e not found: ID does not exist" Oct 13 17:42:36 crc kubenswrapper[4720]: I1013 17:42:36.314567 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 13 17:42:36 crc kubenswrapper[4720]: I1013 17:42:36.324861 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 13 17:42:36 crc kubenswrapper[4720]: I1013 17:42:36.327426 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b11439ac-25b3-45db-8d79-401448c8ef1a-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 17:42:36 crc kubenswrapper[4720]: I1013 17:42:36.327460 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b11439ac-25b3-45db-8d79-401448c8ef1a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 17:42:36 crc kubenswrapper[4720]: I1013 17:42:36.327474 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2vvf2\" (UniqueName: \"kubernetes.io/projected/b11439ac-25b3-45db-8d79-401448c8ef1a-kube-api-access-2vvf2\") on node \"crc\" DevicePath \"\"" Oct 13 17:42:36 crc kubenswrapper[4720]: I1013 17:42:36.332845 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 13 17:42:36 crc kubenswrapper[4720]: E1013 17:42:36.333247 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b11439ac-25b3-45db-8d79-401448c8ef1a" containerName="nova-cell1-novncproxy-novncproxy" Oct 13 17:42:36 crc kubenswrapper[4720]: I1013 17:42:36.333262 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="b11439ac-25b3-45db-8d79-401448c8ef1a" containerName="nova-cell1-novncproxy-novncproxy" Oct 13 17:42:36 crc kubenswrapper[4720]: I1013 17:42:36.333913 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="b11439ac-25b3-45db-8d79-401448c8ef1a" containerName="nova-cell1-novncproxy-novncproxy" Oct 13 17:42:36 crc kubenswrapper[4720]: I1013 17:42:36.335169 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 13 17:42:36 crc kubenswrapper[4720]: I1013 17:42:36.337064 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Oct 13 17:42:36 crc kubenswrapper[4720]: I1013 17:42:36.337390 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Oct 13 17:42:36 crc kubenswrapper[4720]: I1013 17:42:36.340049 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Oct 13 17:42:36 crc kubenswrapper[4720]: I1013 17:42:36.350568 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 13 17:42:36 crc kubenswrapper[4720]: I1013 17:42:36.429543 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/3be6a675-72c8-4120-9b8b-458dba2fe7f2-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3be6a675-72c8-4120-9b8b-458dba2fe7f2\") " pod="openstack/nova-cell1-novncproxy-0" Oct 13 17:42:36 crc kubenswrapper[4720]: I1013 17:42:36.429597 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/3be6a675-72c8-4120-9b8b-458dba2fe7f2-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3be6a675-72c8-4120-9b8b-458dba2fe7f2\") " pod="openstack/nova-cell1-novncproxy-0" Oct 13 17:42:36 crc kubenswrapper[4720]: I1013 17:42:36.429640 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3be6a675-72c8-4120-9b8b-458dba2fe7f2-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3be6a675-72c8-4120-9b8b-458dba2fe7f2\") " pod="openstack/nova-cell1-novncproxy-0" Oct 13 17:42:36 crc kubenswrapper[4720]: I1013 17:42:36.429726 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3be6a675-72c8-4120-9b8b-458dba2fe7f2-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3be6a675-72c8-4120-9b8b-458dba2fe7f2\") " pod="openstack/nova-cell1-novncproxy-0" Oct 13 17:42:36 crc kubenswrapper[4720]: I1013 17:42:36.429769 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zzj9\" (UniqueName: \"kubernetes.io/projected/3be6a675-72c8-4120-9b8b-458dba2fe7f2-kube-api-access-5zzj9\") pod \"nova-cell1-novncproxy-0\" (UID: \"3be6a675-72c8-4120-9b8b-458dba2fe7f2\") " pod="openstack/nova-cell1-novncproxy-0" Oct 13 17:42:36 crc kubenswrapper[4720]: I1013 17:42:36.530889 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/3be6a675-72c8-4120-9b8b-458dba2fe7f2-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3be6a675-72c8-4120-9b8b-458dba2fe7f2\") " pod="openstack/nova-cell1-novncproxy-0" Oct 13 17:42:36 crc kubenswrapper[4720]: I1013 17:42:36.530936 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/3be6a675-72c8-4120-9b8b-458dba2fe7f2-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3be6a675-72c8-4120-9b8b-458dba2fe7f2\") " 
pod="openstack/nova-cell1-novncproxy-0" Oct 13 17:42:36 crc kubenswrapper[4720]: I1013 17:42:36.530976 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3be6a675-72c8-4120-9b8b-458dba2fe7f2-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3be6a675-72c8-4120-9b8b-458dba2fe7f2\") " pod="openstack/nova-cell1-novncproxy-0" Oct 13 17:42:36 crc kubenswrapper[4720]: I1013 17:42:36.531021 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3be6a675-72c8-4120-9b8b-458dba2fe7f2-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3be6a675-72c8-4120-9b8b-458dba2fe7f2\") " pod="openstack/nova-cell1-novncproxy-0" Oct 13 17:42:36 crc kubenswrapper[4720]: I1013 17:42:36.531062 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zzj9\" (UniqueName: \"kubernetes.io/projected/3be6a675-72c8-4120-9b8b-458dba2fe7f2-kube-api-access-5zzj9\") pod \"nova-cell1-novncproxy-0\" (UID: \"3be6a675-72c8-4120-9b8b-458dba2fe7f2\") " pod="openstack/nova-cell1-novncproxy-0" Oct 13 17:42:36 crc kubenswrapper[4720]: I1013 17:42:36.535087 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/3be6a675-72c8-4120-9b8b-458dba2fe7f2-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3be6a675-72c8-4120-9b8b-458dba2fe7f2\") " pod="openstack/nova-cell1-novncproxy-0" Oct 13 17:42:36 crc kubenswrapper[4720]: I1013 17:42:36.535241 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3be6a675-72c8-4120-9b8b-458dba2fe7f2-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3be6a675-72c8-4120-9b8b-458dba2fe7f2\") " pod="openstack/nova-cell1-novncproxy-0" Oct 13 17:42:36 crc kubenswrapper[4720]: I1013 17:42:36.535910 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3be6a675-72c8-4120-9b8b-458dba2fe7f2-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3be6a675-72c8-4120-9b8b-458dba2fe7f2\") " pod="openstack/nova-cell1-novncproxy-0" Oct 13 17:42:36 crc kubenswrapper[4720]: I1013 17:42:36.546096 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/3be6a675-72c8-4120-9b8b-458dba2fe7f2-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3be6a675-72c8-4120-9b8b-458dba2fe7f2\") " pod="openstack/nova-cell1-novncproxy-0" Oct 13 17:42:36 crc kubenswrapper[4720]: I1013 17:42:36.549575 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zzj9\" (UniqueName: \"kubernetes.io/projected/3be6a675-72c8-4120-9b8b-458dba2fe7f2-kube-api-access-5zzj9\") pod \"nova-cell1-novncproxy-0\" (UID: \"3be6a675-72c8-4120-9b8b-458dba2fe7f2\") " pod="openstack/nova-cell1-novncproxy-0" Oct 13 17:42:36 crc kubenswrapper[4720]: I1013 17:42:36.654809 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 13 17:42:37 crc kubenswrapper[4720]: I1013 17:42:37.117601 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 13 17:42:37 crc kubenswrapper[4720]: W1013 17:42:37.119019 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3be6a675_72c8_4120_9b8b_458dba2fe7f2.slice/crio-65c5b351f420ef291c767db09d9fb40af3a3955325c788f6a51012d41ba75e68 WatchSource:0}: Error finding container 65c5b351f420ef291c767db09d9fb40af3a3955325c788f6a51012d41ba75e68: Status 404 returned error can't find the container with id 65c5b351f420ef291c767db09d9fb40af3a3955325c788f6a51012d41ba75e68 Oct 13 17:42:37 crc kubenswrapper[4720]: I1013 17:42:37.177965 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11439ac-25b3-45db-8d79-401448c8ef1a" path="/var/lib/kubelet/pods/b11439ac-25b3-45db-8d79-401448c8ef1a/volumes" Oct 13 17:42:37 crc kubenswrapper[4720]: I1013 17:42:37.301983 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3be6a675-72c8-4120-9b8b-458dba2fe7f2","Type":"ContainerStarted","Data":"65c5b351f420ef291c767db09d9fb40af3a3955325c788f6a51012d41ba75e68"} Oct 13 17:42:38 crc kubenswrapper[4720]: I1013 17:42:38.312929 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3be6a675-72c8-4120-9b8b-458dba2fe7f2","Type":"ContainerStarted","Data":"77262cce6b9e145a07b1806e72c1e2a358877b0dc3d49d82bdce605066bdf44a"} Oct 13 17:42:38 crc kubenswrapper[4720]: I1013 17:42:38.319046 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"04262368-d4ca-4f43-9314-ebca3d3d1c2c","Type":"ContainerStarted","Data":"d220c2da27274bd15395c535e6e52136df267a35b901836a6d79975343dc77ef"} Oct 13 17:42:38 crc kubenswrapper[4720]: I1013 17:42:38.319707 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 13 17:42:38 crc kubenswrapper[4720]: I1013 17:42:38.359704 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.683167152 podStartE2EDuration="5.359684714s" podCreationTimestamp="2025-10-13 17:42:33 +0000 UTC" firstStartedPulling="2025-10-13 17:42:34.128561218 +0000 UTC m=+1099.585811360" lastFinishedPulling="2025-10-13 17:42:37.80507877 +0000 UTC m=+1103.262328922" observedRunningTime="2025-10-13 17:42:38.358209075 +0000 UTC m=+1103.815459217" watchObservedRunningTime="2025-10-13 17:42:38.359684714 +0000 UTC m=+1103.816934866" Oct 13 17:42:38 crc kubenswrapper[4720]: I1013 17:42:38.371919 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.371902619 podStartE2EDuration="2.371902619s" podCreationTimestamp="2025-10-13 17:42:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:42:38.333679762 +0000 UTC m=+1103.790929904" watchObservedRunningTime="2025-10-13 17:42:38.371902619 +0000 UTC m=+1103.829152751" Oct 13 17:42:39 crc kubenswrapper[4720]: I1013 17:42:39.631264 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Oct 13 17:42:41 crc kubenswrapper[4720]: I1013 17:42:41.098515 4720 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openstack/nova-api-0" Oct 13 17:42:41 crc kubenswrapper[4720]: I1013 17:42:41.100464 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Oct 13 17:42:41 crc kubenswrapper[4720]: I1013 17:42:41.100822 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Oct 13 17:42:41 crc kubenswrapper[4720]: I1013 17:42:41.103064 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Oct 13 17:42:41 crc kubenswrapper[4720]: I1013 17:42:41.350877 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Oct 13 17:42:41 crc kubenswrapper[4720]: I1013 17:42:41.356330 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Oct 13 17:42:41 crc kubenswrapper[4720]: I1013 17:42:41.542823 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-c5s42"] Oct 13 17:42:41 crc kubenswrapper[4720]: I1013 17:42:41.545669 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59cf4bdb65-c5s42" Oct 13 17:42:41 crc kubenswrapper[4720]: I1013 17:42:41.562333 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-c5s42"] Oct 13 17:42:41 crc kubenswrapper[4720]: I1013 17:42:41.655615 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Oct 13 17:42:41 crc kubenswrapper[4720]: I1013 17:42:41.736671 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f91af118-9710-48b0-893b-c41d53a4088b-config\") pod \"dnsmasq-dns-59cf4bdb65-c5s42\" (UID: \"f91af118-9710-48b0-893b-c41d53a4088b\") " pod="openstack/dnsmasq-dns-59cf4bdb65-c5s42" Oct 13 17:42:41 crc kubenswrapper[4720]: I1013 17:42:41.736840 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdwxr\" (UniqueName: \"kubernetes.io/projected/f91af118-9710-48b0-893b-c41d53a4088b-kube-api-access-gdwxr\") pod \"dnsmasq-dns-59cf4bdb65-c5s42\" (UID: \"f91af118-9710-48b0-893b-c41d53a4088b\") " pod="openstack/dnsmasq-dns-59cf4bdb65-c5s42" Oct 13 17:42:41 crc kubenswrapper[4720]: I1013 17:42:41.736873 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f91af118-9710-48b0-893b-c41d53a4088b-dns-svc\") pod \"dnsmasq-dns-59cf4bdb65-c5s42\" (UID: \"f91af118-9710-48b0-893b-c41d53a4088b\") " pod="openstack/dnsmasq-dns-59cf4bdb65-c5s42" Oct 13 17:42:41 crc kubenswrapper[4720]: I1013 17:42:41.737146 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f91af118-9710-48b0-893b-c41d53a4088b-ovsdbserver-nb\") pod \"dnsmasq-dns-59cf4bdb65-c5s42\" (UID: \"f91af118-9710-48b0-893b-c41d53a4088b\") " pod="openstack/dnsmasq-dns-59cf4bdb65-c5s42" Oct 13 17:42:41 crc kubenswrapper[4720]: I1013 17:42:41.737381 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f91af118-9710-48b0-893b-c41d53a4088b-dns-swift-storage-0\") pod \"dnsmasq-dns-59cf4bdb65-c5s42\" (UID: \"f91af118-9710-48b0-893b-c41d53a4088b\") " 
pod="openstack/dnsmasq-dns-59cf4bdb65-c5s42" Oct 13 17:42:41 crc kubenswrapper[4720]: I1013 17:42:41.737463 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f91af118-9710-48b0-893b-c41d53a4088b-ovsdbserver-sb\") pod \"dnsmasq-dns-59cf4bdb65-c5s42\" (UID: \"f91af118-9710-48b0-893b-c41d53a4088b\") " pod="openstack/dnsmasq-dns-59cf4bdb65-c5s42" Oct 13 17:42:41 crc kubenswrapper[4720]: I1013 17:42:41.839070 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f91af118-9710-48b0-893b-c41d53a4088b-dns-svc\") pod \"dnsmasq-dns-59cf4bdb65-c5s42\" (UID: \"f91af118-9710-48b0-893b-c41d53a4088b\") " pod="openstack/dnsmasq-dns-59cf4bdb65-c5s42" Oct 13 17:42:41 crc kubenswrapper[4720]: I1013 17:42:41.839122 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdwxr\" (UniqueName: \"kubernetes.io/projected/f91af118-9710-48b0-893b-c41d53a4088b-kube-api-access-gdwxr\") pod \"dnsmasq-dns-59cf4bdb65-c5s42\" (UID: \"f91af118-9710-48b0-893b-c41d53a4088b\") " pod="openstack/dnsmasq-dns-59cf4bdb65-c5s42" Oct 13 17:42:41 crc kubenswrapper[4720]: I1013 17:42:41.839220 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f91af118-9710-48b0-893b-c41d53a4088b-ovsdbserver-nb\") pod \"dnsmasq-dns-59cf4bdb65-c5s42\" (UID: \"f91af118-9710-48b0-893b-c41d53a4088b\") " pod="openstack/dnsmasq-dns-59cf4bdb65-c5s42" Oct 13 17:42:41 crc kubenswrapper[4720]: I1013 17:42:41.839286 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f91af118-9710-48b0-893b-c41d53a4088b-dns-swift-storage-0\") pod \"dnsmasq-dns-59cf4bdb65-c5s42\" (UID: \"f91af118-9710-48b0-893b-c41d53a4088b\") " pod="openstack/dnsmasq-dns-59cf4bdb65-c5s42" Oct 13 17:42:41 crc kubenswrapper[4720]: I1013 17:42:41.839330 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f91af118-9710-48b0-893b-c41d53a4088b-ovsdbserver-sb\") pod \"dnsmasq-dns-59cf4bdb65-c5s42\" (UID: \"f91af118-9710-48b0-893b-c41d53a4088b\") " pod="openstack/dnsmasq-dns-59cf4bdb65-c5s42" Oct 13 17:42:41 crc kubenswrapper[4720]: I1013 17:42:41.839423 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f91af118-9710-48b0-893b-c41d53a4088b-config\") pod \"dnsmasq-dns-59cf4bdb65-c5s42\" (UID: \"f91af118-9710-48b0-893b-c41d53a4088b\") " pod="openstack/dnsmasq-dns-59cf4bdb65-c5s42" Oct 13 17:42:41 crc kubenswrapper[4720]: I1013 17:42:41.839972 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f91af118-9710-48b0-893b-c41d53a4088b-dns-svc\") pod \"dnsmasq-dns-59cf4bdb65-c5s42\" (UID: \"f91af118-9710-48b0-893b-c41d53a4088b\") " pod="openstack/dnsmasq-dns-59cf4bdb65-c5s42" Oct 13 17:42:41 crc kubenswrapper[4720]: I1013 17:42:41.840788 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f91af118-9710-48b0-893b-c41d53a4088b-ovsdbserver-nb\") pod \"dnsmasq-dns-59cf4bdb65-c5s42\" (UID: \"f91af118-9710-48b0-893b-c41d53a4088b\") " pod="openstack/dnsmasq-dns-59cf4bdb65-c5s42" Oct 13 17:42:41 crc 
kubenswrapper[4720]: I1013 17:42:41.840845 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f91af118-9710-48b0-893b-c41d53a4088b-ovsdbserver-sb\") pod \"dnsmasq-dns-59cf4bdb65-c5s42\" (UID: \"f91af118-9710-48b0-893b-c41d53a4088b\") " pod="openstack/dnsmasq-dns-59cf4bdb65-c5s42" Oct 13 17:42:41 crc kubenswrapper[4720]: I1013 17:42:41.840851 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f91af118-9710-48b0-893b-c41d53a4088b-dns-swift-storage-0\") pod \"dnsmasq-dns-59cf4bdb65-c5s42\" (UID: \"f91af118-9710-48b0-893b-c41d53a4088b\") " pod="openstack/dnsmasq-dns-59cf4bdb65-c5s42" Oct 13 17:42:41 crc kubenswrapper[4720]: I1013 17:42:41.841632 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f91af118-9710-48b0-893b-c41d53a4088b-config\") pod \"dnsmasq-dns-59cf4bdb65-c5s42\" (UID: \"f91af118-9710-48b0-893b-c41d53a4088b\") " pod="openstack/dnsmasq-dns-59cf4bdb65-c5s42" Oct 13 17:42:41 crc kubenswrapper[4720]: I1013 17:42:41.872128 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdwxr\" (UniqueName: \"kubernetes.io/projected/f91af118-9710-48b0-893b-c41d53a4088b-kube-api-access-gdwxr\") pod \"dnsmasq-dns-59cf4bdb65-c5s42\" (UID: \"f91af118-9710-48b0-893b-c41d53a4088b\") " pod="openstack/dnsmasq-dns-59cf4bdb65-c5s42" Oct 13 17:42:41 crc kubenswrapper[4720]: I1013 17:42:41.878606 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59cf4bdb65-c5s42" Oct 13 17:42:42 crc kubenswrapper[4720]: I1013 17:42:42.409033 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-c5s42"] Oct 13 17:42:43 crc kubenswrapper[4720]: I1013 17:42:43.368151 4720 generic.go:334] "Generic (PLEG): container finished" podID="f91af118-9710-48b0-893b-c41d53a4088b" containerID="6285aaf3345dcb2ef68aca866c9c52b2268548fde172fb6bc2ad78576f0761ee" exitCode=0 Oct 13 17:42:43 crc kubenswrapper[4720]: I1013 17:42:43.368240 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-c5s42" event={"ID":"f91af118-9710-48b0-893b-c41d53a4088b","Type":"ContainerDied","Data":"6285aaf3345dcb2ef68aca866c9c52b2268548fde172fb6bc2ad78576f0761ee"} Oct 13 17:42:43 crc kubenswrapper[4720]: I1013 17:42:43.368575 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-c5s42" event={"ID":"f91af118-9710-48b0-893b-c41d53a4088b","Type":"ContainerStarted","Data":"6285df0b5f0c2a38f28c03b465aa5dda94c86e1ced0af80af277c5e0edac359b"} Oct 13 17:42:44 crc kubenswrapper[4720]: I1013 17:42:44.074865 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 13 17:42:44 crc kubenswrapper[4720]: I1013 17:42:44.076014 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="04262368-d4ca-4f43-9314-ebca3d3d1c2c" containerName="ceilometer-central-agent" containerID="cri-o://bc3042ab103905f3364870ccee11241170f26f67e99474d66367b901f2092858" gracePeriod=30 Oct 13 17:42:44 crc kubenswrapper[4720]: I1013 17:42:44.076095 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="04262368-d4ca-4f43-9314-ebca3d3d1c2c" containerName="sg-core" 
containerID="cri-o://bf91cfeb690458f450f7edfd433a4c0a172b7186713b4b1f2aebd21250136ae1" gracePeriod=30 Oct 13 17:42:44 crc kubenswrapper[4720]: I1013 17:42:44.076107 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="04262368-d4ca-4f43-9314-ebca3d3d1c2c" containerName="ceilometer-notification-agent" containerID="cri-o://88e6b377a0a605d9700b8ff6219c87ab6f52d77ac2e2744d11a141e335dfe0e7" gracePeriod=30 Oct 13 17:42:44 crc kubenswrapper[4720]: I1013 17:42:44.076115 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="04262368-d4ca-4f43-9314-ebca3d3d1c2c" containerName="proxy-httpd" containerID="cri-o://d220c2da27274bd15395c535e6e52136df267a35b901836a6d79975343dc77ef" gracePeriod=30 Oct 13 17:42:44 crc kubenswrapper[4720]: I1013 17:42:44.384855 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-c5s42" event={"ID":"f91af118-9710-48b0-893b-c41d53a4088b","Type":"ContainerStarted","Data":"4718bae9e453353ec74a07590e52d28c08ac31f15c6f0644503c356f0148c20f"} Oct 13 17:42:44 crc kubenswrapper[4720]: I1013 17:42:44.386568 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-59cf4bdb65-c5s42" Oct 13 17:42:44 crc kubenswrapper[4720]: I1013 17:42:44.393026 4720 generic.go:334] "Generic (PLEG): container finished" podID="04262368-d4ca-4f43-9314-ebca3d3d1c2c" containerID="d220c2da27274bd15395c535e6e52136df267a35b901836a6d79975343dc77ef" exitCode=0 Oct 13 17:42:44 crc kubenswrapper[4720]: I1013 17:42:44.393077 4720 generic.go:334] "Generic (PLEG): container finished" podID="04262368-d4ca-4f43-9314-ebca3d3d1c2c" containerID="bf91cfeb690458f450f7edfd433a4c0a172b7186713b4b1f2aebd21250136ae1" exitCode=2 Oct 13 17:42:44 crc kubenswrapper[4720]: I1013 17:42:44.393112 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"04262368-d4ca-4f43-9314-ebca3d3d1c2c","Type":"ContainerDied","Data":"d220c2da27274bd15395c535e6e52136df267a35b901836a6d79975343dc77ef"} Oct 13 17:42:44 crc kubenswrapper[4720]: I1013 17:42:44.393147 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"04262368-d4ca-4f43-9314-ebca3d3d1c2c","Type":"ContainerDied","Data":"bf91cfeb690458f450f7edfd433a4c0a172b7186713b4b1f2aebd21250136ae1"} Oct 13 17:42:44 crc kubenswrapper[4720]: I1013 17:42:44.426293 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-59cf4bdb65-c5s42" podStartSLOduration=3.426268648 podStartE2EDuration="3.426268648s" podCreationTimestamp="2025-10-13 17:42:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:42:44.421644509 +0000 UTC m=+1109.878894671" watchObservedRunningTime="2025-10-13 17:42:44.426268648 +0000 UTC m=+1109.883518800" Oct 13 17:42:44 crc kubenswrapper[4720]: I1013 17:42:44.527757 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 13 17:42:44 crc kubenswrapper[4720]: I1013 17:42:44.528026 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e4d97deb-0f09-435e-9ade-51e87b0ded99" containerName="nova-api-log" containerID="cri-o://e95f4f7bc093f23ec5e46ae6b78b68ef94974c8be0fb00bedd7b7993e9599a32" gracePeriod=30 Oct 13 17:42:44 crc kubenswrapper[4720]: I1013 17:42:44.528268 4720 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e4d97deb-0f09-435e-9ade-51e87b0ded99" containerName="nova-api-api" containerID="cri-o://36037d780cae7a1d82bd53e92e053a1fa3fa934796419c1153acfe6edf3a365a" gracePeriod=30 Oct 13 17:42:45 crc kubenswrapper[4720]: I1013 17:42:45.212323 4720 patch_prober.go:28] interesting pod/machine-config-daemon-htwnl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 17:42:45 crc kubenswrapper[4720]: I1013 17:42:45.212720 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 17:42:45 crc kubenswrapper[4720]: I1013 17:42:45.212772 4720 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" Oct 13 17:42:45 crc kubenswrapper[4720]: I1013 17:42:45.213518 4720 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c1278ace50a45373a8479a2cc48b2ad98ee5d6f328dbb131bbb45680a5894fc3"} pod="openshift-machine-config-operator/machine-config-daemon-htwnl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 13 17:42:45 crc kubenswrapper[4720]: I1013 17:42:45.213572 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" containerName="machine-config-daemon" containerID="cri-o://c1278ace50a45373a8479a2cc48b2ad98ee5d6f328dbb131bbb45680a5894fc3" gracePeriod=600 Oct 13 17:42:45 crc kubenswrapper[4720]: I1013 17:42:45.406279 4720 generic.go:334] "Generic (PLEG): container finished" podID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" containerID="c1278ace50a45373a8479a2cc48b2ad98ee5d6f328dbb131bbb45680a5894fc3" exitCode=0 Oct 13 17:42:45 crc kubenswrapper[4720]: I1013 17:42:45.406349 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" event={"ID":"ce442c80-fcde-4b79-b6f9-f8f25771dfd4","Type":"ContainerDied","Data":"c1278ace50a45373a8479a2cc48b2ad98ee5d6f328dbb131bbb45680a5894fc3"} Oct 13 17:42:45 crc kubenswrapper[4720]: I1013 17:42:45.406387 4720 scope.go:117] "RemoveContainer" containerID="4d6ae9650e2a4d9303f0ed0df57f14a865dd0defb52c4262f76ce0d77b3d80c5" Oct 13 17:42:45 crc kubenswrapper[4720]: I1013 17:42:45.426626 4720 generic.go:334] "Generic (PLEG): container finished" podID="04262368-d4ca-4f43-9314-ebca3d3d1c2c" containerID="88e6b377a0a605d9700b8ff6219c87ab6f52d77ac2e2744d11a141e335dfe0e7" exitCode=0 Oct 13 17:42:45 crc kubenswrapper[4720]: I1013 17:42:45.426658 4720 generic.go:334] "Generic (PLEG): container finished" podID="04262368-d4ca-4f43-9314-ebca3d3d1c2c" containerID="bc3042ab103905f3364870ccee11241170f26f67e99474d66367b901f2092858" exitCode=0 Oct 13 17:42:45 crc kubenswrapper[4720]: I1013 17:42:45.426693 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"04262368-d4ca-4f43-9314-ebca3d3d1c2c","Type":"ContainerDied","Data":"88e6b377a0a605d9700b8ff6219c87ab6f52d77ac2e2744d11a141e335dfe0e7"} Oct 13 17:42:45 crc kubenswrapper[4720]: I1013 17:42:45.426718 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"04262368-d4ca-4f43-9314-ebca3d3d1c2c","Type":"ContainerDied","Data":"bc3042ab103905f3364870ccee11241170f26f67e99474d66367b901f2092858"} Oct 13 17:42:45 crc kubenswrapper[4720]: I1013 17:42:45.429236 4720 generic.go:334] "Generic (PLEG): container finished" podID="e4d97deb-0f09-435e-9ade-51e87b0ded99" containerID="e95f4f7bc093f23ec5e46ae6b78b68ef94974c8be0fb00bedd7b7993e9599a32" exitCode=143 Oct 13 17:42:45 crc kubenswrapper[4720]: I1013 17:42:45.429306 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e4d97deb-0f09-435e-9ade-51e87b0ded99","Type":"ContainerDied","Data":"e95f4f7bc093f23ec5e46ae6b78b68ef94974c8be0fb00bedd7b7993e9599a32"} Oct 13 17:42:45 crc kubenswrapper[4720]: I1013 17:42:45.713826 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 13 17:42:45 crc kubenswrapper[4720]: I1013 17:42:45.828663 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-km844\" (UniqueName: \"kubernetes.io/projected/04262368-d4ca-4f43-9314-ebca3d3d1c2c-kube-api-access-km844\") pod \"04262368-d4ca-4f43-9314-ebca3d3d1c2c\" (UID: \"04262368-d4ca-4f43-9314-ebca3d3d1c2c\") " Oct 13 17:42:45 crc kubenswrapper[4720]: I1013 17:42:45.828720 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04262368-d4ca-4f43-9314-ebca3d3d1c2c-log-httpd\") pod \"04262368-d4ca-4f43-9314-ebca3d3d1c2c\" (UID: \"04262368-d4ca-4f43-9314-ebca3d3d1c2c\") " Oct 13 17:42:45 crc kubenswrapper[4720]: I1013 17:42:45.828757 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04262368-d4ca-4f43-9314-ebca3d3d1c2c-combined-ca-bundle\") pod \"04262368-d4ca-4f43-9314-ebca3d3d1c2c\" (UID: \"04262368-d4ca-4f43-9314-ebca3d3d1c2c\") " Oct 13 17:42:45 crc kubenswrapper[4720]: I1013 17:42:45.828810 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04262368-d4ca-4f43-9314-ebca3d3d1c2c-run-httpd\") pod \"04262368-d4ca-4f43-9314-ebca3d3d1c2c\" (UID: \"04262368-d4ca-4f43-9314-ebca3d3d1c2c\") " Oct 13 17:42:45 crc kubenswrapper[4720]: I1013 17:42:45.828854 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/04262368-d4ca-4f43-9314-ebca3d3d1c2c-ceilometer-tls-certs\") pod \"04262368-d4ca-4f43-9314-ebca3d3d1c2c\" (UID: \"04262368-d4ca-4f43-9314-ebca3d3d1c2c\") " Oct 13 17:42:45 crc kubenswrapper[4720]: I1013 17:42:45.828895 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04262368-d4ca-4f43-9314-ebca3d3d1c2c-scripts\") pod \"04262368-d4ca-4f43-9314-ebca3d3d1c2c\" (UID: \"04262368-d4ca-4f43-9314-ebca3d3d1c2c\") " Oct 13 17:42:45 crc kubenswrapper[4720]: I1013 17:42:45.828911 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04262368-d4ca-4f43-9314-ebca3d3d1c2c-config-data\") pod 
\"04262368-d4ca-4f43-9314-ebca3d3d1c2c\" (UID: \"04262368-d4ca-4f43-9314-ebca3d3d1c2c\") " Oct 13 17:42:45 crc kubenswrapper[4720]: I1013 17:42:45.828943 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/04262368-d4ca-4f43-9314-ebca3d3d1c2c-sg-core-conf-yaml\") pod \"04262368-d4ca-4f43-9314-ebca3d3d1c2c\" (UID: \"04262368-d4ca-4f43-9314-ebca3d3d1c2c\") " Oct 13 17:42:45 crc kubenswrapper[4720]: I1013 17:42:45.830317 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04262368-d4ca-4f43-9314-ebca3d3d1c2c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "04262368-d4ca-4f43-9314-ebca3d3d1c2c" (UID: "04262368-d4ca-4f43-9314-ebca3d3d1c2c"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 17:42:45 crc kubenswrapper[4720]: I1013 17:42:45.830895 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04262368-d4ca-4f43-9314-ebca3d3d1c2c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "04262368-d4ca-4f43-9314-ebca3d3d1c2c" (UID: "04262368-d4ca-4f43-9314-ebca3d3d1c2c"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 17:42:45 crc kubenswrapper[4720]: I1013 17:42:45.837640 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04262368-d4ca-4f43-9314-ebca3d3d1c2c-scripts" (OuterVolumeSpecName: "scripts") pod "04262368-d4ca-4f43-9314-ebca3d3d1c2c" (UID: "04262368-d4ca-4f43-9314-ebca3d3d1c2c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:42:45 crc kubenswrapper[4720]: I1013 17:42:45.842414 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04262368-d4ca-4f43-9314-ebca3d3d1c2c-kube-api-access-km844" (OuterVolumeSpecName: "kube-api-access-km844") pod "04262368-d4ca-4f43-9314-ebca3d3d1c2c" (UID: "04262368-d4ca-4f43-9314-ebca3d3d1c2c"). InnerVolumeSpecName "kube-api-access-km844". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:42:45 crc kubenswrapper[4720]: I1013 17:42:45.881436 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04262368-d4ca-4f43-9314-ebca3d3d1c2c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "04262368-d4ca-4f43-9314-ebca3d3d1c2c" (UID: "04262368-d4ca-4f43-9314-ebca3d3d1c2c"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:42:45 crc kubenswrapper[4720]: I1013 17:42:45.896475 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04262368-d4ca-4f43-9314-ebca3d3d1c2c-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "04262368-d4ca-4f43-9314-ebca3d3d1c2c" (UID: "04262368-d4ca-4f43-9314-ebca3d3d1c2c"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:42:45 crc kubenswrapper[4720]: I1013 17:42:45.918390 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04262368-d4ca-4f43-9314-ebca3d3d1c2c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "04262368-d4ca-4f43-9314-ebca3d3d1c2c" (UID: "04262368-d4ca-4f43-9314-ebca3d3d1c2c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:42:45 crc kubenswrapper[4720]: I1013 17:42:45.930946 4720 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/04262368-d4ca-4f43-9314-ebca3d3d1c2c-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 13 17:42:45 crc kubenswrapper[4720]: I1013 17:42:45.930982 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04262368-d4ca-4f43-9314-ebca3d3d1c2c-scripts\") on node \"crc\" DevicePath \"\"" Oct 13 17:42:45 crc kubenswrapper[4720]: I1013 17:42:45.930992 4720 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/04262368-d4ca-4f43-9314-ebca3d3d1c2c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 13 17:42:45 crc kubenswrapper[4720]: I1013 17:42:45.931000 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-km844\" (UniqueName: \"kubernetes.io/projected/04262368-d4ca-4f43-9314-ebca3d3d1c2c-kube-api-access-km844\") on node \"crc\" DevicePath \"\"" Oct 13 17:42:45 crc kubenswrapper[4720]: I1013 17:42:45.931010 4720 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04262368-d4ca-4f43-9314-ebca3d3d1c2c-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 13 17:42:45 crc kubenswrapper[4720]: I1013 17:42:45.931017 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04262368-d4ca-4f43-9314-ebca3d3d1c2c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 17:42:45 crc kubenswrapper[4720]: I1013 17:42:45.931025 4720 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04262368-d4ca-4f43-9314-ebca3d3d1c2c-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 13 17:42:45 crc kubenswrapper[4720]: I1013 17:42:45.977346 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04262368-d4ca-4f43-9314-ebca3d3d1c2c-config-data" (OuterVolumeSpecName: "config-data") pod "04262368-d4ca-4f43-9314-ebca3d3d1c2c" (UID: "04262368-d4ca-4f43-9314-ebca3d3d1c2c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:42:46 crc kubenswrapper[4720]: I1013 17:42:46.032939 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04262368-d4ca-4f43-9314-ebca3d3d1c2c-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 17:42:46 crc kubenswrapper[4720]: I1013 17:42:46.440273 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" event={"ID":"ce442c80-fcde-4b79-b6f9-f8f25771dfd4","Type":"ContainerStarted","Data":"ee3a413fb70fae37f659cff124cc855967143ab2544217b22584306b14bb1b9a"} Oct 13 17:42:46 crc kubenswrapper[4720]: I1013 17:42:46.445577 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 13 17:42:46 crc kubenswrapper[4720]: I1013 17:42:46.457348 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"04262368-d4ca-4f43-9314-ebca3d3d1c2c","Type":"ContainerDied","Data":"200e96e22bc36631ca64a5bf13b3ac91a40325a18ecf9f5f1f8a1bd0d60980f4"} Oct 13 17:42:46 crc kubenswrapper[4720]: I1013 17:42:46.457428 4720 scope.go:117] "RemoveContainer" containerID="d220c2da27274bd15395c535e6e52136df267a35b901836a6d79975343dc77ef" Oct 13 17:42:46 crc kubenswrapper[4720]: I1013 17:42:46.492888 4720 scope.go:117] "RemoveContainer" containerID="bf91cfeb690458f450f7edfd433a4c0a172b7186713b4b1f2aebd21250136ae1" Oct 13 17:42:46 crc kubenswrapper[4720]: I1013 17:42:46.503253 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 13 17:42:46 crc kubenswrapper[4720]: I1013 17:42:46.510828 4720 scope.go:117] "RemoveContainer" containerID="88e6b377a0a605d9700b8ff6219c87ab6f52d77ac2e2744d11a141e335dfe0e7" Oct 13 17:42:46 crc kubenswrapper[4720]: I1013 17:42:46.520606 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 13 17:42:46 crc kubenswrapper[4720]: I1013 17:42:46.530432 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 13 17:42:46 crc kubenswrapper[4720]: E1013 17:42:46.530917 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04262368-d4ca-4f43-9314-ebca3d3d1c2c" containerName="ceilometer-central-agent" Oct 13 17:42:46 crc kubenswrapper[4720]: I1013 17:42:46.530940 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="04262368-d4ca-4f43-9314-ebca3d3d1c2c" containerName="ceilometer-central-agent" Oct 13 17:42:46 crc kubenswrapper[4720]: E1013 17:42:46.530950 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04262368-d4ca-4f43-9314-ebca3d3d1c2c" containerName="proxy-httpd" Oct 13 17:42:46 crc kubenswrapper[4720]: I1013 17:42:46.530956 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="04262368-d4ca-4f43-9314-ebca3d3d1c2c" containerName="proxy-httpd" Oct 13 17:42:46 crc kubenswrapper[4720]: E1013 17:42:46.530972 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04262368-d4ca-4f43-9314-ebca3d3d1c2c" containerName="ceilometer-notification-agent" Oct 13 17:42:46 crc kubenswrapper[4720]: I1013 17:42:46.530979 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="04262368-d4ca-4f43-9314-ebca3d3d1c2c" containerName="ceilometer-notification-agent" Oct 13 17:42:46 crc kubenswrapper[4720]: E1013 17:42:46.530989 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04262368-d4ca-4f43-9314-ebca3d3d1c2c" containerName="sg-core" Oct 13 17:42:46 crc kubenswrapper[4720]: I1013 17:42:46.530995 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="04262368-d4ca-4f43-9314-ebca3d3d1c2c" containerName="sg-core" Oct 13 17:42:46 crc kubenswrapper[4720]: I1013 17:42:46.531226 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="04262368-d4ca-4f43-9314-ebca3d3d1c2c" containerName="sg-core" Oct 13 17:42:46 crc kubenswrapper[4720]: I1013 17:42:46.531244 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="04262368-d4ca-4f43-9314-ebca3d3d1c2c" containerName="proxy-httpd" Oct 13 17:42:46 crc kubenswrapper[4720]: I1013 17:42:46.531254 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="04262368-d4ca-4f43-9314-ebca3d3d1c2c" containerName="ceilometer-central-agent" Oct 13 
17:42:46 crc kubenswrapper[4720]: I1013 17:42:46.531263 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="04262368-d4ca-4f43-9314-ebca3d3d1c2c" containerName="ceilometer-notification-agent" Oct 13 17:42:46 crc kubenswrapper[4720]: I1013 17:42:46.532887 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 13 17:42:46 crc kubenswrapper[4720]: I1013 17:42:46.534771 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 13 17:42:46 crc kubenswrapper[4720]: I1013 17:42:46.535548 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 13 17:42:46 crc kubenswrapper[4720]: I1013 17:42:46.535691 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Oct 13 17:42:46 crc kubenswrapper[4720]: I1013 17:42:46.545741 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 13 17:42:46 crc kubenswrapper[4720]: I1013 17:42:46.545888 4720 scope.go:117] "RemoveContainer" containerID="bc3042ab103905f3364870ccee11241170f26f67e99474d66367b901f2092858" Oct 13 17:42:46 crc kubenswrapper[4720]: I1013 17:42:46.642290 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18a26891-fa3b-4433-a74a-592bef9b8241-config-data\") pod \"ceilometer-0\" (UID: \"18a26891-fa3b-4433-a74a-592bef9b8241\") " pod="openstack/ceilometer-0" Oct 13 17:42:46 crc kubenswrapper[4720]: I1013 17:42:46.642343 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18a26891-fa3b-4433-a74a-592bef9b8241-scripts\") pod \"ceilometer-0\" (UID: \"18a26891-fa3b-4433-a74a-592bef9b8241\") " pod="openstack/ceilometer-0" Oct 13 17:42:46 crc kubenswrapper[4720]: I1013 17:42:46.642364 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/18a26891-fa3b-4433-a74a-592bef9b8241-log-httpd\") pod \"ceilometer-0\" (UID: \"18a26891-fa3b-4433-a74a-592bef9b8241\") " pod="openstack/ceilometer-0" Oct 13 17:42:46 crc kubenswrapper[4720]: I1013 17:42:46.642608 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztv28\" (UniqueName: \"kubernetes.io/projected/18a26891-fa3b-4433-a74a-592bef9b8241-kube-api-access-ztv28\") pod \"ceilometer-0\" (UID: \"18a26891-fa3b-4433-a74a-592bef9b8241\") " pod="openstack/ceilometer-0" Oct 13 17:42:46 crc kubenswrapper[4720]: I1013 17:42:46.642860 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/18a26891-fa3b-4433-a74a-592bef9b8241-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"18a26891-fa3b-4433-a74a-592bef9b8241\") " pod="openstack/ceilometer-0" Oct 13 17:42:46 crc kubenswrapper[4720]: I1013 17:42:46.643020 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/18a26891-fa3b-4433-a74a-592bef9b8241-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"18a26891-fa3b-4433-a74a-592bef9b8241\") " pod="openstack/ceilometer-0" Oct 13 17:42:46 crc kubenswrapper[4720]: I1013 17:42:46.643075 4720 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18a26891-fa3b-4433-a74a-592bef9b8241-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"18a26891-fa3b-4433-a74a-592bef9b8241\") " pod="openstack/ceilometer-0" Oct 13 17:42:46 crc kubenswrapper[4720]: I1013 17:42:46.643126 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/18a26891-fa3b-4433-a74a-592bef9b8241-run-httpd\") pod \"ceilometer-0\" (UID: \"18a26891-fa3b-4433-a74a-592bef9b8241\") " pod="openstack/ceilometer-0" Oct 13 17:42:46 crc kubenswrapper[4720]: I1013 17:42:46.655660 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Oct 13 17:42:46 crc kubenswrapper[4720]: I1013 17:42:46.675789 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Oct 13 17:42:46 crc kubenswrapper[4720]: I1013 17:42:46.744767 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/18a26891-fa3b-4433-a74a-592bef9b8241-log-httpd\") pod \"ceilometer-0\" (UID: \"18a26891-fa3b-4433-a74a-592bef9b8241\") " pod="openstack/ceilometer-0" Oct 13 17:42:46 crc kubenswrapper[4720]: I1013 17:42:46.744861 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztv28\" (UniqueName: \"kubernetes.io/projected/18a26891-fa3b-4433-a74a-592bef9b8241-kube-api-access-ztv28\") pod \"ceilometer-0\" (UID: \"18a26891-fa3b-4433-a74a-592bef9b8241\") " pod="openstack/ceilometer-0" Oct 13 17:42:46 crc kubenswrapper[4720]: I1013 17:42:46.744918 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/18a26891-fa3b-4433-a74a-592bef9b8241-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"18a26891-fa3b-4433-a74a-592bef9b8241\") " pod="openstack/ceilometer-0" Oct 13 17:42:46 crc kubenswrapper[4720]: I1013 17:42:46.744954 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/18a26891-fa3b-4433-a74a-592bef9b8241-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"18a26891-fa3b-4433-a74a-592bef9b8241\") " pod="openstack/ceilometer-0" Oct 13 17:42:46 crc kubenswrapper[4720]: I1013 17:42:46.744984 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18a26891-fa3b-4433-a74a-592bef9b8241-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"18a26891-fa3b-4433-a74a-592bef9b8241\") " pod="openstack/ceilometer-0" Oct 13 17:42:46 crc kubenswrapper[4720]: I1013 17:42:46.744999 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/18a26891-fa3b-4433-a74a-592bef9b8241-run-httpd\") pod \"ceilometer-0\" (UID: \"18a26891-fa3b-4433-a74a-592bef9b8241\") " pod="openstack/ceilometer-0" Oct 13 17:42:46 crc kubenswrapper[4720]: I1013 17:42:46.745044 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18a26891-fa3b-4433-a74a-592bef9b8241-config-data\") pod \"ceilometer-0\" (UID: \"18a26891-fa3b-4433-a74a-592bef9b8241\") " pod="openstack/ceilometer-0" Oct 13 17:42:46 crc kubenswrapper[4720]: 
I1013 17:42:46.745080 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18a26891-fa3b-4433-a74a-592bef9b8241-scripts\") pod \"ceilometer-0\" (UID: \"18a26891-fa3b-4433-a74a-592bef9b8241\") " pod="openstack/ceilometer-0" Oct 13 17:42:46 crc kubenswrapper[4720]: I1013 17:42:46.745152 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/18a26891-fa3b-4433-a74a-592bef9b8241-log-httpd\") pod \"ceilometer-0\" (UID: \"18a26891-fa3b-4433-a74a-592bef9b8241\") " pod="openstack/ceilometer-0" Oct 13 17:42:46 crc kubenswrapper[4720]: I1013 17:42:46.746395 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/18a26891-fa3b-4433-a74a-592bef9b8241-run-httpd\") pod \"ceilometer-0\" (UID: \"18a26891-fa3b-4433-a74a-592bef9b8241\") " pod="openstack/ceilometer-0" Oct 13 17:42:46 crc kubenswrapper[4720]: I1013 17:42:46.749126 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/18a26891-fa3b-4433-a74a-592bef9b8241-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"18a26891-fa3b-4433-a74a-592bef9b8241\") " pod="openstack/ceilometer-0" Oct 13 17:42:46 crc kubenswrapper[4720]: I1013 17:42:46.751631 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/18a26891-fa3b-4433-a74a-592bef9b8241-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"18a26891-fa3b-4433-a74a-592bef9b8241\") " pod="openstack/ceilometer-0" Oct 13 17:42:46 crc kubenswrapper[4720]: I1013 17:42:46.751838 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/18a26891-fa3b-4433-a74a-592bef9b8241-scripts\") pod \"ceilometer-0\" (UID: \"18a26891-fa3b-4433-a74a-592bef9b8241\") " pod="openstack/ceilometer-0" Oct 13 17:42:46 crc kubenswrapper[4720]: I1013 17:42:46.752381 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18a26891-fa3b-4433-a74a-592bef9b8241-config-data\") pod \"ceilometer-0\" (UID: \"18a26891-fa3b-4433-a74a-592bef9b8241\") " pod="openstack/ceilometer-0" Oct 13 17:42:46 crc kubenswrapper[4720]: I1013 17:42:46.754131 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18a26891-fa3b-4433-a74a-592bef9b8241-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"18a26891-fa3b-4433-a74a-592bef9b8241\") " pod="openstack/ceilometer-0" Oct 13 17:42:46 crc kubenswrapper[4720]: I1013 17:42:46.766243 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztv28\" (UniqueName: \"kubernetes.io/projected/18a26891-fa3b-4433-a74a-592bef9b8241-kube-api-access-ztv28\") pod \"ceilometer-0\" (UID: \"18a26891-fa3b-4433-a74a-592bef9b8241\") " pod="openstack/ceilometer-0" Oct 13 17:42:46 crc kubenswrapper[4720]: I1013 17:42:46.884624 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 13 17:42:47 crc kubenswrapper[4720]: I1013 17:42:47.179912 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04262368-d4ca-4f43-9314-ebca3d3d1c2c" path="/var/lib/kubelet/pods/04262368-d4ca-4f43-9314-ebca3d3d1c2c/volumes" Oct 13 17:42:47 crc kubenswrapper[4720]: I1013 17:42:47.328737 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 13 17:42:47 crc kubenswrapper[4720]: W1013 17:42:47.340545 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod18a26891_fa3b_4433_a74a_592bef9b8241.slice/crio-13e7ef5e12101081d0a582b545241cdb13370d8d56c7d5bf158b8b5abc805ddb WatchSource:0}: Error finding container 13e7ef5e12101081d0a582b545241cdb13370d8d56c7d5bf158b8b5abc805ddb: Status 404 returned error can't find the container with id 13e7ef5e12101081d0a582b545241cdb13370d8d56c7d5bf158b8b5abc805ddb Oct 13 17:42:47 crc kubenswrapper[4720]: I1013 17:42:47.460726 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"18a26891-fa3b-4433-a74a-592bef9b8241","Type":"ContainerStarted","Data":"13e7ef5e12101081d0a582b545241cdb13370d8d56c7d5bf158b8b5abc805ddb"} Oct 13 17:42:47 crc kubenswrapper[4720]: I1013 17:42:47.487908 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Oct 13 17:42:47 crc kubenswrapper[4720]: I1013 17:42:47.727288 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-j4lmp"] Oct 13 17:42:47 crc kubenswrapper[4720]: I1013 17:42:47.728433 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-j4lmp" Oct 13 17:42:47 crc kubenswrapper[4720]: I1013 17:42:47.730941 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Oct 13 17:42:47 crc kubenswrapper[4720]: I1013 17:42:47.731393 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Oct 13 17:42:47 crc kubenswrapper[4720]: I1013 17:42:47.753492 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-j4lmp"] Oct 13 17:42:47 crc kubenswrapper[4720]: I1013 17:42:47.867530 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f2ee4f4-e8f0-41c3-9ec4-0832701c48a8-config-data\") pod \"nova-cell1-cell-mapping-j4lmp\" (UID: \"3f2ee4f4-e8f0-41c3-9ec4-0832701c48a8\") " pod="openstack/nova-cell1-cell-mapping-j4lmp" Oct 13 17:42:47 crc kubenswrapper[4720]: I1013 17:42:47.867599 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f2ee4f4-e8f0-41c3-9ec4-0832701c48a8-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-j4lmp\" (UID: \"3f2ee4f4-e8f0-41c3-9ec4-0832701c48a8\") " pod="openstack/nova-cell1-cell-mapping-j4lmp" Oct 13 17:42:47 crc kubenswrapper[4720]: I1013 17:42:47.867816 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkflq\" (UniqueName: \"kubernetes.io/projected/3f2ee4f4-e8f0-41c3-9ec4-0832701c48a8-kube-api-access-wkflq\") pod \"nova-cell1-cell-mapping-j4lmp\" (UID: \"3f2ee4f4-e8f0-41c3-9ec4-0832701c48a8\") " 
pod="openstack/nova-cell1-cell-mapping-j4lmp" Oct 13 17:42:47 crc kubenswrapper[4720]: I1013 17:42:47.867961 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f2ee4f4-e8f0-41c3-9ec4-0832701c48a8-scripts\") pod \"nova-cell1-cell-mapping-j4lmp\" (UID: \"3f2ee4f4-e8f0-41c3-9ec4-0832701c48a8\") " pod="openstack/nova-cell1-cell-mapping-j4lmp" Oct 13 17:42:47 crc kubenswrapper[4720]: I1013 17:42:47.969494 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f2ee4f4-e8f0-41c3-9ec4-0832701c48a8-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-j4lmp\" (UID: \"3f2ee4f4-e8f0-41c3-9ec4-0832701c48a8\") " pod="openstack/nova-cell1-cell-mapping-j4lmp" Oct 13 17:42:47 crc kubenswrapper[4720]: I1013 17:42:47.969575 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkflq\" (UniqueName: \"kubernetes.io/projected/3f2ee4f4-e8f0-41c3-9ec4-0832701c48a8-kube-api-access-wkflq\") pod \"nova-cell1-cell-mapping-j4lmp\" (UID: \"3f2ee4f4-e8f0-41c3-9ec4-0832701c48a8\") " pod="openstack/nova-cell1-cell-mapping-j4lmp" Oct 13 17:42:47 crc kubenswrapper[4720]: I1013 17:42:47.969620 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f2ee4f4-e8f0-41c3-9ec4-0832701c48a8-scripts\") pod \"nova-cell1-cell-mapping-j4lmp\" (UID: \"3f2ee4f4-e8f0-41c3-9ec4-0832701c48a8\") " pod="openstack/nova-cell1-cell-mapping-j4lmp" Oct 13 17:42:47 crc kubenswrapper[4720]: I1013 17:42:47.969686 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f2ee4f4-e8f0-41c3-9ec4-0832701c48a8-config-data\") pod \"nova-cell1-cell-mapping-j4lmp\" (UID: \"3f2ee4f4-e8f0-41c3-9ec4-0832701c48a8\") " pod="openstack/nova-cell1-cell-mapping-j4lmp" Oct 13 17:42:47 crc kubenswrapper[4720]: I1013 17:42:47.973598 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f2ee4f4-e8f0-41c3-9ec4-0832701c48a8-config-data\") pod \"nova-cell1-cell-mapping-j4lmp\" (UID: \"3f2ee4f4-e8f0-41c3-9ec4-0832701c48a8\") " pod="openstack/nova-cell1-cell-mapping-j4lmp" Oct 13 17:42:47 crc kubenswrapper[4720]: I1013 17:42:47.977750 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f2ee4f4-e8f0-41c3-9ec4-0832701c48a8-scripts\") pod \"nova-cell1-cell-mapping-j4lmp\" (UID: \"3f2ee4f4-e8f0-41c3-9ec4-0832701c48a8\") " pod="openstack/nova-cell1-cell-mapping-j4lmp" Oct 13 17:42:47 crc kubenswrapper[4720]: I1013 17:42:47.978104 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f2ee4f4-e8f0-41c3-9ec4-0832701c48a8-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-j4lmp\" (UID: \"3f2ee4f4-e8f0-41c3-9ec4-0832701c48a8\") " pod="openstack/nova-cell1-cell-mapping-j4lmp" Oct 13 17:42:47 crc kubenswrapper[4720]: I1013 17:42:47.990471 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkflq\" (UniqueName: \"kubernetes.io/projected/3f2ee4f4-e8f0-41c3-9ec4-0832701c48a8-kube-api-access-wkflq\") pod \"nova-cell1-cell-mapping-j4lmp\" (UID: \"3f2ee4f4-e8f0-41c3-9ec4-0832701c48a8\") " pod="openstack/nova-cell1-cell-mapping-j4lmp" Oct 13 17:42:48 crc 
kubenswrapper[4720]: I1013 17:42:48.049608 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 13 17:42:48 crc kubenswrapper[4720]: I1013 17:42:48.109929 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-j4lmp" Oct 13 17:42:48 crc kubenswrapper[4720]: I1013 17:42:48.176551 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4d97deb-0f09-435e-9ade-51e87b0ded99-logs\") pod \"e4d97deb-0f09-435e-9ade-51e87b0ded99\" (UID: \"e4d97deb-0f09-435e-9ade-51e87b0ded99\") " Oct 13 17:42:48 crc kubenswrapper[4720]: I1013 17:42:48.176826 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4d97deb-0f09-435e-9ade-51e87b0ded99-combined-ca-bundle\") pod \"e4d97deb-0f09-435e-9ade-51e87b0ded99\" (UID: \"e4d97deb-0f09-435e-9ade-51e87b0ded99\") " Oct 13 17:42:48 crc kubenswrapper[4720]: I1013 17:42:48.176857 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4d97deb-0f09-435e-9ade-51e87b0ded99-config-data\") pod \"e4d97deb-0f09-435e-9ade-51e87b0ded99\" (UID: \"e4d97deb-0f09-435e-9ade-51e87b0ded99\") " Oct 13 17:42:48 crc kubenswrapper[4720]: I1013 17:42:48.176989 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ccvr2\" (UniqueName: \"kubernetes.io/projected/e4d97deb-0f09-435e-9ade-51e87b0ded99-kube-api-access-ccvr2\") pod \"e4d97deb-0f09-435e-9ade-51e87b0ded99\" (UID: \"e4d97deb-0f09-435e-9ade-51e87b0ded99\") " Oct 13 17:42:48 crc kubenswrapper[4720]: I1013 17:42:48.178810 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4d97deb-0f09-435e-9ade-51e87b0ded99-logs" (OuterVolumeSpecName: "logs") pod "e4d97deb-0f09-435e-9ade-51e87b0ded99" (UID: "e4d97deb-0f09-435e-9ade-51e87b0ded99"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 17:42:48 crc kubenswrapper[4720]: I1013 17:42:48.181306 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4d97deb-0f09-435e-9ade-51e87b0ded99-kube-api-access-ccvr2" (OuterVolumeSpecName: "kube-api-access-ccvr2") pod "e4d97deb-0f09-435e-9ade-51e87b0ded99" (UID: "e4d97deb-0f09-435e-9ade-51e87b0ded99"). InnerVolumeSpecName "kube-api-access-ccvr2". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:42:48 crc kubenswrapper[4720]: I1013 17:42:48.227556 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4d97deb-0f09-435e-9ade-51e87b0ded99-config-data" (OuterVolumeSpecName: "config-data") pod "e4d97deb-0f09-435e-9ade-51e87b0ded99" (UID: "e4d97deb-0f09-435e-9ade-51e87b0ded99"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:42:48 crc kubenswrapper[4720]: I1013 17:42:48.230226 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4d97deb-0f09-435e-9ade-51e87b0ded99-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e4d97deb-0f09-435e-9ade-51e87b0ded99" (UID: "e4d97deb-0f09-435e-9ade-51e87b0ded99"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:42:48 crc kubenswrapper[4720]: I1013 17:42:48.279536 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ccvr2\" (UniqueName: \"kubernetes.io/projected/e4d97deb-0f09-435e-9ade-51e87b0ded99-kube-api-access-ccvr2\") on node \"crc\" DevicePath \"\"" Oct 13 17:42:48 crc kubenswrapper[4720]: I1013 17:42:48.279582 4720 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4d97deb-0f09-435e-9ade-51e87b0ded99-logs\") on node \"crc\" DevicePath \"\"" Oct 13 17:42:48 crc kubenswrapper[4720]: I1013 17:42:48.279596 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4d97deb-0f09-435e-9ade-51e87b0ded99-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 17:42:48 crc kubenswrapper[4720]: I1013 17:42:48.279608 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4d97deb-0f09-435e-9ade-51e87b0ded99-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 17:42:48 crc kubenswrapper[4720]: I1013 17:42:48.473614 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"18a26891-fa3b-4433-a74a-592bef9b8241","Type":"ContainerStarted","Data":"030a4101ce48255ef858b81e3a8e9e9fa4a4a3a81ff70bfe14fb353f9bec1450"} Oct 13 17:42:48 crc kubenswrapper[4720]: I1013 17:42:48.475992 4720 generic.go:334] "Generic (PLEG): container finished" podID="e4d97deb-0f09-435e-9ade-51e87b0ded99" containerID="36037d780cae7a1d82bd53e92e053a1fa3fa934796419c1153acfe6edf3a365a" exitCode=0 Oct 13 17:42:48 crc kubenswrapper[4720]: I1013 17:42:48.476084 4720 util.go:48] "No ready sandbox for pod can be found. 
Oct 13 17:42:48 crc kubenswrapper[4720]: I1013 17:42:48.476084 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Oct 13 17:42:48 crc kubenswrapper[4720]: I1013 17:42:48.476143 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e4d97deb-0f09-435e-9ade-51e87b0ded99","Type":"ContainerDied","Data":"36037d780cae7a1d82bd53e92e053a1fa3fa934796419c1153acfe6edf3a365a"}
Oct 13 17:42:48 crc kubenswrapper[4720]: I1013 17:42:48.476171 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e4d97deb-0f09-435e-9ade-51e87b0ded99","Type":"ContainerDied","Data":"767acbfadc9636c298430fbdc876c480cd6bf791dacc569e12b1e84dfe0aa2a4"}
Oct 13 17:42:48 crc kubenswrapper[4720]: I1013 17:42:48.476255 4720 scope.go:117] "RemoveContainer" containerID="36037d780cae7a1d82bd53e92e053a1fa3fa934796419c1153acfe6edf3a365a"
Oct 13 17:42:48 crc kubenswrapper[4720]: I1013 17:42:48.504324 4720 scope.go:117] "RemoveContainer" containerID="e95f4f7bc093f23ec5e46ae6b78b68ef94974c8be0fb00bedd7b7993e9599a32"
Oct 13 17:42:48 crc kubenswrapper[4720]: I1013 17:42:48.517126 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Oct 13 17:42:48 crc kubenswrapper[4720]: I1013 17:42:48.539105 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Oct 13 17:42:48 crc kubenswrapper[4720]: I1013 17:42:48.548826 4720 scope.go:117] "RemoveContainer" containerID="36037d780cae7a1d82bd53e92e053a1fa3fa934796419c1153acfe6edf3a365a"
Oct 13 17:42:48 crc kubenswrapper[4720]: I1013 17:42:48.554854 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Oct 13 17:42:48 crc kubenswrapper[4720]: E1013 17:42:48.555325 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4d97deb-0f09-435e-9ade-51e87b0ded99" containerName="nova-api-api"
Oct 13 17:42:48 crc kubenswrapper[4720]: I1013 17:42:48.555344 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4d97deb-0f09-435e-9ade-51e87b0ded99" containerName="nova-api-api"
Oct 13 17:42:48 crc kubenswrapper[4720]: E1013 17:42:48.555358 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4d97deb-0f09-435e-9ade-51e87b0ded99" containerName="nova-api-log"
Oct 13 17:42:48 crc kubenswrapper[4720]: I1013 17:42:48.555375 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4d97deb-0f09-435e-9ade-51e87b0ded99" containerName="nova-api-log"
Oct 13 17:42:48 crc kubenswrapper[4720]: I1013 17:42:48.555548 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4d97deb-0f09-435e-9ade-51e87b0ded99" containerName="nova-api-log"
Oct 13 17:42:48 crc kubenswrapper[4720]: I1013 17:42:48.555583 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4d97deb-0f09-435e-9ade-51e87b0ded99" containerName="nova-api-api"
Oct 13 17:42:48 crc kubenswrapper[4720]: I1013 17:42:48.556857 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Oct 13 17:42:48 crc kubenswrapper[4720]: I1013 17:42:48.559161 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Oct 13 17:42:48 crc kubenswrapper[4720]: I1013 17:42:48.561308 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Oct 13 17:42:48 crc kubenswrapper[4720]: I1013 17:42:48.562223 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Oct 13 17:42:48 crc kubenswrapper[4720]: E1013 17:42:48.564091 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36037d780cae7a1d82bd53e92e053a1fa3fa934796419c1153acfe6edf3a365a\": container with ID starting with 36037d780cae7a1d82bd53e92e053a1fa3fa934796419c1153acfe6edf3a365a not found: ID does not exist" containerID="36037d780cae7a1d82bd53e92e053a1fa3fa934796419c1153acfe6edf3a365a"
Oct 13 17:42:48 crc kubenswrapper[4720]: I1013 17:42:48.564144 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36037d780cae7a1d82bd53e92e053a1fa3fa934796419c1153acfe6edf3a365a"} err="failed to get container status \"36037d780cae7a1d82bd53e92e053a1fa3fa934796419c1153acfe6edf3a365a\": rpc error: code = NotFound desc = could not find container \"36037d780cae7a1d82bd53e92e053a1fa3fa934796419c1153acfe6edf3a365a\": container with ID starting with 36037d780cae7a1d82bd53e92e053a1fa3fa934796419c1153acfe6edf3a365a not found: ID does not exist"
Oct 13 17:42:48 crc kubenswrapper[4720]: I1013 17:42:48.564177 4720 scope.go:117] "RemoveContainer" containerID="e95f4f7bc093f23ec5e46ae6b78b68ef94974c8be0fb00bedd7b7993e9599a32"
Oct 13 17:42:48 crc kubenswrapper[4720]: E1013 17:42:48.564665 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e95f4f7bc093f23ec5e46ae6b78b68ef94974c8be0fb00bedd7b7993e9599a32\": container with ID starting with e95f4f7bc093f23ec5e46ae6b78b68ef94974c8be0fb00bedd7b7993e9599a32 not found: ID does not exist" containerID="e95f4f7bc093f23ec5e46ae6b78b68ef94974c8be0fb00bedd7b7993e9599a32"
Oct 13 17:42:48 crc kubenswrapper[4720]: I1013 17:42:48.564702 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e95f4f7bc093f23ec5e46ae6b78b68ef94974c8be0fb00bedd7b7993e9599a32"} err="failed to get container status \"e95f4f7bc093f23ec5e46ae6b78b68ef94974c8be0fb00bedd7b7993e9599a32\": rpc error: code = NotFound desc = could not find container \"e95f4f7bc093f23ec5e46ae6b78b68ef94974c8be0fb00bedd7b7993e9599a32\": container with ID starting with e95f4f7bc093f23ec5e46ae6b78b68ef94974c8be0fb00bedd7b7993e9599a32 not found: ID does not exist"
Oct 13 17:42:48 crc kubenswrapper[4720]: I1013 17:42:48.581837 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Oct 13 17:42:48 crc kubenswrapper[4720]: I1013 17:42:48.669328 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-j4lmp"]
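The E-level "ContainerStatus from runtime service failed ... NotFound" entries followed by "DeleteContainer returned error" are the kubelet re-deleting containers that CRI-O has already purged; for cleanup, NotFound is the desired end state, so the error is effectively benign. A sketch of that idempotent-removal pattern (illustrative; the runtime interface and sentinel error are assumptions, not kubelet source):

// remove_idempotent.go: a sketch of the NotFound-tolerant removal the log
// shows. A second RemoveContainer for an already-purged container is treated
// as success, not a failure.
package main

import (
	"errors"
	"fmt"
)

// errNotFound stands in for the CRI "rpc error: code = NotFound" case.
var errNotFound = errors.New("container not found")

type containerRuntime interface {
	RemoveContainer(id string) error
}

// removeIfPresent swallows NotFound so cleanup stays idempotent.
func removeIfPresent(rt containerRuntime, id string) error {
	if err := rt.RemoveContainer(id); err != nil {
		if errors.Is(err, errNotFound) {
			return nil // already gone: exactly the postcondition we wanted
		}
		return fmt.Errorf("remove %s: %w", id, err)
	}
	return nil
}

type fakeRuntime struct{ present map[string]bool }

func (f *fakeRuntime) RemoveContainer(id string) error {
	if !f.present[id] {
		return errNotFound
	}
	delete(f.present, id)
	return nil
}

func main() {
	rt := &fakeRuntime{present: map[string]bool{"36037d78": true}}
	fmt.Println(removeIfPresent(rt, "36037d78")) // <nil>: removed
	fmt.Println(removeIfPresent(rt, "36037d78")) // <nil>: NotFound swallowed
}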
Oct 13 17:42:48 crc kubenswrapper[4720]: W1013 17:42:48.672293 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3f2ee4f4_e8f0_41c3_9ec4_0832701c48a8.slice/crio-0ba84564edbd6b295777c7cbd02dcddacfa1c953c81479d233f0e2250663cb0f WatchSource:0}: Error finding container 0ba84564edbd6b295777c7cbd02dcddacfa1c953c81479d233f0e2250663cb0f: Status 404 returned error can't find the container with id 0ba84564edbd6b295777c7cbd02dcddacfa1c953c81479d233f0e2250663cb0f
Oct 13 17:42:48 crc kubenswrapper[4720]: I1013 17:42:48.688157 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5908f1a2-9abb-49bf-a29a-0ed02d078198-internal-tls-certs\") pod \"nova-api-0\" (UID: \"5908f1a2-9abb-49bf-a29a-0ed02d078198\") " pod="openstack/nova-api-0"
Oct 13 17:42:48 crc kubenswrapper[4720]: I1013 17:42:48.688323 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5908f1a2-9abb-49bf-a29a-0ed02d078198-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5908f1a2-9abb-49bf-a29a-0ed02d078198\") " pod="openstack/nova-api-0"
Oct 13 17:42:48 crc kubenswrapper[4720]: I1013 17:42:48.688421 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gxtt\" (UniqueName: \"kubernetes.io/projected/5908f1a2-9abb-49bf-a29a-0ed02d078198-kube-api-access-6gxtt\") pod \"nova-api-0\" (UID: \"5908f1a2-9abb-49bf-a29a-0ed02d078198\") " pod="openstack/nova-api-0"
Oct 13 17:42:48 crc kubenswrapper[4720]: I1013 17:42:48.688506 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5908f1a2-9abb-49bf-a29a-0ed02d078198-logs\") pod \"nova-api-0\" (UID: \"5908f1a2-9abb-49bf-a29a-0ed02d078198\") " pod="openstack/nova-api-0"
Oct 13 17:42:48 crc kubenswrapper[4720]: I1013 17:42:48.688632 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5908f1a2-9abb-49bf-a29a-0ed02d078198-public-tls-certs\") pod \"nova-api-0\" (UID: \"5908f1a2-9abb-49bf-a29a-0ed02d078198\") " pod="openstack/nova-api-0"
Oct 13 17:42:48 crc kubenswrapper[4720]: I1013 17:42:48.688718 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5908f1a2-9abb-49bf-a29a-0ed02d078198-config-data\") pod \"nova-api-0\" (UID: \"5908f1a2-9abb-49bf-a29a-0ed02d078198\") " pod="openstack/nova-api-0"
Oct 13 17:42:48 crc kubenswrapper[4720]: I1013 17:42:48.790419 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5908f1a2-9abb-49bf-a29a-0ed02d078198-public-tls-certs\") pod \"nova-api-0\" (UID: \"5908f1a2-9abb-49bf-a29a-0ed02d078198\") " pod="openstack/nova-api-0"
Oct 13 17:42:48 crc kubenswrapper[4720]: I1013 17:42:48.790467 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5908f1a2-9abb-49bf-a29a-0ed02d078198-config-data\") pod \"nova-api-0\" (UID: \"5908f1a2-9abb-49bf-a29a-0ed02d078198\") " pod="openstack/nova-api-0"
Oct 13 17:42:48 crc kubenswrapper[4720]: I1013 17:42:48.790555 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5908f1a2-9abb-49bf-a29a-0ed02d078198-internal-tls-certs\") pod \"nova-api-0\" (UID: \"5908f1a2-9abb-49bf-a29a-0ed02d078198\") " pod="openstack/nova-api-0"
Oct 13 17:42:48 crc kubenswrapper[4720]: I1013 17:42:48.790573 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5908f1a2-9abb-49bf-a29a-0ed02d078198-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5908f1a2-9abb-49bf-a29a-0ed02d078198\") " pod="openstack/nova-api-0"
Oct 13 17:42:48 crc kubenswrapper[4720]: I1013 17:42:48.790605 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gxtt\" (UniqueName: \"kubernetes.io/projected/5908f1a2-9abb-49bf-a29a-0ed02d078198-kube-api-access-6gxtt\") pod \"nova-api-0\" (UID: \"5908f1a2-9abb-49bf-a29a-0ed02d078198\") " pod="openstack/nova-api-0"
Oct 13 17:42:48 crc kubenswrapper[4720]: I1013 17:42:48.790638 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5908f1a2-9abb-49bf-a29a-0ed02d078198-logs\") pod \"nova-api-0\" (UID: \"5908f1a2-9abb-49bf-a29a-0ed02d078198\") " pod="openstack/nova-api-0"
Oct 13 17:42:48 crc kubenswrapper[4720]: I1013 17:42:48.791540 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5908f1a2-9abb-49bf-a29a-0ed02d078198-logs\") pod \"nova-api-0\" (UID: \"5908f1a2-9abb-49bf-a29a-0ed02d078198\") " pod="openstack/nova-api-0"
Oct 13 17:42:48 crc kubenswrapper[4720]: I1013 17:42:48.795069 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5908f1a2-9abb-49bf-a29a-0ed02d078198-public-tls-certs\") pod \"nova-api-0\" (UID: \"5908f1a2-9abb-49bf-a29a-0ed02d078198\") " pod="openstack/nova-api-0"
Oct 13 17:42:48 crc kubenswrapper[4720]: I1013 17:42:48.796687 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5908f1a2-9abb-49bf-a29a-0ed02d078198-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5908f1a2-9abb-49bf-a29a-0ed02d078198\") " pod="openstack/nova-api-0"
Oct 13 17:42:48 crc kubenswrapper[4720]: I1013 17:42:48.797147 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5908f1a2-9abb-49bf-a29a-0ed02d078198-internal-tls-certs\") pod \"nova-api-0\" (UID: \"5908f1a2-9abb-49bf-a29a-0ed02d078198\") " pod="openstack/nova-api-0"
Oct 13 17:42:48 crc kubenswrapper[4720]: I1013 17:42:48.797691 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5908f1a2-9abb-49bf-a29a-0ed02d078198-config-data\") pod \"nova-api-0\" (UID: \"5908f1a2-9abb-49bf-a29a-0ed02d078198\") " pod="openstack/nova-api-0"
Oct 13 17:42:48 crc kubenswrapper[4720]: I1013 17:42:48.808410 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gxtt\" (UniqueName: \"kubernetes.io/projected/5908f1a2-9abb-49bf-a29a-0ed02d078198-kube-api-access-6gxtt\") pod \"nova-api-0\" (UID: \"5908f1a2-9abb-49bf-a29a-0ed02d078198\") " pod="openstack/nova-api-0"
Oct 13 17:42:48 crc kubenswrapper[4720]: I1013 17:42:48.878530 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Oct 13 17:42:49 crc kubenswrapper[4720]: I1013 17:42:49.178451 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4d97deb-0f09-435e-9ade-51e87b0ded99" path="/var/lib/kubelet/pods/e4d97deb-0f09-435e-9ade-51e87b0ded99/volumes"
Oct 13 17:42:49 crc kubenswrapper[4720]: I1013 17:42:49.395726 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Oct 13 17:42:49 crc kubenswrapper[4720]: I1013 17:42:49.487626 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"18a26891-fa3b-4433-a74a-592bef9b8241","Type":"ContainerStarted","Data":"04482989777aa1c8eb6aa1620ad33fb3e0ff1d07706d497059a67916ac804b9b"}
Oct 13 17:42:49 crc kubenswrapper[4720]: I1013 17:42:49.488917 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5908f1a2-9abb-49bf-a29a-0ed02d078198","Type":"ContainerStarted","Data":"946bfa85fcf68e59591b987081fb90c2cfcf772145f7fbb694cd4be9a011e141"}
Oct 13 17:42:49 crc kubenswrapper[4720]: I1013 17:42:49.490690 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-j4lmp" event={"ID":"3f2ee4f4-e8f0-41c3-9ec4-0832701c48a8","Type":"ContainerStarted","Data":"360130da2fdf61ce37b6f94421c1a85ed37eb48dd143de2265564c462b24a013"}
Oct 13 17:42:49 crc kubenswrapper[4720]: I1013 17:42:49.490715 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-j4lmp" event={"ID":"3f2ee4f4-e8f0-41c3-9ec4-0832701c48a8","Type":"ContainerStarted","Data":"0ba84564edbd6b295777c7cbd02dcddacfa1c953c81479d233f0e2250663cb0f"}
Oct 13 17:42:49 crc kubenswrapper[4720]: I1013 17:42:49.512561 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-j4lmp" podStartSLOduration=2.5125379629999998 podStartE2EDuration="2.512537963s" podCreationTimestamp="2025-10-13 17:42:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:42:49.504755413 +0000 UTC m=+1114.962005545" watchObservedRunningTime="2025-10-13 17:42:49.512537963 +0000 UTC m=+1114.969788095"
Oct 13 17:42:50 crc kubenswrapper[4720]: I1013 17:42:50.502692 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5908f1a2-9abb-49bf-a29a-0ed02d078198","Type":"ContainerStarted","Data":"68df067edff710760cad825f31e3e44d9c54d23ea2ef31e0d2f1a52c6d37676a"}
Oct 13 17:42:50 crc kubenswrapper[4720]: I1013 17:42:50.503075 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5908f1a2-9abb-49bf-a29a-0ed02d078198","Type":"ContainerStarted","Data":"2896a37806e468ae4a786e50976bd32dde2c05fb3fa6385701111fd4d9d36ca1"}
Oct 13 17:42:50 crc kubenswrapper[4720]: I1013 17:42:50.505511 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"18a26891-fa3b-4433-a74a-592bef9b8241","Type":"ContainerStarted","Data":"b85a27d03898093f0cf66acae5c53d1d07e983867551d5717fcb168948c58a20"}
Oct 13 17:42:50 crc kubenswrapper[4720]: I1013 17:42:50.537443 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.537416914 podStartE2EDuration="2.537416914s" podCreationTimestamp="2025-10-13 17:42:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:42:50.527613931 +0000 UTC m=+1115.984864083" watchObservedRunningTime="2025-10-13 17:42:50.537416914 +0000 UTC m=+1115.994667056"
Oct 13 17:42:51 crc kubenswrapper[4720]: I1013 17:42:51.880477 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-59cf4bdb65-c5s42"
Oct 13 17:42:51 crc kubenswrapper[4720]: I1013 17:42:51.948883 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-trkfz"]
Oct 13 17:42:51 crc kubenswrapper[4720]: I1013 17:42:51.949094 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-845d6d6f59-trkfz" podUID="2eefeacb-a660-43cc-8091-718f61e76f26" containerName="dnsmasq-dns" containerID="cri-o://41af5150c8e788e6a22ac99aea402a479b26a387d339a3c7a5055ddb60c60453" gracePeriod=10
Oct 13 17:42:52 crc kubenswrapper[4720]: I1013 17:42:52.401230 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-trkfz"
Oct 13 17:42:52 crc kubenswrapper[4720]: I1013 17:42:52.486473 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2eefeacb-a660-43cc-8091-718f61e76f26-ovsdbserver-sb\") pod \"2eefeacb-a660-43cc-8091-718f61e76f26\" (UID: \"2eefeacb-a660-43cc-8091-718f61e76f26\") "
Oct 13 17:42:52 crc kubenswrapper[4720]: I1013 17:42:52.486597 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2eefeacb-a660-43cc-8091-718f61e76f26-config\") pod \"2eefeacb-a660-43cc-8091-718f61e76f26\" (UID: \"2eefeacb-a660-43cc-8091-718f61e76f26\") "
Oct 13 17:42:52 crc kubenswrapper[4720]: I1013 17:42:52.486657 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2eefeacb-a660-43cc-8091-718f61e76f26-dns-swift-storage-0\") pod \"2eefeacb-a660-43cc-8091-718f61e76f26\" (UID: \"2eefeacb-a660-43cc-8091-718f61e76f26\") "
Oct 13 17:42:52 crc kubenswrapper[4720]: I1013 17:42:52.486801 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2r2p\" (UniqueName: \"kubernetes.io/projected/2eefeacb-a660-43cc-8091-718f61e76f26-kube-api-access-x2r2p\") pod \"2eefeacb-a660-43cc-8091-718f61e76f26\" (UID: \"2eefeacb-a660-43cc-8091-718f61e76f26\") "
Oct 13 17:42:52 crc kubenswrapper[4720]: I1013 17:42:52.487335 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2eefeacb-a660-43cc-8091-718f61e76f26-dns-svc\") pod \"2eefeacb-a660-43cc-8091-718f61e76f26\" (UID: \"2eefeacb-a660-43cc-8091-718f61e76f26\") "
Oct 13 17:42:52 crc kubenswrapper[4720]: I1013 17:42:52.487359 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2eefeacb-a660-43cc-8091-718f61e76f26-ovsdbserver-nb\") pod \"2eefeacb-a660-43cc-8091-718f61e76f26\" (UID: \"2eefeacb-a660-43cc-8091-718f61e76f26\") "
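The pod_startup_latency_tracker entries above carry the SLO arithmetic inline: podStartE2EDuration spans podCreationTimestamp to the watch-observed running time (2.512537963s for the cell-mapping job), and the zeroed pulling timestamps indicate no image pull contributed to it. A sketch (assumed tooling, not kubelet code) that extracts those durations and compares them to a budget:

// slo_durations.go: a sketch that pulls podStartE2EDuration out of
// "Observed pod startup duration" entries and flags pods that took longer
// than a chosen budget to start. The budget value is illustrative.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"time"
)

// Matches e.g. pod="openstack/nova-api-0" ... podStartE2EDuration="2.537416914s"
var re = regexp.MustCompile(`pod="([^"]+)".*podStartE2EDuration="([^"]+)"`)

func main() {
	const budget = 5 * time.Second // illustrative SLO budget
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
	for sc.Scan() {
		m := re.FindStringSubmatch(sc.Text())
		if m == nil {
			continue
		}
		d, err := time.ParseDuration(m[2]) // "2.537416914s" parses directly
		if err != nil {
			continue
		}
		status := "ok"
		if d > budget {
			status = "over budget"
		}
		fmt.Printf("%-45s started in %v (%s)\n", m[1], d, status)
	}
}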
Oct 13 17:42:52 crc kubenswrapper[4720]: I1013 17:42:52.490930 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2eefeacb-a660-43cc-8091-718f61e76f26-kube-api-access-x2r2p" (OuterVolumeSpecName: "kube-api-access-x2r2p") pod "2eefeacb-a660-43cc-8091-718f61e76f26" (UID: "2eefeacb-a660-43cc-8091-718f61e76f26"). InnerVolumeSpecName "kube-api-access-x2r2p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 13 17:42:52 crc kubenswrapper[4720]: I1013 17:42:52.532859 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2eefeacb-a660-43cc-8091-718f61e76f26-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2eefeacb-a660-43cc-8091-718f61e76f26" (UID: "2eefeacb-a660-43cc-8091-718f61e76f26"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 13 17:42:52 crc kubenswrapper[4720]: I1013 17:42:52.541360 4720 generic.go:334] "Generic (PLEG): container finished" podID="2eefeacb-a660-43cc-8091-718f61e76f26" containerID="41af5150c8e788e6a22ac99aea402a479b26a387d339a3c7a5055ddb60c60453" exitCode=0
Oct 13 17:42:52 crc kubenswrapper[4720]: I1013 17:42:52.541466 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-trkfz" event={"ID":"2eefeacb-a660-43cc-8091-718f61e76f26","Type":"ContainerDied","Data":"41af5150c8e788e6a22ac99aea402a479b26a387d339a3c7a5055ddb60c60453"}
Oct 13 17:42:52 crc kubenswrapper[4720]: I1013 17:42:52.541512 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-trkfz" event={"ID":"2eefeacb-a660-43cc-8091-718f61e76f26","Type":"ContainerDied","Data":"b610de700cdc246d205fa7bde4851fefc086d8cf4dc554b3bb0248e02934e940"}
Oct 13 17:42:52 crc kubenswrapper[4720]: I1013 17:42:52.541535 4720 scope.go:117] "RemoveContainer" containerID="41af5150c8e788e6a22ac99aea402a479b26a387d339a3c7a5055ddb60c60453"
Oct 13 17:42:52 crc kubenswrapper[4720]: I1013 17:42:52.541716 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-trkfz"
Oct 13 17:42:52 crc kubenswrapper[4720]: I1013 17:42:52.548691 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2eefeacb-a660-43cc-8091-718f61e76f26-config" (OuterVolumeSpecName: "config") pod "2eefeacb-a660-43cc-8091-718f61e76f26" (UID: "2eefeacb-a660-43cc-8091-718f61e76f26"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 13 17:42:52 crc kubenswrapper[4720]: I1013 17:42:52.551988 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"18a26891-fa3b-4433-a74a-592bef9b8241","Type":"ContainerStarted","Data":"2312e7340b8a0cd11c5227f720e3af3ae3827e71412d87427d8d778fcf73ec91"}
Oct 13 17:42:52 crc kubenswrapper[4720]: I1013 17:42:52.553320 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Oct 13 17:42:52 crc kubenswrapper[4720]: I1013 17:42:52.554601 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2eefeacb-a660-43cc-8091-718f61e76f26-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2eefeacb-a660-43cc-8091-718f61e76f26" (UID: "2eefeacb-a660-43cc-8091-718f61e76f26"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 13 17:42:52 crc kubenswrapper[4720]: I1013 17:42:52.570969 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2eefeacb-a660-43cc-8091-718f61e76f26-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2eefeacb-a660-43cc-8091-718f61e76f26" (UID: "2eefeacb-a660-43cc-8091-718f61e76f26"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 13 17:42:52 crc kubenswrapper[4720]: I1013 17:42:52.584814 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.446001289 podStartE2EDuration="6.584791941s" podCreationTimestamp="2025-10-13 17:42:46 +0000 UTC" firstStartedPulling="2025-10-13 17:42:47.342759477 +0000 UTC m=+1112.800009649" lastFinishedPulling="2025-10-13 17:42:51.481550169 +0000 UTC m=+1116.938800301" observedRunningTime="2025-10-13 17:42:52.579517865 +0000 UTC m=+1118.036768027" watchObservedRunningTime="2025-10-13 17:42:52.584791941 +0000 UTC m=+1118.042042083"
Oct 13 17:42:52 crc kubenswrapper[4720]: I1013 17:42:52.586766 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2eefeacb-a660-43cc-8091-718f61e76f26-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2eefeacb-a660-43cc-8091-718f61e76f26" (UID: "2eefeacb-a660-43cc-8091-718f61e76f26"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 13 17:42:52 crc kubenswrapper[4720]: I1013 17:42:52.589600 4720 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2eefeacb-a660-43cc-8091-718f61e76f26-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Oct 13 17:42:52 crc kubenswrapper[4720]: I1013 17:42:52.589631 4720 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2eefeacb-a660-43cc-8091-718f61e76f26-config\") on node \"crc\" DevicePath \"\""
Oct 13 17:42:52 crc kubenswrapper[4720]: I1013 17:42:52.589641 4720 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2eefeacb-a660-43cc-8091-718f61e76f26-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Oct 13 17:42:52 crc kubenswrapper[4720]: I1013 17:42:52.589650 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2r2p\" (UniqueName: \"kubernetes.io/projected/2eefeacb-a660-43cc-8091-718f61e76f26-kube-api-access-x2r2p\") on node \"crc\" DevicePath \"\""
Oct 13 17:42:52 crc kubenswrapper[4720]: I1013 17:42:52.589659 4720 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2eefeacb-a660-43cc-8091-718f61e76f26-dns-svc\") on node \"crc\" DevicePath \"\""
Oct 13 17:42:52 crc kubenswrapper[4720]: I1013 17:42:52.589668 4720 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2eefeacb-a660-43cc-8091-718f61e76f26-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Oct 13 17:42:52 crc kubenswrapper[4720]: I1013 17:42:52.654131 4720 scope.go:117] "RemoveContainer" containerID="82f7066f84536a9fa0604513e10a98f4cae63c0c63a06c95f9eb54be138df06f"
Oct 13 17:42:52 crc kubenswrapper[4720]: I1013 17:42:52.680461 4720 scope.go:117] "RemoveContainer" containerID="41af5150c8e788e6a22ac99aea402a479b26a387d339a3c7a5055ddb60c60453"
Oct 13 17:42:52 crc kubenswrapper[4720]: E1013 17:42:52.680903 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41af5150c8e788e6a22ac99aea402a479b26a387d339a3c7a5055ddb60c60453\": container with ID starting with 41af5150c8e788e6a22ac99aea402a479b26a387d339a3c7a5055ddb60c60453 not found: ID does not exist" containerID="41af5150c8e788e6a22ac99aea402a479b26a387d339a3c7a5055ddb60c60453"
Oct 13 17:42:52 crc kubenswrapper[4720]: I1013 17:42:52.680939 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41af5150c8e788e6a22ac99aea402a479b26a387d339a3c7a5055ddb60c60453"} err="failed to get container status \"41af5150c8e788e6a22ac99aea402a479b26a387d339a3c7a5055ddb60c60453\": rpc error: code = NotFound desc = could not find container \"41af5150c8e788e6a22ac99aea402a479b26a387d339a3c7a5055ddb60c60453\": container with ID starting with 41af5150c8e788e6a22ac99aea402a479b26a387d339a3c7a5055ddb60c60453 not found: ID does not exist"
Oct 13 17:42:52 crc kubenswrapper[4720]: I1013 17:42:52.680966 4720 scope.go:117] "RemoveContainer" containerID="82f7066f84536a9fa0604513e10a98f4cae63c0c63a06c95f9eb54be138df06f"
Oct 13 17:42:52 crc kubenswrapper[4720]: E1013 17:42:52.681413 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82f7066f84536a9fa0604513e10a98f4cae63c0c63a06c95f9eb54be138df06f\": container with ID starting with 82f7066f84536a9fa0604513e10a98f4cae63c0c63a06c95f9eb54be138df06f not found: ID does not exist" containerID="82f7066f84536a9fa0604513e10a98f4cae63c0c63a06c95f9eb54be138df06f"
Oct 13 17:42:52 crc kubenswrapper[4720]: I1013 17:42:52.681446 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82f7066f84536a9fa0604513e10a98f4cae63c0c63a06c95f9eb54be138df06f"} err="failed to get container status \"82f7066f84536a9fa0604513e10a98f4cae63c0c63a06c95f9eb54be138df06f\": rpc error: code = NotFound desc = could not find container \"82f7066f84536a9fa0604513e10a98f4cae63c0c63a06c95f9eb54be138df06f\": container with ID starting with 82f7066f84536a9fa0604513e10a98f4cae63c0c63a06c95f9eb54be138df06f not found: ID does not exist"
Oct 13 17:42:52 crc kubenswrapper[4720]: I1013 17:42:52.882642 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-trkfz"]
Oct 13 17:42:52 crc kubenswrapper[4720]: I1013 17:42:52.894435 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-trkfz"]
Oct 13 17:42:53 crc kubenswrapper[4720]: I1013 17:42:53.178777 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2eefeacb-a660-43cc-8091-718f61e76f26" path="/var/lib/kubelet/pods/2eefeacb-a660-43cc-8091-718f61e76f26/volumes"
Oct 13 17:42:54 crc kubenswrapper[4720]: I1013 17:42:54.578275 4720 generic.go:334] "Generic (PLEG): container finished" podID="3f2ee4f4-e8f0-41c3-9ec4-0832701c48a8" containerID="360130da2fdf61ce37b6f94421c1a85ed37eb48dd143de2265564c462b24a013" exitCode=0
Oct 13 17:42:54 crc kubenswrapper[4720]: I1013 17:42:54.578318 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-j4lmp" event={"ID":"3f2ee4f4-e8f0-41c3-9ec4-0832701c48a8","Type":"ContainerDied","Data":"360130da2fdf61ce37b6f94421c1a85ed37eb48dd143de2265564c462b24a013"}
Oct 13 17:42:56 crc kubenswrapper[4720]: I1013 17:42:56.029222 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-j4lmp"
Oct 13 17:42:56 crc kubenswrapper[4720]: I1013 17:42:56.174486 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f2ee4f4-e8f0-41c3-9ec4-0832701c48a8-scripts\") pod \"3f2ee4f4-e8f0-41c3-9ec4-0832701c48a8\" (UID: \"3f2ee4f4-e8f0-41c3-9ec4-0832701c48a8\") "
Oct 13 17:42:56 crc kubenswrapper[4720]: I1013 17:42:56.174733 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wkflq\" (UniqueName: \"kubernetes.io/projected/3f2ee4f4-e8f0-41c3-9ec4-0832701c48a8-kube-api-access-wkflq\") pod \"3f2ee4f4-e8f0-41c3-9ec4-0832701c48a8\" (UID: \"3f2ee4f4-e8f0-41c3-9ec4-0832701c48a8\") "
Oct 13 17:42:56 crc kubenswrapper[4720]: I1013 17:42:56.174923 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f2ee4f4-e8f0-41c3-9ec4-0832701c48a8-combined-ca-bundle\") pod \"3f2ee4f4-e8f0-41c3-9ec4-0832701c48a8\" (UID: \"3f2ee4f4-e8f0-41c3-9ec4-0832701c48a8\") "
Oct 13 17:42:56 crc kubenswrapper[4720]: I1013 17:42:56.175036 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f2ee4f4-e8f0-41c3-9ec4-0832701c48a8-config-data\") pod \"3f2ee4f4-e8f0-41c3-9ec4-0832701c48a8\" (UID: \"3f2ee4f4-e8f0-41c3-9ec4-0832701c48a8\") "
Oct 13 17:42:56 crc kubenswrapper[4720]: I1013 17:42:56.180082 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f2ee4f4-e8f0-41c3-9ec4-0832701c48a8-kube-api-access-wkflq" (OuterVolumeSpecName: "kube-api-access-wkflq") pod "3f2ee4f4-e8f0-41c3-9ec4-0832701c48a8" (UID: "3f2ee4f4-e8f0-41c3-9ec4-0832701c48a8"). InnerVolumeSpecName "kube-api-access-wkflq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 13 17:42:56 crc kubenswrapper[4720]: I1013 17:42:56.180674 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f2ee4f4-e8f0-41c3-9ec4-0832701c48a8-scripts" (OuterVolumeSpecName: "scripts") pod "3f2ee4f4-e8f0-41c3-9ec4-0832701c48a8" (UID: "3f2ee4f4-e8f0-41c3-9ec4-0832701c48a8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 17:42:56 crc kubenswrapper[4720]: I1013 17:42:56.200374 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f2ee4f4-e8f0-41c3-9ec4-0832701c48a8-config-data" (OuterVolumeSpecName: "config-data") pod "3f2ee4f4-e8f0-41c3-9ec4-0832701c48a8" (UID: "3f2ee4f4-e8f0-41c3-9ec4-0832701c48a8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 17:42:56 crc kubenswrapper[4720]: I1013 17:42:56.226118 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f2ee4f4-e8f0-41c3-9ec4-0832701c48a8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3f2ee4f4-e8f0-41c3-9ec4-0832701c48a8" (UID: "3f2ee4f4-e8f0-41c3-9ec4-0832701c48a8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 17:42:56 crc kubenswrapper[4720]: I1013 17:42:56.277651 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wkflq\" (UniqueName: \"kubernetes.io/projected/3f2ee4f4-e8f0-41c3-9ec4-0832701c48a8-kube-api-access-wkflq\") on node \"crc\" DevicePath \"\""
Oct 13 17:42:56 crc kubenswrapper[4720]: I1013 17:42:56.277683 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f2ee4f4-e8f0-41c3-9ec4-0832701c48a8-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 13 17:42:56 crc kubenswrapper[4720]: I1013 17:42:56.277692 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f2ee4f4-e8f0-41c3-9ec4-0832701c48a8-config-data\") on node \"crc\" DevicePath \"\""
Oct 13 17:42:56 crc kubenswrapper[4720]: I1013 17:42:56.277700 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f2ee4f4-e8f0-41c3-9ec4-0832701c48a8-scripts\") on node \"crc\" DevicePath \"\""
Oct 13 17:42:56 crc kubenswrapper[4720]: I1013 17:42:56.605241 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-j4lmp" event={"ID":"3f2ee4f4-e8f0-41c3-9ec4-0832701c48a8","Type":"ContainerDied","Data":"0ba84564edbd6b295777c7cbd02dcddacfa1c953c81479d233f0e2250663cb0f"}
Oct 13 17:42:56 crc kubenswrapper[4720]: I1013 17:42:56.605309 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ba84564edbd6b295777c7cbd02dcddacfa1c953c81479d233f0e2250663cb0f"
Oct 13 17:42:56 crc kubenswrapper[4720]: I1013 17:42:56.605359 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-j4lmp"
Oct 13 17:42:56 crc kubenswrapper[4720]: I1013 17:42:56.831057 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Oct 13 17:42:56 crc kubenswrapper[4720]: I1013 17:42:56.831454 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="5908f1a2-9abb-49bf-a29a-0ed02d078198" containerName="nova-api-log" containerID="cri-o://2896a37806e468ae4a786e50976bd32dde2c05fb3fa6385701111fd4d9d36ca1" gracePeriod=30
Oct 13 17:42:56 crc kubenswrapper[4720]: I1013 17:42:56.832049 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="5908f1a2-9abb-49bf-a29a-0ed02d078198" containerName="nova-api-api" containerID="cri-o://68df067edff710760cad825f31e3e44d9c54d23ea2ef31e0d2f1a52c6d37676a" gracePeriod=30
Oct 13 17:42:56 crc kubenswrapper[4720]: I1013 17:42:56.851711 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Oct 13 17:42:56 crc kubenswrapper[4720]: I1013 17:42:56.852248 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="f65be040-a3da-4f04-a883-995351ba908b" containerName="nova-scheduler-scheduler" containerID="cri-o://babf2748262064788db22e65543d4efd9f77b93e063718a0c45a9e5ba1e36402" gracePeriod=30
Oct 13 17:42:56 crc kubenswrapper[4720]: I1013 17:42:56.871595 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
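The "Killing container with a grace period" entries around here mark the termination path: on SyncLoop DELETE the kubelet asks the runtime to stop each container, allowing gracePeriod seconds (30 for the nova pods, 10 for dnsmasq earlier) before a hard kill. A process-level sketch of the same two-phase stop (illustrative; the real path goes through the CRI, not os/exec):

// graceful_stop.go: a sketch of the two-phase stop the log describes,
// SIGTERM first, SIGKILL only if the process outlives the grace period.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"syscall"
	"time"
)

func stopWithGrace(cmd *exec.Cmd, grace time.Duration) error {
	// Phase 1: polite termination, mirroring gracePeriod=30 in the log.
	if err := cmd.Process.Signal(syscall.SIGTERM); err != nil {
		return err
	}
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	select {
	case err := <-done:
		return err // exited within the grace period
	case <-time.After(grace):
		// Phase 2: the process ignored SIGTERM; force it down.
		_ = cmd.Process.Kill()
		return <-done
	}
}

func main() {
	cmd := exec.Command("sleep", "60")
	if err := cmd.Start(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	err := stopWithGrace(cmd, 2*time.Second)
	fmt.Println("stopped:", err) // "signal: terminated" for a SIGTERM exit
}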
Oct 13 17:42:56 crc kubenswrapper[4720]: I1013 17:42:56.872157 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="4c54bcbf-ffd9-4595-b51d-0efcfcafd52a" containerName="nova-metadata-log" containerID="cri-o://c50a054551174514909d400930146f247c90ce7b9ec89a0aaf7a3bf9c6732252" gracePeriod=30
Oct 13 17:42:56 crc kubenswrapper[4720]: I1013 17:42:56.872275 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="4c54bcbf-ffd9-4595-b51d-0efcfcafd52a" containerName="nova-metadata-metadata" containerID="cri-o://a848c4d10f2b62cf6c973bdde1aa6d0cb1b4a57ef6f5e204223976e823fcb3cf" gracePeriod=30
Oct 13 17:42:57 crc kubenswrapper[4720]: I1013 17:42:57.300360 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-845d6d6f59-trkfz" podUID="2eefeacb-a660-43cc-8091-718f61e76f26" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.193:5353: i/o timeout"
Oct 13 17:42:57 crc kubenswrapper[4720]: I1013 17:42:57.445283 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Oct 13 17:42:57 crc kubenswrapper[4720]: I1013 17:42:57.611704 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5908f1a2-9abb-49bf-a29a-0ed02d078198-config-data\") pod \"5908f1a2-9abb-49bf-a29a-0ed02d078198\" (UID: \"5908f1a2-9abb-49bf-a29a-0ed02d078198\") "
Oct 13 17:42:57 crc kubenswrapper[4720]: I1013 17:42:57.611788 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5908f1a2-9abb-49bf-a29a-0ed02d078198-internal-tls-certs\") pod \"5908f1a2-9abb-49bf-a29a-0ed02d078198\" (UID: \"5908f1a2-9abb-49bf-a29a-0ed02d078198\") "
Oct 13 17:42:57 crc kubenswrapper[4720]: I1013 17:42:57.611837 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6gxtt\" (UniqueName: \"kubernetes.io/projected/5908f1a2-9abb-49bf-a29a-0ed02d078198-kube-api-access-6gxtt\") pod \"5908f1a2-9abb-49bf-a29a-0ed02d078198\" (UID: \"5908f1a2-9abb-49bf-a29a-0ed02d078198\") "
Oct 13 17:42:57 crc kubenswrapper[4720]: I1013 17:42:57.611881 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5908f1a2-9abb-49bf-a29a-0ed02d078198-combined-ca-bundle\") pod \"5908f1a2-9abb-49bf-a29a-0ed02d078198\" (UID: \"5908f1a2-9abb-49bf-a29a-0ed02d078198\") "
Oct 13 17:42:57 crc kubenswrapper[4720]: I1013 17:42:57.611920 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5908f1a2-9abb-49bf-a29a-0ed02d078198-logs\") pod \"5908f1a2-9abb-49bf-a29a-0ed02d078198\" (UID: \"5908f1a2-9abb-49bf-a29a-0ed02d078198\") "
Oct 13 17:42:57 crc kubenswrapper[4720]: I1013 17:42:57.612062 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5908f1a2-9abb-49bf-a29a-0ed02d078198-public-tls-certs\") pod \"5908f1a2-9abb-49bf-a29a-0ed02d078198\" (UID: \"5908f1a2-9abb-49bf-a29a-0ed02d078198\") "
Oct 13 17:42:57 crc kubenswrapper[4720]: I1013 17:42:57.612659 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5908f1a2-9abb-49bf-a29a-0ed02d078198-logs" (OuterVolumeSpecName: "logs") pod "5908f1a2-9abb-49bf-a29a-0ed02d078198" (UID: "5908f1a2-9abb-49bf-a29a-0ed02d078198"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 13 17:42:57 crc kubenswrapper[4720]: I1013 17:42:57.612803 4720 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5908f1a2-9abb-49bf-a29a-0ed02d078198-logs\") on node \"crc\" DevicePath \"\""
Oct 13 17:42:57 crc kubenswrapper[4720]: I1013 17:42:57.616768 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5908f1a2-9abb-49bf-a29a-0ed02d078198-kube-api-access-6gxtt" (OuterVolumeSpecName: "kube-api-access-6gxtt") pod "5908f1a2-9abb-49bf-a29a-0ed02d078198" (UID: "5908f1a2-9abb-49bf-a29a-0ed02d078198"). InnerVolumeSpecName "kube-api-access-6gxtt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 13 17:42:57 crc kubenswrapper[4720]: I1013 17:42:57.619082 4720 generic.go:334] "Generic (PLEG): container finished" podID="4c54bcbf-ffd9-4595-b51d-0efcfcafd52a" containerID="c50a054551174514909d400930146f247c90ce7b9ec89a0aaf7a3bf9c6732252" exitCode=143
Oct 13 17:42:57 crc kubenswrapper[4720]: I1013 17:42:57.619150 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4c54bcbf-ffd9-4595-b51d-0efcfcafd52a","Type":"ContainerDied","Data":"c50a054551174514909d400930146f247c90ce7b9ec89a0aaf7a3bf9c6732252"}
Oct 13 17:42:57 crc kubenswrapper[4720]: I1013 17:42:57.620906 4720 generic.go:334] "Generic (PLEG): container finished" podID="5908f1a2-9abb-49bf-a29a-0ed02d078198" containerID="68df067edff710760cad825f31e3e44d9c54d23ea2ef31e0d2f1a52c6d37676a" exitCode=0
Oct 13 17:42:57 crc kubenswrapper[4720]: I1013 17:42:57.620926 4720 generic.go:334] "Generic (PLEG): container finished" podID="5908f1a2-9abb-49bf-a29a-0ed02d078198" containerID="2896a37806e468ae4a786e50976bd32dde2c05fb3fa6385701111fd4d9d36ca1" exitCode=143
Oct 13 17:42:57 crc kubenswrapper[4720]: I1013 17:42:57.620945 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5908f1a2-9abb-49bf-a29a-0ed02d078198","Type":"ContainerDied","Data":"68df067edff710760cad825f31e3e44d9c54d23ea2ef31e0d2f1a52c6d37676a"}
Oct 13 17:42:57 crc kubenswrapper[4720]: I1013 17:42:57.620971 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5908f1a2-9abb-49bf-a29a-0ed02d078198","Type":"ContainerDied","Data":"2896a37806e468ae4a786e50976bd32dde2c05fb3fa6385701111fd4d9d36ca1"}
Oct 13 17:42:57 crc kubenswrapper[4720]: I1013 17:42:57.620980 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5908f1a2-9abb-49bf-a29a-0ed02d078198","Type":"ContainerDied","Data":"946bfa85fcf68e59591b987081fb90c2cfcf772145f7fbb694cd4be9a011e141"}
Oct 13 17:42:57 crc kubenswrapper[4720]: I1013 17:42:57.620995 4720 scope.go:117] "RemoveContainer" containerID="68df067edff710760cad825f31e3e44d9c54d23ea2ef31e0d2f1a52c6d37676a"
Oct 13 17:42:57 crc kubenswrapper[4720]: I1013 17:42:57.621104 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
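The exit codes above decode against that two-phase stop: nova-api-api left with exitCode=0 (it shut down cleanly when asked), while nova-api-log and nova-metadata-log left with exitCode=143, which is 128 + SIGTERM(15), i.e. terminated by the polite signal inside the grace period. A one-screen classifier (illustrative):

// exit_codes.go: a sketch that decodes the exitCode values seen in the
// "container finished" entries: 143 = 128 + SIGTERM(15), 137 = 128 + SIGKILL(9).
package main

import "fmt"

func describe(code int) string {
	switch {
	case code == 0:
		return "clean exit"
	case code > 128:
		return fmt.Sprintf("killed by signal %d", code-128)
	default:
		return "application error"
	}
}

func main() {
	for _, c := range []int{0, 143, 137} {
		fmt.Printf("exitCode=%d: %s\n", c, describe(c))
	}
}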
Oct 13 17:42:57 crc kubenswrapper[4720]: I1013 17:42:57.642127 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5908f1a2-9abb-49bf-a29a-0ed02d078198-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5908f1a2-9abb-49bf-a29a-0ed02d078198" (UID: "5908f1a2-9abb-49bf-a29a-0ed02d078198"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 17:42:57 crc kubenswrapper[4720]: I1013 17:42:57.647935 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5908f1a2-9abb-49bf-a29a-0ed02d078198-config-data" (OuterVolumeSpecName: "config-data") pod "5908f1a2-9abb-49bf-a29a-0ed02d078198" (UID: "5908f1a2-9abb-49bf-a29a-0ed02d078198"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 17:42:57 crc kubenswrapper[4720]: I1013 17:42:57.659827 4720 scope.go:117] "RemoveContainer" containerID="2896a37806e468ae4a786e50976bd32dde2c05fb3fa6385701111fd4d9d36ca1"
Oct 13 17:42:57 crc kubenswrapper[4720]: I1013 17:42:57.664012 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5908f1a2-9abb-49bf-a29a-0ed02d078198-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "5908f1a2-9abb-49bf-a29a-0ed02d078198" (UID: "5908f1a2-9abb-49bf-a29a-0ed02d078198"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 17:42:57 crc kubenswrapper[4720]: I1013 17:42:57.699388 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5908f1a2-9abb-49bf-a29a-0ed02d078198-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "5908f1a2-9abb-49bf-a29a-0ed02d078198" (UID: "5908f1a2-9abb-49bf-a29a-0ed02d078198"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 17:42:57 crc kubenswrapper[4720]: I1013 17:42:57.699401 4720 scope.go:117] "RemoveContainer" containerID="68df067edff710760cad825f31e3e44d9c54d23ea2ef31e0d2f1a52c6d37676a"
Oct 13 17:42:57 crc kubenswrapper[4720]: E1013 17:42:57.700472 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68df067edff710760cad825f31e3e44d9c54d23ea2ef31e0d2f1a52c6d37676a\": container with ID starting with 68df067edff710760cad825f31e3e44d9c54d23ea2ef31e0d2f1a52c6d37676a not found: ID does not exist" containerID="68df067edff710760cad825f31e3e44d9c54d23ea2ef31e0d2f1a52c6d37676a"
Oct 13 17:42:57 crc kubenswrapper[4720]: I1013 17:42:57.700518 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68df067edff710760cad825f31e3e44d9c54d23ea2ef31e0d2f1a52c6d37676a"} err="failed to get container status \"68df067edff710760cad825f31e3e44d9c54d23ea2ef31e0d2f1a52c6d37676a\": rpc error: code = NotFound desc = could not find container \"68df067edff710760cad825f31e3e44d9c54d23ea2ef31e0d2f1a52c6d37676a\": container with ID starting with 68df067edff710760cad825f31e3e44d9c54d23ea2ef31e0d2f1a52c6d37676a not found: ID does not exist"
Oct 13 17:42:57 crc kubenswrapper[4720]: I1013 17:42:57.700541 4720 scope.go:117] "RemoveContainer" containerID="2896a37806e468ae4a786e50976bd32dde2c05fb3fa6385701111fd4d9d36ca1"
Oct 13 17:42:57 crc kubenswrapper[4720]: E1013 17:42:57.701135 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2896a37806e468ae4a786e50976bd32dde2c05fb3fa6385701111fd4d9d36ca1\": container with ID starting with 2896a37806e468ae4a786e50976bd32dde2c05fb3fa6385701111fd4d9d36ca1 not found: ID does not exist" containerID="2896a37806e468ae4a786e50976bd32dde2c05fb3fa6385701111fd4d9d36ca1"
Oct 13 17:42:57 crc kubenswrapper[4720]: I1013 17:42:57.701157 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2896a37806e468ae4a786e50976bd32dde2c05fb3fa6385701111fd4d9d36ca1"} err="failed to get container status \"2896a37806e468ae4a786e50976bd32dde2c05fb3fa6385701111fd4d9d36ca1\": rpc error: code = NotFound desc = could not find container \"2896a37806e468ae4a786e50976bd32dde2c05fb3fa6385701111fd4d9d36ca1\": container with ID starting with 2896a37806e468ae4a786e50976bd32dde2c05fb3fa6385701111fd4d9d36ca1 not found: ID does not exist"
Oct 13 17:42:57 crc kubenswrapper[4720]: I1013 17:42:57.701171 4720 scope.go:117] "RemoveContainer" containerID="68df067edff710760cad825f31e3e44d9c54d23ea2ef31e0d2f1a52c6d37676a"
Oct 13 17:42:57 crc kubenswrapper[4720]: I1013 17:42:57.701460 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68df067edff710760cad825f31e3e44d9c54d23ea2ef31e0d2f1a52c6d37676a"} err="failed to get container status \"68df067edff710760cad825f31e3e44d9c54d23ea2ef31e0d2f1a52c6d37676a\": rpc error: code = NotFound desc = could not find container \"68df067edff710760cad825f31e3e44d9c54d23ea2ef31e0d2f1a52c6d37676a\": container with ID starting with 68df067edff710760cad825f31e3e44d9c54d23ea2ef31e0d2f1a52c6d37676a not found: ID does not exist"
Oct 13 17:42:57 crc kubenswrapper[4720]: I1013 17:42:57.701474 4720 scope.go:117] "RemoveContainer" containerID="2896a37806e468ae4a786e50976bd32dde2c05fb3fa6385701111fd4d9d36ca1"
Oct 13 17:42:57 crc kubenswrapper[4720]: I1013 17:42:57.701647 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2896a37806e468ae4a786e50976bd32dde2c05fb3fa6385701111fd4d9d36ca1"} err="failed to get container status \"2896a37806e468ae4a786e50976bd32dde2c05fb3fa6385701111fd4d9d36ca1\": rpc error: code = NotFound desc = could not find container \"2896a37806e468ae4a786e50976bd32dde2c05fb3fa6385701111fd4d9d36ca1\": container with ID starting with 2896a37806e468ae4a786e50976bd32dde2c05fb3fa6385701111fd4d9d36ca1 not found: ID does not exist"
Oct 13 17:42:57 crc kubenswrapper[4720]: I1013 17:42:57.714757 4720 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5908f1a2-9abb-49bf-a29a-0ed02d078198-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Oct 13 17:42:57 crc kubenswrapper[4720]: I1013 17:42:57.714788 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6gxtt\" (UniqueName: \"kubernetes.io/projected/5908f1a2-9abb-49bf-a29a-0ed02d078198-kube-api-access-6gxtt\") on node \"crc\" DevicePath \"\""
Oct 13 17:42:57 crc kubenswrapper[4720]: I1013 17:42:57.714798 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5908f1a2-9abb-49bf-a29a-0ed02d078198-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 13 17:42:57 crc kubenswrapper[4720]: I1013 17:42:57.714808 4720 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5908f1a2-9abb-49bf-a29a-0ed02d078198-public-tls-certs\") on node \"crc\" DevicePath \"\""
Oct 13 17:42:57 crc kubenswrapper[4720]: I1013 17:42:57.714816 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5908f1a2-9abb-49bf-a29a-0ed02d078198-config-data\") on node \"crc\" DevicePath \"\""
Oct 13 17:42:58 crc kubenswrapper[4720]: I1013 17:42:58.029451 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Oct 13 17:42:58 crc kubenswrapper[4720]: I1013 17:42:58.038640 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Oct 13 17:42:58 crc kubenswrapper[4720]: I1013 17:42:58.067914 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Oct 13 17:42:58 crc kubenswrapper[4720]: E1013 17:42:58.068512 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2eefeacb-a660-43cc-8091-718f61e76f26" containerName="init"
Oct 13 17:42:58 crc kubenswrapper[4720]: I1013 17:42:58.068542 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="2eefeacb-a660-43cc-8091-718f61e76f26" containerName="init"
Oct 13 17:42:58 crc kubenswrapper[4720]: E1013 17:42:58.068565 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5908f1a2-9abb-49bf-a29a-0ed02d078198" containerName="nova-api-api"
Oct 13 17:42:58 crc kubenswrapper[4720]: I1013 17:42:58.068578 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="5908f1a2-9abb-49bf-a29a-0ed02d078198" containerName="nova-api-api"
Oct 13 17:42:58 crc kubenswrapper[4720]: E1013 17:42:58.068615 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2eefeacb-a660-43cc-8091-718f61e76f26" containerName="dnsmasq-dns"
Oct 13 17:42:58 crc kubenswrapper[4720]: I1013 17:42:58.068628 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="2eefeacb-a660-43cc-8091-718f61e76f26" containerName="dnsmasq-dns"
Oct 13 17:42:58 crc kubenswrapper[4720]: E1013 17:42:58.068659 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5908f1a2-9abb-49bf-a29a-0ed02d078198" containerName="nova-api-log"
Oct 13 17:42:58 crc kubenswrapper[4720]: I1013 17:42:58.068673 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="5908f1a2-9abb-49bf-a29a-0ed02d078198" containerName="nova-api-log"
Oct 13 17:42:58 crc kubenswrapper[4720]: E1013 17:42:58.068713 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f2ee4f4-e8f0-41c3-9ec4-0832701c48a8" containerName="nova-manage"
Oct 13 17:42:58 crc kubenswrapper[4720]: I1013 17:42:58.068724 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f2ee4f4-e8f0-41c3-9ec4-0832701c48a8" containerName="nova-manage"
Oct 13 17:42:58 crc kubenswrapper[4720]: I1013 17:42:58.069080 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="5908f1a2-9abb-49bf-a29a-0ed02d078198" containerName="nova-api-log"
Oct 13 17:42:58 crc kubenswrapper[4720]: I1013 17:42:58.069106 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="5908f1a2-9abb-49bf-a29a-0ed02d078198" containerName="nova-api-api"
Oct 13 17:42:58 crc kubenswrapper[4720]: I1013 17:42:58.069130 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="2eefeacb-a660-43cc-8091-718f61e76f26" containerName="dnsmasq-dns"
Oct 13 17:42:58 crc kubenswrapper[4720]: I1013 17:42:58.069154 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f2ee4f4-e8f0-41c3-9ec4-0832701c48a8" containerName="nova-manage"
Oct 13 17:42:58 crc kubenswrapper[4720]: I1013 17:42:58.073349 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Oct 13 17:42:58 crc kubenswrapper[4720]: I1013 17:42:58.076303 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Oct 13 17:42:58 crc kubenswrapper[4720]: I1013 17:42:58.076581 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Oct 13 17:42:58 crc kubenswrapper[4720]: I1013 17:42:58.080752 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Oct 13 17:42:58 crc kubenswrapper[4720]: I1013 17:42:58.107840 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Oct 13 17:42:58 crc kubenswrapper[4720]: I1013 17:42:58.226089 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d013b32a-b904-46e1-85be-0691c6d981da-config-data\") pod \"nova-api-0\" (UID: \"d013b32a-b904-46e1-85be-0691c6d981da\") " pod="openstack/nova-api-0"
Oct 13 17:42:58 crc kubenswrapper[4720]: I1013 17:42:58.226150 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d013b32a-b904-46e1-85be-0691c6d981da-logs\") pod \"nova-api-0\" (UID: \"d013b32a-b904-46e1-85be-0691c6d981da\") " pod="openstack/nova-api-0"
Oct 13 17:42:58 crc kubenswrapper[4720]: I1013 17:42:58.226379 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d013b32a-b904-46e1-85be-0691c6d981da-internal-tls-certs\") pod \"nova-api-0\" (UID: \"d013b32a-b904-46e1-85be-0691c6d981da\") " pod="openstack/nova-api-0"
Oct 13 17:42:58 crc kubenswrapper[4720]: I1013 17:42:58.226434 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d013b32a-b904-46e1-85be-0691c6d981da-public-tls-certs\") pod \"nova-api-0\" (UID: \"d013b32a-b904-46e1-85be-0691c6d981da\") " pod="openstack/nova-api-0"
Oct 13 17:42:58 crc kubenswrapper[4720]: I1013 17:42:58.226771 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d013b32a-b904-46e1-85be-0691c6d981da-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d013b32a-b904-46e1-85be-0691c6d981da\") " pod="openstack/nova-api-0"
Oct 13 17:42:58 crc kubenswrapper[4720]: I1013 17:42:58.226835 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wsjp\" (UniqueName: \"kubernetes.io/projected/d013b32a-b904-46e1-85be-0691c6d981da-kube-api-access-8wsjp\") pod \"nova-api-0\" (UID: \"d013b32a-b904-46e1-85be-0691c6d981da\") " pod="openstack/nova-api-0"
Oct 13 17:42:58 crc kubenswrapper[4720]: I1013 17:42:58.328382 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d013b32a-b904-46e1-85be-0691c6d981da-internal-tls-certs\") pod \"nova-api-0\" (UID: \"d013b32a-b904-46e1-85be-0691c6d981da\") " pod="openstack/nova-api-0"
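Before admitting the replacement nova-api-0 pod, cpu_manager and memory_manager drop the per-container accounting left behind by pod UIDs the API no longer reports; that is what the RemoveStaleState and "Deleted CPUSet assignment" pairs above record (the E severity notwithstanding, the removal is the expected outcome). A sketch of that reconciliation over a (podUID, containerName)-keyed map (illustrative; the state values and UIDs below are examples, not real assignments):

// stale_state.go: a sketch of the RemoveStaleState idea. Resource-manager
// state is keyed by (podUID, containerName); entries for pods the API no
// longer reports are deleted before new assignments are made.
package main

import "fmt"

type key struct{ podUID, container string }

func removeStaleState(state map[key]string, activePods map[string]bool) {
	for k := range state {
		if !activePods[k.podUID] {
			fmt.Printf("RemoveStaleState: removing container %q (pod %s)\n", k.container, k.podUID)
			delete(state, k)
		}
	}
}

func main() {
	state := map[key]string{
		{"5908f1a2", "nova-api-api"}: "cpuset 0-1", // illustrative assignment
		{"5908f1a2", "nova-api-log"}: "cpuset 2",
		{"d013b32a", "nova-api-api"}: "cpuset 3",
	}
	active := map[string]bool{"d013b32a": true} // the old pod UID is gone
	removeStaleState(state, active)
	fmt.Println("remaining entries:", len(state)) // 1
}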
\"nova-api-0\" (UID: \"d013b32a-b904-46e1-85be-0691c6d981da\") " pod="openstack/nova-api-0" Oct 13 17:42:58 crc kubenswrapper[4720]: I1013 17:42:58.328503 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d013b32a-b904-46e1-85be-0691c6d981da-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d013b32a-b904-46e1-85be-0691c6d981da\") " pod="openstack/nova-api-0" Oct 13 17:42:58 crc kubenswrapper[4720]: I1013 17:42:58.328531 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wsjp\" (UniqueName: \"kubernetes.io/projected/d013b32a-b904-46e1-85be-0691c6d981da-kube-api-access-8wsjp\") pod \"nova-api-0\" (UID: \"d013b32a-b904-46e1-85be-0691c6d981da\") " pod="openstack/nova-api-0" Oct 13 17:42:58 crc kubenswrapper[4720]: I1013 17:42:58.328618 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d013b32a-b904-46e1-85be-0691c6d981da-config-data\") pod \"nova-api-0\" (UID: \"d013b32a-b904-46e1-85be-0691c6d981da\") " pod="openstack/nova-api-0" Oct 13 17:42:58 crc kubenswrapper[4720]: I1013 17:42:58.328665 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d013b32a-b904-46e1-85be-0691c6d981da-logs\") pod \"nova-api-0\" (UID: \"d013b32a-b904-46e1-85be-0691c6d981da\") " pod="openstack/nova-api-0" Oct 13 17:42:58 crc kubenswrapper[4720]: I1013 17:42:58.329410 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d013b32a-b904-46e1-85be-0691c6d981da-logs\") pod \"nova-api-0\" (UID: \"d013b32a-b904-46e1-85be-0691c6d981da\") " pod="openstack/nova-api-0" Oct 13 17:42:58 crc kubenswrapper[4720]: I1013 17:42:58.333430 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d013b32a-b904-46e1-85be-0691c6d981da-config-data\") pod \"nova-api-0\" (UID: \"d013b32a-b904-46e1-85be-0691c6d981da\") " pod="openstack/nova-api-0" Oct 13 17:42:58 crc kubenswrapper[4720]: I1013 17:42:58.333555 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d013b32a-b904-46e1-85be-0691c6d981da-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d013b32a-b904-46e1-85be-0691c6d981da\") " pod="openstack/nova-api-0" Oct 13 17:42:58 crc kubenswrapper[4720]: I1013 17:42:58.339599 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d013b32a-b904-46e1-85be-0691c6d981da-public-tls-certs\") pod \"nova-api-0\" (UID: \"d013b32a-b904-46e1-85be-0691c6d981da\") " pod="openstack/nova-api-0" Oct 13 17:42:58 crc kubenswrapper[4720]: I1013 17:42:58.349559 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wsjp\" (UniqueName: \"kubernetes.io/projected/d013b32a-b904-46e1-85be-0691c6d981da-kube-api-access-8wsjp\") pod \"nova-api-0\" (UID: \"d013b32a-b904-46e1-85be-0691c6d981da\") " pod="openstack/nova-api-0" Oct 13 17:42:58 crc kubenswrapper[4720]: I1013 17:42:58.351669 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d013b32a-b904-46e1-85be-0691c6d981da-internal-tls-certs\") pod \"nova-api-0\" (UID: \"d013b32a-b904-46e1-85be-0691c6d981da\") " 
pod="openstack/nova-api-0" Oct 13 17:42:58 crc kubenswrapper[4720]: I1013 17:42:58.407383 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 13 17:42:58 crc kubenswrapper[4720]: E1013 17:42:58.481607 4720 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="babf2748262064788db22e65543d4efd9f77b93e063718a0c45a9e5ba1e36402" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 13 17:42:58 crc kubenswrapper[4720]: E1013 17:42:58.483958 4720 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="babf2748262064788db22e65543d4efd9f77b93e063718a0c45a9e5ba1e36402" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 13 17:42:58 crc kubenswrapper[4720]: E1013 17:42:58.485830 4720 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="babf2748262064788db22e65543d4efd9f77b93e063718a0c45a9e5ba1e36402" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 13 17:42:58 crc kubenswrapper[4720]: E1013 17:42:58.485918 4720 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="f65be040-a3da-4f04-a883-995351ba908b" containerName="nova-scheduler-scheduler" Oct 13 17:42:59 crc kubenswrapper[4720]: W1013 17:42:58.918478 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd013b32a_b904_46e1_85be_0691c6d981da.slice/crio-5bcc0ed2f0fcc3928c255ef8610c096a2d2855f61c916a9fe1410796068d54f4 WatchSource:0}: Error finding container 5bcc0ed2f0fcc3928c255ef8610c096a2d2855f61c916a9fe1410796068d54f4: Status 404 returned error can't find the container with id 5bcc0ed2f0fcc3928c255ef8610c096a2d2855f61c916a9fe1410796068d54f4 Oct 13 17:42:59 crc kubenswrapper[4720]: I1013 17:42:58.925182 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 13 17:42:59 crc kubenswrapper[4720]: I1013 17:42:59.182179 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5908f1a2-9abb-49bf-a29a-0ed02d078198" path="/var/lib/kubelet/pods/5908f1a2-9abb-49bf-a29a-0ed02d078198/volumes" Oct 13 17:42:59 crc kubenswrapper[4720]: I1013 17:42:59.644032 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d013b32a-b904-46e1-85be-0691c6d981da","Type":"ContainerStarted","Data":"b0c3f2f0c13c94d1642f3a71d727b99e90723332812619d93cd87d99a4ad5d80"} Oct 13 17:42:59 crc kubenswrapper[4720]: I1013 17:42:59.644082 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d013b32a-b904-46e1-85be-0691c6d981da","Type":"ContainerStarted","Data":"158a406662cedbf2334aa00f506bd9cbd2172e103340f3cf1fc852a4d74bcf2c"} Oct 13 17:42:59 crc kubenswrapper[4720]: I1013 17:42:59.644098 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"d013b32a-b904-46e1-85be-0691c6d981da","Type":"ContainerStarted","Data":"5bcc0ed2f0fcc3928c255ef8610c096a2d2855f61c916a9fe1410796068d54f4"} Oct 13 17:42:59 crc kubenswrapper[4720]: I1013 17:42:59.667631 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=1.667605612 podStartE2EDuration="1.667605612s" podCreationTimestamp="2025-10-13 17:42:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:42:59.658265341 +0000 UTC m=+1125.115515483" watchObservedRunningTime="2025-10-13 17:42:59.667605612 +0000 UTC m=+1125.124855754" Oct 13 17:43:00 crc kubenswrapper[4720]: I1013 17:43:00.017810 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="4c54bcbf-ffd9-4595-b51d-0efcfcafd52a" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.197:8775/\": read tcp 10.217.0.2:56794->10.217.0.197:8775: read: connection reset by peer" Oct 13 17:43:00 crc kubenswrapper[4720]: I1013 17:43:00.018135 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="4c54bcbf-ffd9-4595-b51d-0efcfcafd52a" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.197:8775/\": read tcp 10.217.0.2:56804->10.217.0.197:8775: read: connection reset by peer" Oct 13 17:43:00 crc kubenswrapper[4720]: I1013 17:43:00.472295 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 13 17:43:00 crc kubenswrapper[4720]: I1013 17:43:00.576527 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c54bcbf-ffd9-4595-b51d-0efcfcafd52a-logs\") pod \"4c54bcbf-ffd9-4595-b51d-0efcfcafd52a\" (UID: \"4c54bcbf-ffd9-4595-b51d-0efcfcafd52a\") " Oct 13 17:43:00 crc kubenswrapper[4720]: I1013 17:43:00.576725 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c54bcbf-ffd9-4595-b51d-0efcfcafd52a-nova-metadata-tls-certs\") pod \"4c54bcbf-ffd9-4595-b51d-0efcfcafd52a\" (UID: \"4c54bcbf-ffd9-4595-b51d-0efcfcafd52a\") " Oct 13 17:43:00 crc kubenswrapper[4720]: I1013 17:43:00.576797 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zxp47\" (UniqueName: \"kubernetes.io/projected/4c54bcbf-ffd9-4595-b51d-0efcfcafd52a-kube-api-access-zxp47\") pod \"4c54bcbf-ffd9-4595-b51d-0efcfcafd52a\" (UID: \"4c54bcbf-ffd9-4595-b51d-0efcfcafd52a\") " Oct 13 17:43:00 crc kubenswrapper[4720]: I1013 17:43:00.576932 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c54bcbf-ffd9-4595-b51d-0efcfcafd52a-config-data\") pod \"4c54bcbf-ffd9-4595-b51d-0efcfcafd52a\" (UID: \"4c54bcbf-ffd9-4595-b51d-0efcfcafd52a\") " Oct 13 17:43:00 crc kubenswrapper[4720]: I1013 17:43:00.577096 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c54bcbf-ffd9-4595-b51d-0efcfcafd52a-combined-ca-bundle\") pod \"4c54bcbf-ffd9-4595-b51d-0efcfcafd52a\" (UID: \"4c54bcbf-ffd9-4595-b51d-0efcfcafd52a\") " Oct 13 17:43:00 crc kubenswrapper[4720]: I1013 17:43:00.577183 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/4c54bcbf-ffd9-4595-b51d-0efcfcafd52a-logs" (OuterVolumeSpecName: "logs") pod "4c54bcbf-ffd9-4595-b51d-0efcfcafd52a" (UID: "4c54bcbf-ffd9-4595-b51d-0efcfcafd52a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 17:43:00 crc kubenswrapper[4720]: I1013 17:43:00.577731 4720 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c54bcbf-ffd9-4595-b51d-0efcfcafd52a-logs\") on node \"crc\" DevicePath \"\"" Oct 13 17:43:00 crc kubenswrapper[4720]: I1013 17:43:00.603180 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c54bcbf-ffd9-4595-b51d-0efcfcafd52a-kube-api-access-zxp47" (OuterVolumeSpecName: "kube-api-access-zxp47") pod "4c54bcbf-ffd9-4595-b51d-0efcfcafd52a" (UID: "4c54bcbf-ffd9-4595-b51d-0efcfcafd52a"). InnerVolumeSpecName "kube-api-access-zxp47". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:43:00 crc kubenswrapper[4720]: I1013 17:43:00.621436 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c54bcbf-ffd9-4595-b51d-0efcfcafd52a-config-data" (OuterVolumeSpecName: "config-data") pod "4c54bcbf-ffd9-4595-b51d-0efcfcafd52a" (UID: "4c54bcbf-ffd9-4595-b51d-0efcfcafd52a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:43:00 crc kubenswrapper[4720]: I1013 17:43:00.634488 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c54bcbf-ffd9-4595-b51d-0efcfcafd52a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4c54bcbf-ffd9-4595-b51d-0efcfcafd52a" (UID: "4c54bcbf-ffd9-4595-b51d-0efcfcafd52a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:43:00 crc kubenswrapper[4720]: I1013 17:43:00.663861 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c54bcbf-ffd9-4595-b51d-0efcfcafd52a-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "4c54bcbf-ffd9-4595-b51d-0efcfcafd52a" (UID: "4c54bcbf-ffd9-4595-b51d-0efcfcafd52a"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:43:00 crc kubenswrapper[4720]: I1013 17:43:00.665349 4720 generic.go:334] "Generic (PLEG): container finished" podID="4c54bcbf-ffd9-4595-b51d-0efcfcafd52a" containerID="a848c4d10f2b62cf6c973bdde1aa6d0cb1b4a57ef6f5e204223976e823fcb3cf" exitCode=0 Oct 13 17:43:00 crc kubenswrapper[4720]: I1013 17:43:00.666001 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 13 17:43:00 crc kubenswrapper[4720]: I1013 17:43:00.666909 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4c54bcbf-ffd9-4595-b51d-0efcfcafd52a","Type":"ContainerDied","Data":"a848c4d10f2b62cf6c973bdde1aa6d0cb1b4a57ef6f5e204223976e823fcb3cf"} Oct 13 17:43:00 crc kubenswrapper[4720]: I1013 17:43:00.667002 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4c54bcbf-ffd9-4595-b51d-0efcfcafd52a","Type":"ContainerDied","Data":"8373b2354c0dbedb8f44bae289007fe32f0939e8db7b6f4cde3c0e0b5812d5f6"} Oct 13 17:43:00 crc kubenswrapper[4720]: I1013 17:43:00.667031 4720 scope.go:117] "RemoveContainer" containerID="a848c4d10f2b62cf6c973bdde1aa6d0cb1b4a57ef6f5e204223976e823fcb3cf" Oct 13 17:43:00 crc kubenswrapper[4720]: I1013 17:43:00.679657 4720 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c54bcbf-ffd9-4595-b51d-0efcfcafd52a-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 13 17:43:00 crc kubenswrapper[4720]: I1013 17:43:00.679690 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zxp47\" (UniqueName: \"kubernetes.io/projected/4c54bcbf-ffd9-4595-b51d-0efcfcafd52a-kube-api-access-zxp47\") on node \"crc\" DevicePath \"\"" Oct 13 17:43:00 crc kubenswrapper[4720]: I1013 17:43:00.679708 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c54bcbf-ffd9-4595-b51d-0efcfcafd52a-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 17:43:00 crc kubenswrapper[4720]: I1013 17:43:00.679727 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c54bcbf-ffd9-4595-b51d-0efcfcafd52a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 17:43:00 crc kubenswrapper[4720]: I1013 17:43:00.749927 4720 scope.go:117] "RemoveContainer" containerID="c50a054551174514909d400930146f247c90ce7b9ec89a0aaf7a3bf9c6732252" Oct 13 17:43:00 crc kubenswrapper[4720]: I1013 17:43:00.770252 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 13 17:43:00 crc kubenswrapper[4720]: I1013 17:43:00.778411 4720 scope.go:117] "RemoveContainer" containerID="a848c4d10f2b62cf6c973bdde1aa6d0cb1b4a57ef6f5e204223976e823fcb3cf" Oct 13 17:43:00 crc kubenswrapper[4720]: E1013 17:43:00.782037 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a848c4d10f2b62cf6c973bdde1aa6d0cb1b4a57ef6f5e204223976e823fcb3cf\": container with ID starting with a848c4d10f2b62cf6c973bdde1aa6d0cb1b4a57ef6f5e204223976e823fcb3cf not found: ID does not exist" containerID="a848c4d10f2b62cf6c973bdde1aa6d0cb1b4a57ef6f5e204223976e823fcb3cf" Oct 13 17:43:00 crc kubenswrapper[4720]: I1013 17:43:00.782283 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a848c4d10f2b62cf6c973bdde1aa6d0cb1b4a57ef6f5e204223976e823fcb3cf"} err="failed to get container status \"a848c4d10f2b62cf6c973bdde1aa6d0cb1b4a57ef6f5e204223976e823fcb3cf\": rpc error: code = NotFound desc = could not find container \"a848c4d10f2b62cf6c973bdde1aa6d0cb1b4a57ef6f5e204223976e823fcb3cf\": container with ID starting with a848c4d10f2b62cf6c973bdde1aa6d0cb1b4a57ef6f5e204223976e823fcb3cf not found: ID does not exist" Oct 13 17:43:00 crc kubenswrapper[4720]: I1013 
17:43:00.782488 4720 scope.go:117] "RemoveContainer" containerID="c50a054551174514909d400930146f247c90ce7b9ec89a0aaf7a3bf9c6732252" Oct 13 17:43:00 crc kubenswrapper[4720]: E1013 17:43:00.804339 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c50a054551174514909d400930146f247c90ce7b9ec89a0aaf7a3bf9c6732252\": container with ID starting with c50a054551174514909d400930146f247c90ce7b9ec89a0aaf7a3bf9c6732252 not found: ID does not exist" containerID="c50a054551174514909d400930146f247c90ce7b9ec89a0aaf7a3bf9c6732252" Oct 13 17:43:00 crc kubenswrapper[4720]: I1013 17:43:00.804435 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c50a054551174514909d400930146f247c90ce7b9ec89a0aaf7a3bf9c6732252"} err="failed to get container status \"c50a054551174514909d400930146f247c90ce7b9ec89a0aaf7a3bf9c6732252\": rpc error: code = NotFound desc = could not find container \"c50a054551174514909d400930146f247c90ce7b9ec89a0aaf7a3bf9c6732252\": container with ID starting with c50a054551174514909d400930146f247c90ce7b9ec89a0aaf7a3bf9c6732252 not found: ID does not exist" Oct 13 17:43:00 crc kubenswrapper[4720]: I1013 17:43:00.821034 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Oct 13 17:43:00 crc kubenswrapper[4720]: I1013 17:43:00.857257 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Oct 13 17:43:00 crc kubenswrapper[4720]: E1013 17:43:00.857733 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c54bcbf-ffd9-4595-b51d-0efcfcafd52a" containerName="nova-metadata-log" Oct 13 17:43:00 crc kubenswrapper[4720]: I1013 17:43:00.857749 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c54bcbf-ffd9-4595-b51d-0efcfcafd52a" containerName="nova-metadata-log" Oct 13 17:43:00 crc kubenswrapper[4720]: E1013 17:43:00.857766 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c54bcbf-ffd9-4595-b51d-0efcfcafd52a" containerName="nova-metadata-metadata" Oct 13 17:43:00 crc kubenswrapper[4720]: I1013 17:43:00.857773 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c54bcbf-ffd9-4595-b51d-0efcfcafd52a" containerName="nova-metadata-metadata" Oct 13 17:43:00 crc kubenswrapper[4720]: I1013 17:43:00.857999 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c54bcbf-ffd9-4595-b51d-0efcfcafd52a" containerName="nova-metadata-log" Oct 13 17:43:00 crc kubenswrapper[4720]: I1013 17:43:00.858028 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c54bcbf-ffd9-4595-b51d-0efcfcafd52a" containerName="nova-metadata-metadata" Oct 13 17:43:00 crc kubenswrapper[4720]: I1013 17:43:00.859141 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 13 17:43:00 crc kubenswrapper[4720]: I1013 17:43:00.867325 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Oct 13 17:43:00 crc kubenswrapper[4720]: I1013 17:43:00.867633 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Oct 13 17:43:00 crc kubenswrapper[4720]: I1013 17:43:00.873666 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 13 17:43:00 crc kubenswrapper[4720]: I1013 17:43:00.993316 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmd97\" (UniqueName: \"kubernetes.io/projected/8ab3d870-8836-484b-a291-4bc7b329ed83-kube-api-access-kmd97\") pod \"nova-metadata-0\" (UID: \"8ab3d870-8836-484b-a291-4bc7b329ed83\") " pod="openstack/nova-metadata-0" Oct 13 17:43:00 crc kubenswrapper[4720]: I1013 17:43:00.993593 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ab3d870-8836-484b-a291-4bc7b329ed83-logs\") pod \"nova-metadata-0\" (UID: \"8ab3d870-8836-484b-a291-4bc7b329ed83\") " pod="openstack/nova-metadata-0" Oct 13 17:43:00 crc kubenswrapper[4720]: I1013 17:43:00.993640 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ab3d870-8836-484b-a291-4bc7b329ed83-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8ab3d870-8836-484b-a291-4bc7b329ed83\") " pod="openstack/nova-metadata-0" Oct 13 17:43:00 crc kubenswrapper[4720]: I1013 17:43:00.993668 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ab3d870-8836-484b-a291-4bc7b329ed83-config-data\") pod \"nova-metadata-0\" (UID: \"8ab3d870-8836-484b-a291-4bc7b329ed83\") " pod="openstack/nova-metadata-0" Oct 13 17:43:00 crc kubenswrapper[4720]: I1013 17:43:00.993832 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ab3d870-8836-484b-a291-4bc7b329ed83-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8ab3d870-8836-484b-a291-4bc7b329ed83\") " pod="openstack/nova-metadata-0" Oct 13 17:43:01 crc kubenswrapper[4720]: I1013 17:43:01.095636 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ab3d870-8836-484b-a291-4bc7b329ed83-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8ab3d870-8836-484b-a291-4bc7b329ed83\") " pod="openstack/nova-metadata-0" Oct 13 17:43:01 crc kubenswrapper[4720]: I1013 17:43:01.095726 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmd97\" (UniqueName: \"kubernetes.io/projected/8ab3d870-8836-484b-a291-4bc7b329ed83-kube-api-access-kmd97\") pod \"nova-metadata-0\" (UID: \"8ab3d870-8836-484b-a291-4bc7b329ed83\") " pod="openstack/nova-metadata-0" Oct 13 17:43:01 crc kubenswrapper[4720]: I1013 17:43:01.096285 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ab3d870-8836-484b-a291-4bc7b329ed83-logs\") pod \"nova-metadata-0\" (UID: \"8ab3d870-8836-484b-a291-4bc7b329ed83\") " 
pod="openstack/nova-metadata-0" Oct 13 17:43:01 crc kubenswrapper[4720]: I1013 17:43:01.096312 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ab3d870-8836-484b-a291-4bc7b329ed83-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8ab3d870-8836-484b-a291-4bc7b329ed83\") " pod="openstack/nova-metadata-0" Oct 13 17:43:01 crc kubenswrapper[4720]: I1013 17:43:01.096780 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ab3d870-8836-484b-a291-4bc7b329ed83-logs\") pod \"nova-metadata-0\" (UID: \"8ab3d870-8836-484b-a291-4bc7b329ed83\") " pod="openstack/nova-metadata-0" Oct 13 17:43:01 crc kubenswrapper[4720]: I1013 17:43:01.096833 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ab3d870-8836-484b-a291-4bc7b329ed83-config-data\") pod \"nova-metadata-0\" (UID: \"8ab3d870-8836-484b-a291-4bc7b329ed83\") " pod="openstack/nova-metadata-0" Oct 13 17:43:01 crc kubenswrapper[4720]: I1013 17:43:01.100118 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ab3d870-8836-484b-a291-4bc7b329ed83-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8ab3d870-8836-484b-a291-4bc7b329ed83\") " pod="openstack/nova-metadata-0" Oct 13 17:43:01 crc kubenswrapper[4720]: I1013 17:43:01.101127 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ab3d870-8836-484b-a291-4bc7b329ed83-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8ab3d870-8836-484b-a291-4bc7b329ed83\") " pod="openstack/nova-metadata-0" Oct 13 17:43:01 crc kubenswrapper[4720]: I1013 17:43:01.102164 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ab3d870-8836-484b-a291-4bc7b329ed83-config-data\") pod \"nova-metadata-0\" (UID: \"8ab3d870-8836-484b-a291-4bc7b329ed83\") " pod="openstack/nova-metadata-0" Oct 13 17:43:01 crc kubenswrapper[4720]: I1013 17:43:01.113041 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmd97\" (UniqueName: \"kubernetes.io/projected/8ab3d870-8836-484b-a291-4bc7b329ed83-kube-api-access-kmd97\") pod \"nova-metadata-0\" (UID: \"8ab3d870-8836-484b-a291-4bc7b329ed83\") " pod="openstack/nova-metadata-0" Oct 13 17:43:01 crc kubenswrapper[4720]: I1013 17:43:01.179454 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c54bcbf-ffd9-4595-b51d-0efcfcafd52a" path="/var/lib/kubelet/pods/4c54bcbf-ffd9-4595-b51d-0efcfcafd52a/volumes" Oct 13 17:43:01 crc kubenswrapper[4720]: I1013 17:43:01.190892 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 13 17:43:01 crc kubenswrapper[4720]: I1013 17:43:01.711592 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 13 17:43:02 crc kubenswrapper[4720]: I1013 17:43:02.520412 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 13 17:43:02 crc kubenswrapper[4720]: I1013 17:43:02.623557 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6t8z\" (UniqueName: \"kubernetes.io/projected/f65be040-a3da-4f04-a883-995351ba908b-kube-api-access-t6t8z\") pod \"f65be040-a3da-4f04-a883-995351ba908b\" (UID: \"f65be040-a3da-4f04-a883-995351ba908b\") " Oct 13 17:43:02 crc kubenswrapper[4720]: I1013 17:43:02.623785 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f65be040-a3da-4f04-a883-995351ba908b-config-data\") pod \"f65be040-a3da-4f04-a883-995351ba908b\" (UID: \"f65be040-a3da-4f04-a883-995351ba908b\") " Oct 13 17:43:02 crc kubenswrapper[4720]: I1013 17:43:02.623943 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f65be040-a3da-4f04-a883-995351ba908b-combined-ca-bundle\") pod \"f65be040-a3da-4f04-a883-995351ba908b\" (UID: \"f65be040-a3da-4f04-a883-995351ba908b\") " Oct 13 17:43:02 crc kubenswrapper[4720]: I1013 17:43:02.646052 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f65be040-a3da-4f04-a883-995351ba908b-kube-api-access-t6t8z" (OuterVolumeSpecName: "kube-api-access-t6t8z") pod "f65be040-a3da-4f04-a883-995351ba908b" (UID: "f65be040-a3da-4f04-a883-995351ba908b"). InnerVolumeSpecName "kube-api-access-t6t8z". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:43:02 crc kubenswrapper[4720]: I1013 17:43:02.671875 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f65be040-a3da-4f04-a883-995351ba908b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f65be040-a3da-4f04-a883-995351ba908b" (UID: "f65be040-a3da-4f04-a883-995351ba908b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:43:02 crc kubenswrapper[4720]: I1013 17:43:02.674323 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f65be040-a3da-4f04-a883-995351ba908b-config-data" (OuterVolumeSpecName: "config-data") pod "f65be040-a3da-4f04-a883-995351ba908b" (UID: "f65be040-a3da-4f04-a883-995351ba908b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:43:02 crc kubenswrapper[4720]: I1013 17:43:02.693312 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8ab3d870-8836-484b-a291-4bc7b329ed83","Type":"ContainerStarted","Data":"5f079f1a835c09a97be017f766be11c577281c5f56c15cd239bf8b2431e93515"} Oct 13 17:43:02 crc kubenswrapper[4720]: I1013 17:43:02.693377 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8ab3d870-8836-484b-a291-4bc7b329ed83","Type":"ContainerStarted","Data":"4ff1d6f0ba5db5483ed9efc1826fed9c1ca4fb1e13b78589ea6e87a02612a3fe"} Oct 13 17:43:02 crc kubenswrapper[4720]: I1013 17:43:02.693397 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8ab3d870-8836-484b-a291-4bc7b329ed83","Type":"ContainerStarted","Data":"5bfbe9c12fe2f48eedb726d2f0364177daed6c7d11e5a14ebfe5bf9f72c41ac3"} Oct 13 17:43:02 crc kubenswrapper[4720]: I1013 17:43:02.697496 4720 generic.go:334] "Generic (PLEG): container finished" podID="f65be040-a3da-4f04-a883-995351ba908b" containerID="babf2748262064788db22e65543d4efd9f77b93e063718a0c45a9e5ba1e36402" exitCode=0 Oct 13 17:43:02 crc kubenswrapper[4720]: I1013 17:43:02.697548 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f65be040-a3da-4f04-a883-995351ba908b","Type":"ContainerDied","Data":"babf2748262064788db22e65543d4efd9f77b93e063718a0c45a9e5ba1e36402"} Oct 13 17:43:02 crc kubenswrapper[4720]: I1013 17:43:02.697581 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f65be040-a3da-4f04-a883-995351ba908b","Type":"ContainerDied","Data":"638d4f7cb1983b23a71064cb84445508a948c1e4fad1703a10208865c534aaed"} Oct 13 17:43:02 crc kubenswrapper[4720]: I1013 17:43:02.697609 4720 scope.go:117] "RemoveContainer" containerID="babf2748262064788db22e65543d4efd9f77b93e063718a0c45a9e5ba1e36402" Oct 13 17:43:02 crc kubenswrapper[4720]: I1013 17:43:02.697752 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 13 17:43:02 crc kubenswrapper[4720]: I1013 17:43:02.726134 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f65be040-a3da-4f04-a883-995351ba908b-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 17:43:02 crc kubenswrapper[4720]: I1013 17:43:02.726173 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f65be040-a3da-4f04-a883-995351ba908b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 17:43:02 crc kubenswrapper[4720]: I1013 17:43:02.726195 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6t8z\" (UniqueName: \"kubernetes.io/projected/f65be040-a3da-4f04-a883-995351ba908b-kube-api-access-t6t8z\") on node \"crc\" DevicePath \"\"" Oct 13 17:43:02 crc kubenswrapper[4720]: I1013 17:43:02.727963 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.727937713 podStartE2EDuration="2.727937713s" podCreationTimestamp="2025-10-13 17:43:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:43:02.715170963 +0000 UTC m=+1128.172421125" watchObservedRunningTime="2025-10-13 17:43:02.727937713 +0000 UTC m=+1128.185187855" Oct 13 17:43:02 crc kubenswrapper[4720]: I1013 17:43:02.743393 4720 scope.go:117] "RemoveContainer" containerID="babf2748262064788db22e65543d4efd9f77b93e063718a0c45a9e5ba1e36402" Oct 13 17:43:02 crc kubenswrapper[4720]: E1013 17:43:02.746925 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"babf2748262064788db22e65543d4efd9f77b93e063718a0c45a9e5ba1e36402\": container with ID starting with babf2748262064788db22e65543d4efd9f77b93e063718a0c45a9e5ba1e36402 not found: ID does not exist" containerID="babf2748262064788db22e65543d4efd9f77b93e063718a0c45a9e5ba1e36402" Oct 13 17:43:02 crc kubenswrapper[4720]: I1013 17:43:02.746993 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"babf2748262064788db22e65543d4efd9f77b93e063718a0c45a9e5ba1e36402"} err="failed to get container status \"babf2748262064788db22e65543d4efd9f77b93e063718a0c45a9e5ba1e36402\": rpc error: code = NotFound desc = could not find container \"babf2748262064788db22e65543d4efd9f77b93e063718a0c45a9e5ba1e36402\": container with ID starting with babf2748262064788db22e65543d4efd9f77b93e063718a0c45a9e5ba1e36402 not found: ID does not exist" Oct 13 17:43:02 crc kubenswrapper[4720]: I1013 17:43:02.761751 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Oct 13 17:43:02 crc kubenswrapper[4720]: I1013 17:43:02.775625 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Oct 13 17:43:02 crc kubenswrapper[4720]: I1013 17:43:02.795069 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Oct 13 17:43:02 crc kubenswrapper[4720]: E1013 17:43:02.795526 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f65be040-a3da-4f04-a883-995351ba908b" containerName="nova-scheduler-scheduler" Oct 13 17:43:02 crc kubenswrapper[4720]: I1013 17:43:02.795548 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="f65be040-a3da-4f04-a883-995351ba908b" containerName="nova-scheduler-scheduler" Oct 13 17:43:02 crc 
Oct 13 17:43:02 crc kubenswrapper[4720]: I1013 17:43:02.796544 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Oct 13 17:43:02 crc kubenswrapper[4720]: I1013 17:43:02.797720 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Oct 13 17:43:02 crc kubenswrapper[4720]: I1013 17:43:02.799685 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Oct 13 17:43:02 crc kubenswrapper[4720]: I1013 17:43:02.929643 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6j5lg\" (UniqueName: \"kubernetes.io/projected/afdc561e-00de-42e7-aeda-de229e3f7836-kube-api-access-6j5lg\") pod \"nova-scheduler-0\" (UID: \"afdc561e-00de-42e7-aeda-de229e3f7836\") " pod="openstack/nova-scheduler-0"
Oct 13 17:43:02 crc kubenswrapper[4720]: I1013 17:43:02.929702 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afdc561e-00de-42e7-aeda-de229e3f7836-config-data\") pod \"nova-scheduler-0\" (UID: \"afdc561e-00de-42e7-aeda-de229e3f7836\") " pod="openstack/nova-scheduler-0"
Oct 13 17:43:02 crc kubenswrapper[4720]: I1013 17:43:02.929726 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afdc561e-00de-42e7-aeda-de229e3f7836-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"afdc561e-00de-42e7-aeda-de229e3f7836\") " pod="openstack/nova-scheduler-0"
Oct 13 17:43:03 crc kubenswrapper[4720]: I1013 17:43:03.031880 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6j5lg\" (UniqueName: \"kubernetes.io/projected/afdc561e-00de-42e7-aeda-de229e3f7836-kube-api-access-6j5lg\") pod \"nova-scheduler-0\" (UID: \"afdc561e-00de-42e7-aeda-de229e3f7836\") " pod="openstack/nova-scheduler-0"
Oct 13 17:43:03 crc kubenswrapper[4720]: I1013 17:43:03.031953 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afdc561e-00de-42e7-aeda-de229e3f7836-config-data\") pod \"nova-scheduler-0\" (UID: \"afdc561e-00de-42e7-aeda-de229e3f7836\") " pod="openstack/nova-scheduler-0"
Oct 13 17:43:03 crc kubenswrapper[4720]: I1013 17:43:03.031982 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afdc561e-00de-42e7-aeda-de229e3f7836-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"afdc561e-00de-42e7-aeda-de229e3f7836\") " pod="openstack/nova-scheduler-0"
Oct 13 17:43:03 crc kubenswrapper[4720]: I1013 17:43:03.036434 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afdc561e-00de-42e7-aeda-de229e3f7836-config-data\") pod \"nova-scheduler-0\" (UID: \"afdc561e-00de-42e7-aeda-de229e3f7836\") " pod="openstack/nova-scheduler-0"
Oct 13 17:43:03 crc kubenswrapper[4720]: I1013 17:43:03.036703 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afdc561e-00de-42e7-aeda-de229e3f7836-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"afdc561e-00de-42e7-aeda-de229e3f7836\") " pod="openstack/nova-scheduler-0"
Oct 13 17:43:03 crc kubenswrapper[4720]: I1013 17:43:03.059982 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6j5lg\" (UniqueName: \"kubernetes.io/projected/afdc561e-00de-42e7-aeda-de229e3f7836-kube-api-access-6j5lg\") pod \"nova-scheduler-0\" (UID: \"afdc561e-00de-42e7-aeda-de229e3f7836\") " pod="openstack/nova-scheduler-0"
Oct 13 17:43:03 crc kubenswrapper[4720]: I1013 17:43:03.128366 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Oct 13 17:43:03 crc kubenswrapper[4720]: I1013 17:43:03.183137 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f65be040-a3da-4f04-a883-995351ba908b" path="/var/lib/kubelet/pods/f65be040-a3da-4f04-a883-995351ba908b/volumes"
Oct 13 17:43:03 crc kubenswrapper[4720]: W1013 17:43:03.611573 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podafdc561e_00de_42e7_aeda_de229e3f7836.slice/crio-a662dac62e6d622d1b294b22724cb5c3be1ac901c18452915d1ebc0bc3a29b6f WatchSource:0}: Error finding container a662dac62e6d622d1b294b22724cb5c3be1ac901c18452915d1ebc0bc3a29b6f: Status 404 returned error can't find the container with id a662dac62e6d622d1b294b22724cb5c3be1ac901c18452915d1ebc0bc3a29b6f
Oct 13 17:43:03 crc kubenswrapper[4720]: I1013 17:43:03.611707 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Oct 13 17:43:03 crc kubenswrapper[4720]: I1013 17:43:03.714464 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"afdc561e-00de-42e7-aeda-de229e3f7836","Type":"ContainerStarted","Data":"a662dac62e6d622d1b294b22724cb5c3be1ac901c18452915d1ebc0bc3a29b6f"}
Oct 13 17:43:04 crc kubenswrapper[4720]: I1013 17:43:04.728239 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"afdc561e-00de-42e7-aeda-de229e3f7836","Type":"ContainerStarted","Data":"57833e554a348827bfb6e78afdf0fef7100a73ac34e56b4bddc62fcf123ea4b0"}
Oct 13 17:43:04 crc kubenswrapper[4720]: I1013 17:43:04.763249 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.763223649 podStartE2EDuration="2.763223649s" podCreationTimestamp="2025-10-13 17:43:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:43:04.760350495 +0000 UTC m=+1130.217600667" watchObservedRunningTime="2025-10-13 17:43:04.763223649 +0000 UTC m=+1130.220473811"
Oct 13 17:43:06 crc kubenswrapper[4720]: I1013 17:43:06.191477 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Oct 13 17:43:06 crc kubenswrapper[4720]: I1013 17:43:06.191913 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Oct 13 17:43:08 crc kubenswrapper[4720]: I1013 17:43:08.128953 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Oct 13 17:43:08 crc kubenswrapper[4720]: I1013 17:43:08.408484 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Oct 13 17:43:08 crc kubenswrapper[4720]: I1013 17:43:08.408579 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Oct 13 17:43:09 crc kubenswrapper[4720]: I1013 17:43:09.422467 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d013b32a-b904-46e1-85be-0691c6d981da" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.207:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Oct 13 17:43:09 crc kubenswrapper[4720]: I1013 17:43:09.422482 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d013b32a-b904-46e1-85be-0691c6d981da" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.207:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Oct 13 17:43:11 crc kubenswrapper[4720]: I1013 17:43:11.191402 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Oct 13 17:43:11 crc kubenswrapper[4720]: I1013 17:43:11.192507 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Oct 13 17:43:12 crc kubenswrapper[4720]: I1013 17:43:12.210365 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="8ab3d870-8836-484b-a291-4bc7b329ed83" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.208:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Oct 13 17:43:12 crc kubenswrapper[4720]: I1013 17:43:12.210443 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="8ab3d870-8836-484b-a291-4bc7b329ed83" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.208:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Oct 13 17:43:13 crc kubenswrapper[4720]: I1013 17:43:13.129100 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Oct 13 17:43:13 crc kubenswrapper[4720]: I1013 17:43:13.180735 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Oct 13 17:43:13 crc kubenswrapper[4720]: I1013 17:43:13.876188 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Oct 13 17:43:16 crc kubenswrapper[4720]: I1013 17:43:16.899488 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Oct 13 17:43:18 crc kubenswrapper[4720]: I1013 17:43:18.416276 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Oct 13 17:43:18 crc kubenswrapper[4720]: I1013 17:43:18.416893 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Oct 13 17:43:18 crc kubenswrapper[4720]: I1013 17:43:18.417228 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Oct 13 17:43:18 crc kubenswrapper[4720]: I1013 17:43:18.425931 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Oct 13 17:43:18 crc kubenswrapper[4720]: I1013 17:43:18.890077 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Oct 13 17:43:18 crc kubenswrapper[4720]: I1013 17:43:18.902439 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Oct 13 17:43:21 crc kubenswrapper[4720]: I1013 17:43:21.196919 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Oct 13 17:43:21 crc kubenswrapper[4720]: I1013 17:43:21.197416 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Oct 13 17:43:21 crc kubenswrapper[4720]: I1013 17:43:21.203476 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Oct 13 17:43:21 crc kubenswrapper[4720]: I1013 17:43:21.205577 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Oct 13 17:43:29 crc kubenswrapper[4720]: I1013 17:43:29.352044 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Oct 13 17:43:30 crc kubenswrapper[4720]: I1013 17:43:30.922483 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Oct 13 17:43:33 crc kubenswrapper[4720]: I1013 17:43:33.673036 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="af59309d-fcea-47ce-85b5-0eafbf780d08" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.100:5671: connect: connection refused"
Oct 13 17:43:33 crc kubenswrapper[4720]: I1013 17:43:33.700836 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="af59309d-fcea-47ce-85b5-0eafbf780d08" containerName="rabbitmq" containerID="cri-o://eae200ee8e3ab7a84a4b212ea54a0bc79c87bfdac4395df822709711c402e84c" gracePeriod=604796
Oct 13 17:43:35 crc kubenswrapper[4720]: I1013 17:43:35.350935 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="76c17d7a-8441-4b23-839b-f95ac54a6b24" containerName="rabbitmq" containerID="cri-o://31c995078368b5b95f3c3877081d2472920be4f1127b2d1afc577d2f63a1c6c9" gracePeriod=604796
Oct 13 17:43:40 crc kubenswrapper[4720]: I1013 17:43:40.165366 4720 generic.go:334] "Generic (PLEG): container finished" podID="af59309d-fcea-47ce-85b5-0eafbf780d08" containerID="eae200ee8e3ab7a84a4b212ea54a0bc79c87bfdac4395df822709711c402e84c" exitCode=0
Oct 13 17:43:40 crc kubenswrapper[4720]: I1013 17:43:40.165652 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"af59309d-fcea-47ce-85b5-0eafbf780d08","Type":"ContainerDied","Data":"eae200ee8e3ab7a84a4b212ea54a0bc79c87bfdac4395df822709711c402e84c"}
Oct 13 17:43:40 crc kubenswrapper[4720]: I1013 17:43:40.324859 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Oct 13 17:43:40 crc kubenswrapper[4720]: I1013 17:43:40.428287 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwjgj\" (UniqueName: \"kubernetes.io/projected/af59309d-fcea-47ce-85b5-0eafbf780d08-kube-api-access-kwjgj\") pod \"af59309d-fcea-47ce-85b5-0eafbf780d08\" (UID: \"af59309d-fcea-47ce-85b5-0eafbf780d08\") "
Oct 13 17:43:40 crc kubenswrapper[4720]: I1013 17:43:40.428352 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/af59309d-fcea-47ce-85b5-0eafbf780d08-server-conf\") pod \"af59309d-fcea-47ce-85b5-0eafbf780d08\" (UID: \"af59309d-fcea-47ce-85b5-0eafbf780d08\") "
Oct 13 17:43:40 crc kubenswrapper[4720]: I1013 17:43:40.428396 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/af59309d-fcea-47ce-85b5-0eafbf780d08-rabbitmq-tls\") pod \"af59309d-fcea-47ce-85b5-0eafbf780d08\" (UID: \"af59309d-fcea-47ce-85b5-0eafbf780d08\") "
Oct 13 17:43:40 crc kubenswrapper[4720]: I1013 17:43:40.428454 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"af59309d-fcea-47ce-85b5-0eafbf780d08\" (UID: \"af59309d-fcea-47ce-85b5-0eafbf780d08\") "
Oct 13 17:43:40 crc kubenswrapper[4720]: I1013 17:43:40.428495 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/af59309d-fcea-47ce-85b5-0eafbf780d08-rabbitmq-confd\") pod \"af59309d-fcea-47ce-85b5-0eafbf780d08\" (UID: \"af59309d-fcea-47ce-85b5-0eafbf780d08\") "
Oct 13 17:43:40 crc kubenswrapper[4720]: I1013 17:43:40.428575 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/af59309d-fcea-47ce-85b5-0eafbf780d08-plugins-conf\") pod \"af59309d-fcea-47ce-85b5-0eafbf780d08\" (UID: \"af59309d-fcea-47ce-85b5-0eafbf780d08\") "
Oct 13 17:43:40 crc kubenswrapper[4720]: I1013 17:43:40.428639 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/af59309d-fcea-47ce-85b5-0eafbf780d08-pod-info\") pod \"af59309d-fcea-47ce-85b5-0eafbf780d08\" (UID: \"af59309d-fcea-47ce-85b5-0eafbf780d08\") "
Oct 13 17:43:40 crc kubenswrapper[4720]: I1013 17:43:40.428722 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/af59309d-fcea-47ce-85b5-0eafbf780d08-rabbitmq-plugins\") pod \"af59309d-fcea-47ce-85b5-0eafbf780d08\" (UID: \"af59309d-fcea-47ce-85b5-0eafbf780d08\") "
Oct 13 17:43:40 crc kubenswrapper[4720]: I1013 17:43:40.428822 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/af59309d-fcea-47ce-85b5-0eafbf780d08-rabbitmq-erlang-cookie\") pod \"af59309d-fcea-47ce-85b5-0eafbf780d08\" (UID: \"af59309d-fcea-47ce-85b5-0eafbf780d08\") "
Oct 13 17:43:40 crc kubenswrapper[4720]: I1013 17:43:40.428884 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/af59309d-fcea-47ce-85b5-0eafbf780d08-erlang-cookie-secret\") pod \"af59309d-fcea-47ce-85b5-0eafbf780d08\" (UID: \"af59309d-fcea-47ce-85b5-0eafbf780d08\") "
\"af59309d-fcea-47ce-85b5-0eafbf780d08\") " Oct 13 17:43:40 crc kubenswrapper[4720]: I1013 17:43:40.428911 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/af59309d-fcea-47ce-85b5-0eafbf780d08-config-data\") pod \"af59309d-fcea-47ce-85b5-0eafbf780d08\" (UID: \"af59309d-fcea-47ce-85b5-0eafbf780d08\") " Oct 13 17:43:40 crc kubenswrapper[4720]: I1013 17:43:40.436525 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af59309d-fcea-47ce-85b5-0eafbf780d08-kube-api-access-kwjgj" (OuterVolumeSpecName: "kube-api-access-kwjgj") pod "af59309d-fcea-47ce-85b5-0eafbf780d08" (UID: "af59309d-fcea-47ce-85b5-0eafbf780d08"). InnerVolumeSpecName "kube-api-access-kwjgj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:43:40 crc kubenswrapper[4720]: I1013 17:43:40.459471 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af59309d-fcea-47ce-85b5-0eafbf780d08-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "af59309d-fcea-47ce-85b5-0eafbf780d08" (UID: "af59309d-fcea-47ce-85b5-0eafbf780d08"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:43:40 crc kubenswrapper[4720]: I1013 17:43:40.459705 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af59309d-fcea-47ce-85b5-0eafbf780d08-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "af59309d-fcea-47ce-85b5-0eafbf780d08" (UID: "af59309d-fcea-47ce-85b5-0eafbf780d08"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 17:43:40 crc kubenswrapper[4720]: I1013 17:43:40.464735 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af59309d-fcea-47ce-85b5-0eafbf780d08-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "af59309d-fcea-47ce-85b5-0eafbf780d08" (UID: "af59309d-fcea-47ce-85b5-0eafbf780d08"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 17:43:40 crc kubenswrapper[4720]: I1013 17:43:40.466742 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af59309d-fcea-47ce-85b5-0eafbf780d08-config-data" (OuterVolumeSpecName: "config-data") pod "af59309d-fcea-47ce-85b5-0eafbf780d08" (UID: "af59309d-fcea-47ce-85b5-0eafbf780d08"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:43:40 crc kubenswrapper[4720]: I1013 17:43:40.468798 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "persistence") pod "af59309d-fcea-47ce-85b5-0eafbf780d08" (UID: "af59309d-fcea-47ce-85b5-0eafbf780d08"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 13 17:43:40 crc kubenswrapper[4720]: I1013 17:43:40.469044 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af59309d-fcea-47ce-85b5-0eafbf780d08-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "af59309d-fcea-47ce-85b5-0eafbf780d08" (UID: "af59309d-fcea-47ce-85b5-0eafbf780d08"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:43:40 crc kubenswrapper[4720]: I1013 17:43:40.470435 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/af59309d-fcea-47ce-85b5-0eafbf780d08-pod-info" (OuterVolumeSpecName: "pod-info") pod "af59309d-fcea-47ce-85b5-0eafbf780d08" (UID: "af59309d-fcea-47ce-85b5-0eafbf780d08"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Oct 13 17:43:40 crc kubenswrapper[4720]: I1013 17:43:40.471128 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af59309d-fcea-47ce-85b5-0eafbf780d08-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "af59309d-fcea-47ce-85b5-0eafbf780d08" (UID: "af59309d-fcea-47ce-85b5-0eafbf780d08"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:43:40 crc kubenswrapper[4720]: I1013 17:43:40.500710 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af59309d-fcea-47ce-85b5-0eafbf780d08-server-conf" (OuterVolumeSpecName: "server-conf") pod "af59309d-fcea-47ce-85b5-0eafbf780d08" (UID: "af59309d-fcea-47ce-85b5-0eafbf780d08"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:43:40 crc kubenswrapper[4720]: I1013 17:43:40.532578 4720 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/af59309d-fcea-47ce-85b5-0eafbf780d08-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Oct 13 17:43:40 crc kubenswrapper[4720]: I1013 17:43:40.532611 4720 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/af59309d-fcea-47ce-85b5-0eafbf780d08-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Oct 13 17:43:40 crc kubenswrapper[4720]: I1013 17:43:40.532622 4720 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/af59309d-fcea-47ce-85b5-0eafbf780d08-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Oct 13 17:43:40 crc kubenswrapper[4720]: I1013 17:43:40.532633 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/af59309d-fcea-47ce-85b5-0eafbf780d08-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 17:43:40 crc kubenswrapper[4720]: I1013 17:43:40.532641 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kwjgj\" (UniqueName: \"kubernetes.io/projected/af59309d-fcea-47ce-85b5-0eafbf780d08-kube-api-access-kwjgj\") on node \"crc\" DevicePath \"\"" Oct 13 17:43:40 crc kubenswrapper[4720]: I1013 17:43:40.532650 4720 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/af59309d-fcea-47ce-85b5-0eafbf780d08-server-conf\") on node \"crc\" DevicePath \"\"" Oct 13 17:43:40 crc kubenswrapper[4720]: I1013 17:43:40.532657 4720 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/af59309d-fcea-47ce-85b5-0eafbf780d08-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Oct 13 17:43:40 crc kubenswrapper[4720]: I1013 17:43:40.532683 4720 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Oct 13 17:43:40 crc kubenswrapper[4720]: I1013 
17:43:40.532692 4720 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/af59309d-fcea-47ce-85b5-0eafbf780d08-plugins-conf\") on node \"crc\" DevicePath \"\"" Oct 13 17:43:40 crc kubenswrapper[4720]: I1013 17:43:40.532700 4720 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/af59309d-fcea-47ce-85b5-0eafbf780d08-pod-info\") on node \"crc\" DevicePath \"\"" Oct 13 17:43:40 crc kubenswrapper[4720]: I1013 17:43:40.555287 4720 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Oct 13 17:43:40 crc kubenswrapper[4720]: I1013 17:43:40.587451 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af59309d-fcea-47ce-85b5-0eafbf780d08-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "af59309d-fcea-47ce-85b5-0eafbf780d08" (UID: "af59309d-fcea-47ce-85b5-0eafbf780d08"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:43:40 crc kubenswrapper[4720]: I1013 17:43:40.634495 4720 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Oct 13 17:43:40 crc kubenswrapper[4720]: I1013 17:43:40.634531 4720 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/af59309d-fcea-47ce-85b5-0eafbf780d08-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Oct 13 17:43:41 crc kubenswrapper[4720]: I1013 17:43:41.178298 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 13 17:43:41 crc kubenswrapper[4720]: I1013 17:43:41.179585 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"af59309d-fcea-47ce-85b5-0eafbf780d08","Type":"ContainerDied","Data":"856c28a7e34caccb6b2aa765bd1447c8f68ef73ec992d8f61b93699d97375d8e"} Oct 13 17:43:41 crc kubenswrapper[4720]: I1013 17:43:41.179637 4720 scope.go:117] "RemoveContainer" containerID="eae200ee8e3ab7a84a4b212ea54a0bc79c87bfdac4395df822709711c402e84c" Oct 13 17:43:41 crc kubenswrapper[4720]: I1013 17:43:41.211477 4720 scope.go:117] "RemoveContainer" containerID="cbc606f7c5761bcc6663e0a2f8f1346ba2ab48d02354a5f22b7a9a7e5ee7cad0" Oct 13 17:43:41 crc kubenswrapper[4720]: I1013 17:43:41.215740 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 13 17:43:41 crc kubenswrapper[4720]: I1013 17:43:41.227154 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 13 17:43:41 crc kubenswrapper[4720]: I1013 17:43:41.275273 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Oct 13 17:43:41 crc kubenswrapper[4720]: E1013 17:43:41.276114 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af59309d-fcea-47ce-85b5-0eafbf780d08" containerName="setup-container" Oct 13 17:43:41 crc kubenswrapper[4720]: I1013 17:43:41.276135 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="af59309d-fcea-47ce-85b5-0eafbf780d08" containerName="setup-container" Oct 13 17:43:41 crc kubenswrapper[4720]: E1013 17:43:41.276182 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af59309d-fcea-47ce-85b5-0eafbf780d08" containerName="rabbitmq" Oct 13 17:43:41 crc kubenswrapper[4720]: I1013 17:43:41.276242 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="af59309d-fcea-47ce-85b5-0eafbf780d08" containerName="rabbitmq" Oct 13 17:43:41 crc kubenswrapper[4720]: I1013 17:43:41.276812 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="af59309d-fcea-47ce-85b5-0eafbf780d08" containerName="rabbitmq" Oct 13 17:43:41 crc kubenswrapper[4720]: I1013 17:43:41.278229 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 13 17:43:41 crc kubenswrapper[4720]: I1013 17:43:41.283672 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 13 17:43:41 crc kubenswrapper[4720]: I1013 17:43:41.284135 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-4rmtl" Oct 13 17:43:41 crc kubenswrapper[4720]: I1013 17:43:41.284303 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Oct 13 17:43:41 crc kubenswrapper[4720]: I1013 17:43:41.284324 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Oct 13 17:43:41 crc kubenswrapper[4720]: I1013 17:43:41.284488 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Oct 13 17:43:41 crc kubenswrapper[4720]: I1013 17:43:41.284488 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Oct 13 17:43:41 crc kubenswrapper[4720]: I1013 17:43:41.284694 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Oct 13 17:43:41 crc kubenswrapper[4720]: I1013 17:43:41.287420 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Oct 13 17:43:41 crc kubenswrapper[4720]: I1013 17:43:41.348496 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wx5lx\" (UniqueName: \"kubernetes.io/projected/234df878-2921-45dc-854c-b3840afdbd45-kube-api-access-wx5lx\") pod \"rabbitmq-server-0\" (UID: \"234df878-2921-45dc-854c-b3840afdbd45\") " pod="openstack/rabbitmq-server-0" Oct 13 17:43:41 crc kubenswrapper[4720]: I1013 17:43:41.348567 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/234df878-2921-45dc-854c-b3840afdbd45-pod-info\") pod \"rabbitmq-server-0\" (UID: \"234df878-2921-45dc-854c-b3840afdbd45\") " pod="openstack/rabbitmq-server-0" Oct 13 17:43:41 crc kubenswrapper[4720]: I1013 17:43:41.348606 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/234df878-2921-45dc-854c-b3840afdbd45-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"234df878-2921-45dc-854c-b3840afdbd45\") " pod="openstack/rabbitmq-server-0" Oct 13 17:43:41 crc kubenswrapper[4720]: I1013 17:43:41.348646 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"234df878-2921-45dc-854c-b3840afdbd45\") " pod="openstack/rabbitmq-server-0" Oct 13 17:43:41 crc kubenswrapper[4720]: I1013 17:43:41.348715 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/234df878-2921-45dc-854c-b3840afdbd45-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"234df878-2921-45dc-854c-b3840afdbd45\") " pod="openstack/rabbitmq-server-0" Oct 13 17:43:41 crc kubenswrapper[4720]: I1013 17:43:41.348793 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/234df878-2921-45dc-854c-b3840afdbd45-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"234df878-2921-45dc-854c-b3840afdbd45\") " pod="openstack/rabbitmq-server-0" Oct 13 17:43:41 crc kubenswrapper[4720]: I1013 17:43:41.348814 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/234df878-2921-45dc-854c-b3840afdbd45-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"234df878-2921-45dc-854c-b3840afdbd45\") " pod="openstack/rabbitmq-server-0" Oct 13 17:43:41 crc kubenswrapper[4720]: I1013 17:43:41.348941 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/234df878-2921-45dc-854c-b3840afdbd45-server-conf\") pod \"rabbitmq-server-0\" (UID: \"234df878-2921-45dc-854c-b3840afdbd45\") " pod="openstack/rabbitmq-server-0" Oct 13 17:43:41 crc kubenswrapper[4720]: I1013 17:43:41.349030 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/234df878-2921-45dc-854c-b3840afdbd45-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"234df878-2921-45dc-854c-b3840afdbd45\") " pod="openstack/rabbitmq-server-0" Oct 13 17:43:41 crc kubenswrapper[4720]: I1013 17:43:41.349076 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/234df878-2921-45dc-854c-b3840afdbd45-config-data\") pod \"rabbitmq-server-0\" (UID: \"234df878-2921-45dc-854c-b3840afdbd45\") " pod="openstack/rabbitmq-server-0" Oct 13 17:43:41 crc kubenswrapper[4720]: I1013 17:43:41.349273 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/234df878-2921-45dc-854c-b3840afdbd45-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"234df878-2921-45dc-854c-b3840afdbd45\") " pod="openstack/rabbitmq-server-0" Oct 13 17:43:41 crc kubenswrapper[4720]: I1013 17:43:41.451364 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/234df878-2921-45dc-854c-b3840afdbd45-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"234df878-2921-45dc-854c-b3840afdbd45\") " pod="openstack/rabbitmq-server-0" Oct 13 17:43:41 crc kubenswrapper[4720]: I1013 17:43:41.451465 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/234df878-2921-45dc-854c-b3840afdbd45-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"234df878-2921-45dc-854c-b3840afdbd45\") " pod="openstack/rabbitmq-server-0" Oct 13 17:43:41 crc kubenswrapper[4720]: I1013 17:43:41.451493 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/234df878-2921-45dc-854c-b3840afdbd45-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"234df878-2921-45dc-854c-b3840afdbd45\") " pod="openstack/rabbitmq-server-0" Oct 13 17:43:41 crc kubenswrapper[4720]: I1013 17:43:41.451519 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/234df878-2921-45dc-854c-b3840afdbd45-server-conf\") pod \"rabbitmq-server-0\" (UID: \"234df878-2921-45dc-854c-b3840afdbd45\") " 
pod="openstack/rabbitmq-server-0" Oct 13 17:43:41 crc kubenswrapper[4720]: I1013 17:43:41.451543 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/234df878-2921-45dc-854c-b3840afdbd45-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"234df878-2921-45dc-854c-b3840afdbd45\") " pod="openstack/rabbitmq-server-0" Oct 13 17:43:41 crc kubenswrapper[4720]: I1013 17:43:41.451562 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/234df878-2921-45dc-854c-b3840afdbd45-config-data\") pod \"rabbitmq-server-0\" (UID: \"234df878-2921-45dc-854c-b3840afdbd45\") " pod="openstack/rabbitmq-server-0" Oct 13 17:43:41 crc kubenswrapper[4720]: I1013 17:43:41.451606 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/234df878-2921-45dc-854c-b3840afdbd45-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"234df878-2921-45dc-854c-b3840afdbd45\") " pod="openstack/rabbitmq-server-0" Oct 13 17:43:41 crc kubenswrapper[4720]: I1013 17:43:41.451647 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wx5lx\" (UniqueName: \"kubernetes.io/projected/234df878-2921-45dc-854c-b3840afdbd45-kube-api-access-wx5lx\") pod \"rabbitmq-server-0\" (UID: \"234df878-2921-45dc-854c-b3840afdbd45\") " pod="openstack/rabbitmq-server-0" Oct 13 17:43:41 crc kubenswrapper[4720]: I1013 17:43:41.451671 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/234df878-2921-45dc-854c-b3840afdbd45-pod-info\") pod \"rabbitmq-server-0\" (UID: \"234df878-2921-45dc-854c-b3840afdbd45\") " pod="openstack/rabbitmq-server-0" Oct 13 17:43:41 crc kubenswrapper[4720]: I1013 17:43:41.451691 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/234df878-2921-45dc-854c-b3840afdbd45-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"234df878-2921-45dc-854c-b3840afdbd45\") " pod="openstack/rabbitmq-server-0" Oct 13 17:43:41 crc kubenswrapper[4720]: I1013 17:43:41.451729 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"234df878-2921-45dc-854c-b3840afdbd45\") " pod="openstack/rabbitmq-server-0" Oct 13 17:43:41 crc kubenswrapper[4720]: I1013 17:43:41.452021 4720 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"234df878-2921-45dc-854c-b3840afdbd45\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-server-0" Oct 13 17:43:41 crc kubenswrapper[4720]: I1013 17:43:41.452329 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/234df878-2921-45dc-854c-b3840afdbd45-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"234df878-2921-45dc-854c-b3840afdbd45\") " pod="openstack/rabbitmq-server-0" Oct 13 17:43:41 crc kubenswrapper[4720]: I1013 17:43:41.452871 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/234df878-2921-45dc-854c-b3840afdbd45-config-data\") pod \"rabbitmq-server-0\" (UID: \"234df878-2921-45dc-854c-b3840afdbd45\") " pod="openstack/rabbitmq-server-0" Oct 13 17:43:41 crc kubenswrapper[4720]: I1013 17:43:41.453133 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/234df878-2921-45dc-854c-b3840afdbd45-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"234df878-2921-45dc-854c-b3840afdbd45\") " pod="openstack/rabbitmq-server-0" Oct 13 17:43:41 crc kubenswrapper[4720]: I1013 17:43:41.453756 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/234df878-2921-45dc-854c-b3840afdbd45-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"234df878-2921-45dc-854c-b3840afdbd45\") " pod="openstack/rabbitmq-server-0" Oct 13 17:43:41 crc kubenswrapper[4720]: I1013 17:43:41.454396 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/234df878-2921-45dc-854c-b3840afdbd45-server-conf\") pod \"rabbitmq-server-0\" (UID: \"234df878-2921-45dc-854c-b3840afdbd45\") " pod="openstack/rabbitmq-server-0" Oct 13 17:43:41 crc kubenswrapper[4720]: I1013 17:43:41.458896 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/234df878-2921-45dc-854c-b3840afdbd45-pod-info\") pod \"rabbitmq-server-0\" (UID: \"234df878-2921-45dc-854c-b3840afdbd45\") " pod="openstack/rabbitmq-server-0" Oct 13 17:43:41 crc kubenswrapper[4720]: I1013 17:43:41.458973 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/234df878-2921-45dc-854c-b3840afdbd45-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"234df878-2921-45dc-854c-b3840afdbd45\") " pod="openstack/rabbitmq-server-0" Oct 13 17:43:41 crc kubenswrapper[4720]: I1013 17:43:41.460718 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/234df878-2921-45dc-854c-b3840afdbd45-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"234df878-2921-45dc-854c-b3840afdbd45\") " pod="openstack/rabbitmq-server-0" Oct 13 17:43:41 crc kubenswrapper[4720]: I1013 17:43:41.484575 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wx5lx\" (UniqueName: \"kubernetes.io/projected/234df878-2921-45dc-854c-b3840afdbd45-kube-api-access-wx5lx\") pod \"rabbitmq-server-0\" (UID: \"234df878-2921-45dc-854c-b3840afdbd45\") " pod="openstack/rabbitmq-server-0" Oct 13 17:43:41 crc kubenswrapper[4720]: I1013 17:43:41.488017 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/234df878-2921-45dc-854c-b3840afdbd45-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"234df878-2921-45dc-854c-b3840afdbd45\") " pod="openstack/rabbitmq-server-0" Oct 13 17:43:41 crc kubenswrapper[4720]: I1013 17:43:41.496308 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"234df878-2921-45dc-854c-b3840afdbd45\") " pod="openstack/rabbitmq-server-0" Oct 13 17:43:41 crc kubenswrapper[4720]: I1013 17:43:41.611561 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 13 17:43:41 crc kubenswrapper[4720]: I1013 17:43:41.878881 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 13 17:43:41 crc kubenswrapper[4720]: I1013 17:43:41.961307 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/76c17d7a-8441-4b23-839b-f95ac54a6b24-server-conf\") pod \"76c17d7a-8441-4b23-839b-f95ac54a6b24\" (UID: \"76c17d7a-8441-4b23-839b-f95ac54a6b24\") " Oct 13 17:43:41 crc kubenswrapper[4720]: I1013 17:43:41.961363 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/76c17d7a-8441-4b23-839b-f95ac54a6b24-rabbitmq-plugins\") pod \"76c17d7a-8441-4b23-839b-f95ac54a6b24\" (UID: \"76c17d7a-8441-4b23-839b-f95ac54a6b24\") " Oct 13 17:43:41 crc kubenswrapper[4720]: I1013 17:43:41.961432 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/76c17d7a-8441-4b23-839b-f95ac54a6b24-pod-info\") pod \"76c17d7a-8441-4b23-839b-f95ac54a6b24\" (UID: \"76c17d7a-8441-4b23-839b-f95ac54a6b24\") " Oct 13 17:43:41 crc kubenswrapper[4720]: I1013 17:43:41.961479 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/76c17d7a-8441-4b23-839b-f95ac54a6b24-plugins-conf\") pod \"76c17d7a-8441-4b23-839b-f95ac54a6b24\" (UID: \"76c17d7a-8441-4b23-839b-f95ac54a6b24\") " Oct 13 17:43:41 crc kubenswrapper[4720]: I1013 17:43:41.961557 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/76c17d7a-8441-4b23-839b-f95ac54a6b24-erlang-cookie-secret\") pod \"76c17d7a-8441-4b23-839b-f95ac54a6b24\" (UID: \"76c17d7a-8441-4b23-839b-f95ac54a6b24\") " Oct 13 17:43:41 crc kubenswrapper[4720]: I1013 17:43:41.961573 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"76c17d7a-8441-4b23-839b-f95ac54a6b24\" (UID: \"76c17d7a-8441-4b23-839b-f95ac54a6b24\") " Oct 13 17:43:41 crc kubenswrapper[4720]: I1013 17:43:41.961617 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/76c17d7a-8441-4b23-839b-f95ac54a6b24-rabbitmq-confd\") pod \"76c17d7a-8441-4b23-839b-f95ac54a6b24\" (UID: \"76c17d7a-8441-4b23-839b-f95ac54a6b24\") " Oct 13 17:43:41 crc kubenswrapper[4720]: I1013 17:43:41.961641 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/76c17d7a-8441-4b23-839b-f95ac54a6b24-rabbitmq-tls\") pod \"76c17d7a-8441-4b23-839b-f95ac54a6b24\" (UID: \"76c17d7a-8441-4b23-839b-f95ac54a6b24\") " Oct 13 17:43:41 crc kubenswrapper[4720]: I1013 17:43:41.961669 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p6dc9\" (UniqueName: \"kubernetes.io/projected/76c17d7a-8441-4b23-839b-f95ac54a6b24-kube-api-access-p6dc9\") pod \"76c17d7a-8441-4b23-839b-f95ac54a6b24\" (UID: \"76c17d7a-8441-4b23-839b-f95ac54a6b24\") " Oct 13 17:43:41 crc kubenswrapper[4720]: I1013 17:43:41.961702 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/76c17d7a-8441-4b23-839b-f95ac54a6b24-rabbitmq-erlang-cookie\") pod \"76c17d7a-8441-4b23-839b-f95ac54a6b24\" (UID: \"76c17d7a-8441-4b23-839b-f95ac54a6b24\") " Oct 13 17:43:41 crc kubenswrapper[4720]: I1013 17:43:41.961724 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/76c17d7a-8441-4b23-839b-f95ac54a6b24-config-data\") pod \"76c17d7a-8441-4b23-839b-f95ac54a6b24\" (UID: \"76c17d7a-8441-4b23-839b-f95ac54a6b24\") " Oct 13 17:43:41 crc kubenswrapper[4720]: I1013 17:43:41.964067 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76c17d7a-8441-4b23-839b-f95ac54a6b24-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "76c17d7a-8441-4b23-839b-f95ac54a6b24" (UID: "76c17d7a-8441-4b23-839b-f95ac54a6b24"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 17:43:41 crc kubenswrapper[4720]: I1013 17:43:41.970087 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76c17d7a-8441-4b23-839b-f95ac54a6b24-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "76c17d7a-8441-4b23-839b-f95ac54a6b24" (UID: "76c17d7a-8441-4b23-839b-f95ac54a6b24"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:43:41 crc kubenswrapper[4720]: I1013 17:43:41.970654 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76c17d7a-8441-4b23-839b-f95ac54a6b24-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "76c17d7a-8441-4b23-839b-f95ac54a6b24" (UID: "76c17d7a-8441-4b23-839b-f95ac54a6b24"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:43:41 crc kubenswrapper[4720]: I1013 17:43:41.972342 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76c17d7a-8441-4b23-839b-f95ac54a6b24-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "76c17d7a-8441-4b23-839b-f95ac54a6b24" (UID: "76c17d7a-8441-4b23-839b-f95ac54a6b24"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 17:43:41 crc kubenswrapper[4720]: I1013 17:43:41.976328 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/76c17d7a-8441-4b23-839b-f95ac54a6b24-pod-info" (OuterVolumeSpecName: "pod-info") pod "76c17d7a-8441-4b23-839b-f95ac54a6b24" (UID: "76c17d7a-8441-4b23-839b-f95ac54a6b24"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Oct 13 17:43:41 crc kubenswrapper[4720]: I1013 17:43:41.980245 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "persistence") pod "76c17d7a-8441-4b23-839b-f95ac54a6b24" (UID: "76c17d7a-8441-4b23-839b-f95ac54a6b24"). InnerVolumeSpecName "local-storage08-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 13 17:43:41 crc kubenswrapper[4720]: I1013 17:43:41.988505 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76c17d7a-8441-4b23-839b-f95ac54a6b24-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "76c17d7a-8441-4b23-839b-f95ac54a6b24" (UID: "76c17d7a-8441-4b23-839b-f95ac54a6b24"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:43:41 crc kubenswrapper[4720]: I1013 17:43:41.992854 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76c17d7a-8441-4b23-839b-f95ac54a6b24-kube-api-access-p6dc9" (OuterVolumeSpecName: "kube-api-access-p6dc9") pod "76c17d7a-8441-4b23-839b-f95ac54a6b24" (UID: "76c17d7a-8441-4b23-839b-f95ac54a6b24"). InnerVolumeSpecName "kube-api-access-p6dc9". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.013995 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76c17d7a-8441-4b23-839b-f95ac54a6b24-config-data" (OuterVolumeSpecName: "config-data") pod "76c17d7a-8441-4b23-839b-f95ac54a6b24" (UID: "76c17d7a-8441-4b23-839b-f95ac54a6b24"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.064203 4720 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/76c17d7a-8441-4b23-839b-f95ac54a6b24-plugins-conf\") on node \"crc\" DevicePath \"\"" Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.064233 4720 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/76c17d7a-8441-4b23-839b-f95ac54a6b24-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.064263 4720 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.064273 4720 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/76c17d7a-8441-4b23-839b-f95ac54a6b24-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.064282 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p6dc9\" (UniqueName: \"kubernetes.io/projected/76c17d7a-8441-4b23-839b-f95ac54a6b24-kube-api-access-p6dc9\") on node \"crc\" DevicePath \"\"" Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.064294 4720 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/76c17d7a-8441-4b23-839b-f95ac54a6b24-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.064302 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/76c17d7a-8441-4b23-839b-f95ac54a6b24-config-data\") on node \"crc\" DevicePath \"\"" Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.064310 4720 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/76c17d7a-8441-4b23-839b-f95ac54a6b24-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Oct 
13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.064319 4720 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/76c17d7a-8441-4b23-839b-f95ac54a6b24-pod-info\") on node \"crc\" DevicePath \"\"" Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.071322 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76c17d7a-8441-4b23-839b-f95ac54a6b24-server-conf" (OuterVolumeSpecName: "server-conf") pod "76c17d7a-8441-4b23-839b-f95ac54a6b24" (UID: "76c17d7a-8441-4b23-839b-f95ac54a6b24"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.085835 4720 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.107769 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76c17d7a-8441-4b23-839b-f95ac54a6b24-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "76c17d7a-8441-4b23-839b-f95ac54a6b24" (UID: "76c17d7a-8441-4b23-839b-f95ac54a6b24"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.138886 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.166250 4720 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.166450 4720 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/76c17d7a-8441-4b23-839b-f95ac54a6b24-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.166504 4720 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/76c17d7a-8441-4b23-839b-f95ac54a6b24-server-conf\") on node \"crc\" DevicePath \"\"" Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.189405 4720 generic.go:334] "Generic (PLEG): container finished" podID="76c17d7a-8441-4b23-839b-f95ac54a6b24" containerID="31c995078368b5b95f3c3877081d2472920be4f1127b2d1afc577d2f63a1c6c9" exitCode=0 Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.189459 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.189499 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"76c17d7a-8441-4b23-839b-f95ac54a6b24","Type":"ContainerDied","Data":"31c995078368b5b95f3c3877081d2472920be4f1127b2d1afc577d2f63a1c6c9"} Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.189552 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"76c17d7a-8441-4b23-839b-f95ac54a6b24","Type":"ContainerDied","Data":"0ede3eaf7711748fcd53cb9d82922e79e784bf3efbfaaf169015e022023bc446"} Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.189573 4720 scope.go:117] "RemoveContainer" containerID="31c995078368b5b95f3c3877081d2472920be4f1127b2d1afc577d2f63a1c6c9" Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.193098 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"234df878-2921-45dc-854c-b3840afdbd45","Type":"ContainerStarted","Data":"0f590ed094de27c96b4e52e2963fb60642fa9771363966257acced5d69a41809"} Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.208011 4720 scope.go:117] "RemoveContainer" containerID="b82320f15b735d12b0964ce3b17f86b2a98cb4eabc1eed44f6074ca7c0dcf4e4" Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.228094 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.244936 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.252005 4720 scope.go:117] "RemoveContainer" containerID="31c995078368b5b95f3c3877081d2472920be4f1127b2d1afc577d2f63a1c6c9" Oct 13 17:43:42 crc kubenswrapper[4720]: E1013 17:43:42.252531 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31c995078368b5b95f3c3877081d2472920be4f1127b2d1afc577d2f63a1c6c9\": container with ID starting with 31c995078368b5b95f3c3877081d2472920be4f1127b2d1afc577d2f63a1c6c9 not found: ID does not exist" containerID="31c995078368b5b95f3c3877081d2472920be4f1127b2d1afc577d2f63a1c6c9" Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.253681 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31c995078368b5b95f3c3877081d2472920be4f1127b2d1afc577d2f63a1c6c9"} err="failed to get container status \"31c995078368b5b95f3c3877081d2472920be4f1127b2d1afc577d2f63a1c6c9\": rpc error: code = NotFound desc = could not find container \"31c995078368b5b95f3c3877081d2472920be4f1127b2d1afc577d2f63a1c6c9\": container with ID starting with 31c995078368b5b95f3c3877081d2472920be4f1127b2d1afc577d2f63a1c6c9 not found: ID does not exist" Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.253703 4720 scope.go:117] "RemoveContainer" containerID="b82320f15b735d12b0964ce3b17f86b2a98cb4eabc1eed44f6074ca7c0dcf4e4" Oct 13 17:43:42 crc kubenswrapper[4720]: E1013 17:43:42.254091 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b82320f15b735d12b0964ce3b17f86b2a98cb4eabc1eed44f6074ca7c0dcf4e4\": container with ID starting with b82320f15b735d12b0964ce3b17f86b2a98cb4eabc1eed44f6074ca7c0dcf4e4 not found: ID does not exist" containerID="b82320f15b735d12b0964ce3b17f86b2a98cb4eabc1eed44f6074ca7c0dcf4e4" Oct 13 
17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.254138 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b82320f15b735d12b0964ce3b17f86b2a98cb4eabc1eed44f6074ca7c0dcf4e4"} err="failed to get container status \"b82320f15b735d12b0964ce3b17f86b2a98cb4eabc1eed44f6074ca7c0dcf4e4\": rpc error: code = NotFound desc = could not find container \"b82320f15b735d12b0964ce3b17f86b2a98cb4eabc1eed44f6074ca7c0dcf4e4\": container with ID starting with b82320f15b735d12b0964ce3b17f86b2a98cb4eabc1eed44f6074ca7c0dcf4e4 not found: ID does not exist" Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.256225 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 13 17:43:42 crc kubenswrapper[4720]: E1013 17:43:42.256688 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76c17d7a-8441-4b23-839b-f95ac54a6b24" containerName="setup-container" Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.256707 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="76c17d7a-8441-4b23-839b-f95ac54a6b24" containerName="setup-container" Oct 13 17:43:42 crc kubenswrapper[4720]: E1013 17:43:42.256727 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76c17d7a-8441-4b23-839b-f95ac54a6b24" containerName="rabbitmq" Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.256734 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="76c17d7a-8441-4b23-839b-f95ac54a6b24" containerName="rabbitmq" Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.256918 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="76c17d7a-8441-4b23-839b-f95ac54a6b24" containerName="rabbitmq" Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.257917 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.262956 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.263160 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.263285 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.263406 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.263593 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.263647 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-cvg4z" Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.263750 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.297226 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.369733 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0bc24914-0bdd-4fa7-a859-a4d4f06f0455-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bc24914-0bdd-4fa7-a859-a4d4f06f0455\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.369793 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0bc24914-0bdd-4fa7-a859-a4d4f06f0455-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bc24914-0bdd-4fa7-a859-a4d4f06f0455\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.369827 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0bc24914-0bdd-4fa7-a859-a4d4f06f0455-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bc24914-0bdd-4fa7-a859-a4d4f06f0455\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.369859 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0bc24914-0bdd-4fa7-a859-a4d4f06f0455-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bc24914-0bdd-4fa7-a859-a4d4f06f0455\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.369877 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0bc24914-0bdd-4fa7-a859-a4d4f06f0455-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bc24914-0bdd-4fa7-a859-a4d4f06f0455\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.369913 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0bc24914-0bdd-4fa7-a859-a4d4f06f0455-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bc24914-0bdd-4fa7-a859-a4d4f06f0455\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.369945 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0bc24914-0bdd-4fa7-a859-a4d4f06f0455-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bc24914-0bdd-4fa7-a859-a4d4f06f0455\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.369991 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bc24914-0bdd-4fa7-a859-a4d4f06f0455\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.370025 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0bc24914-0bdd-4fa7-a859-a4d4f06f0455-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bc24914-0bdd-4fa7-a859-a4d4f06f0455\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.370042 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lbbr\" (UniqueName: \"kubernetes.io/projected/0bc24914-0bdd-4fa7-a859-a4d4f06f0455-kube-api-access-6lbbr\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bc24914-0bdd-4fa7-a859-a4d4f06f0455\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.370058 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0bc24914-0bdd-4fa7-a859-a4d4f06f0455-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bc24914-0bdd-4fa7-a859-a4d4f06f0455\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.471157 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0bc24914-0bdd-4fa7-a859-a4d4f06f0455-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bc24914-0bdd-4fa7-a859-a4d4f06f0455\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.471390 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0bc24914-0bdd-4fa7-a859-a4d4f06f0455-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bc24914-0bdd-4fa7-a859-a4d4f06f0455\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.471489 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0bc24914-0bdd-4fa7-a859-a4d4f06f0455-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bc24914-0bdd-4fa7-a859-a4d4f06f0455\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.471576 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/0bc24914-0bdd-4fa7-a859-a4d4f06f0455-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bc24914-0bdd-4fa7-a859-a4d4f06f0455\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.471636 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0bc24914-0bdd-4fa7-a859-a4d4f06f0455-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bc24914-0bdd-4fa7-a859-a4d4f06f0455\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.471723 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0bc24914-0bdd-4fa7-a859-a4d4f06f0455-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bc24914-0bdd-4fa7-a859-a4d4f06f0455\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.471797 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0bc24914-0bdd-4fa7-a859-a4d4f06f0455-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bc24914-0bdd-4fa7-a859-a4d4f06f0455\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.471878 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bc24914-0bdd-4fa7-a859-a4d4f06f0455\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.471965 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0bc24914-0bdd-4fa7-a859-a4d4f06f0455-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bc24914-0bdd-4fa7-a859-a4d4f06f0455\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.472024 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lbbr\" (UniqueName: \"kubernetes.io/projected/0bc24914-0bdd-4fa7-a859-a4d4f06f0455-kube-api-access-6lbbr\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bc24914-0bdd-4fa7-a859-a4d4f06f0455\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.472086 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0bc24914-0bdd-4fa7-a859-a4d4f06f0455-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bc24914-0bdd-4fa7-a859-a4d4f06f0455\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.472656 4720 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bc24914-0bdd-4fa7-a859-a4d4f06f0455\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/rabbitmq-cell1-server-0" Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.472739 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0bc24914-0bdd-4fa7-a859-a4d4f06f0455-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"0bc24914-0bdd-4fa7-a859-a4d4f06f0455\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.473089 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0bc24914-0bdd-4fa7-a859-a4d4f06f0455-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bc24914-0bdd-4fa7-a859-a4d4f06f0455\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.473311 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0bc24914-0bdd-4fa7-a859-a4d4f06f0455-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bc24914-0bdd-4fa7-a859-a4d4f06f0455\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.473710 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0bc24914-0bdd-4fa7-a859-a4d4f06f0455-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bc24914-0bdd-4fa7-a859-a4d4f06f0455\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.473800 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0bc24914-0bdd-4fa7-a859-a4d4f06f0455-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bc24914-0bdd-4fa7-a859-a4d4f06f0455\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.477004 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0bc24914-0bdd-4fa7-a859-a4d4f06f0455-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bc24914-0bdd-4fa7-a859-a4d4f06f0455\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.477234 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0bc24914-0bdd-4fa7-a859-a4d4f06f0455-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bc24914-0bdd-4fa7-a859-a4d4f06f0455\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.477325 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0bc24914-0bdd-4fa7-a859-a4d4f06f0455-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bc24914-0bdd-4fa7-a859-a4d4f06f0455\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.477821 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0bc24914-0bdd-4fa7-a859-a4d4f06f0455-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bc24914-0bdd-4fa7-a859-a4d4f06f0455\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.488271 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lbbr\" (UniqueName: \"kubernetes.io/projected/0bc24914-0bdd-4fa7-a859-a4d4f06f0455-kube-api-access-6lbbr\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bc24914-0bdd-4fa7-a859-a4d4f06f0455\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.511694 4720 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"0bc24914-0bdd-4fa7-a859-a4d4f06f0455\") " pod="openstack/rabbitmq-cell1-server-0" Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.588851 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.795042 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-t2szn"] Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.796565 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67b789f86c-t2szn" Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.802570 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.820959 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-t2szn"] Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.879648 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/642eeea2-1300-4204-a1ef-1dd718e045b1-dns-svc\") pod \"dnsmasq-dns-67b789f86c-t2szn\" (UID: \"642eeea2-1300-4204-a1ef-1dd718e045b1\") " pod="openstack/dnsmasq-dns-67b789f86c-t2szn" Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.879703 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/642eeea2-1300-4204-a1ef-1dd718e045b1-ovsdbserver-nb\") pod \"dnsmasq-dns-67b789f86c-t2szn\" (UID: \"642eeea2-1300-4204-a1ef-1dd718e045b1\") " pod="openstack/dnsmasq-dns-67b789f86c-t2szn" Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.879743 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/642eeea2-1300-4204-a1ef-1dd718e045b1-ovsdbserver-sb\") pod \"dnsmasq-dns-67b789f86c-t2szn\" (UID: \"642eeea2-1300-4204-a1ef-1dd718e045b1\") " pod="openstack/dnsmasq-dns-67b789f86c-t2szn" Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.879954 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/642eeea2-1300-4204-a1ef-1dd718e045b1-openstack-edpm-ipam\") pod \"dnsmasq-dns-67b789f86c-t2szn\" (UID: \"642eeea2-1300-4204-a1ef-1dd718e045b1\") " pod="openstack/dnsmasq-dns-67b789f86c-t2szn" Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.880008 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/642eeea2-1300-4204-a1ef-1dd718e045b1-dns-swift-storage-0\") pod \"dnsmasq-dns-67b789f86c-t2szn\" (UID: \"642eeea2-1300-4204-a1ef-1dd718e045b1\") " pod="openstack/dnsmasq-dns-67b789f86c-t2szn" Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.880072 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/642eeea2-1300-4204-a1ef-1dd718e045b1-config\") pod \"dnsmasq-dns-67b789f86c-t2szn\" (UID: \"642eeea2-1300-4204-a1ef-1dd718e045b1\") " pod="openstack/dnsmasq-dns-67b789f86c-t2szn" Oct 13 
17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.880211 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6d4dn\" (UniqueName: \"kubernetes.io/projected/642eeea2-1300-4204-a1ef-1dd718e045b1-kube-api-access-6d4dn\") pod \"dnsmasq-dns-67b789f86c-t2szn\" (UID: \"642eeea2-1300-4204-a1ef-1dd718e045b1\") " pod="openstack/dnsmasq-dns-67b789f86c-t2szn" Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.981419 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/642eeea2-1300-4204-a1ef-1dd718e045b1-dns-svc\") pod \"dnsmasq-dns-67b789f86c-t2szn\" (UID: \"642eeea2-1300-4204-a1ef-1dd718e045b1\") " pod="openstack/dnsmasq-dns-67b789f86c-t2szn" Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.981475 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/642eeea2-1300-4204-a1ef-1dd718e045b1-ovsdbserver-nb\") pod \"dnsmasq-dns-67b789f86c-t2szn\" (UID: \"642eeea2-1300-4204-a1ef-1dd718e045b1\") " pod="openstack/dnsmasq-dns-67b789f86c-t2szn" Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.981505 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/642eeea2-1300-4204-a1ef-1dd718e045b1-ovsdbserver-sb\") pod \"dnsmasq-dns-67b789f86c-t2szn\" (UID: \"642eeea2-1300-4204-a1ef-1dd718e045b1\") " pod="openstack/dnsmasq-dns-67b789f86c-t2szn" Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.981552 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/642eeea2-1300-4204-a1ef-1dd718e045b1-openstack-edpm-ipam\") pod \"dnsmasq-dns-67b789f86c-t2szn\" (UID: \"642eeea2-1300-4204-a1ef-1dd718e045b1\") " pod="openstack/dnsmasq-dns-67b789f86c-t2szn" Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.981572 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/642eeea2-1300-4204-a1ef-1dd718e045b1-dns-swift-storage-0\") pod \"dnsmasq-dns-67b789f86c-t2szn\" (UID: \"642eeea2-1300-4204-a1ef-1dd718e045b1\") " pod="openstack/dnsmasq-dns-67b789f86c-t2szn" Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.981601 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/642eeea2-1300-4204-a1ef-1dd718e045b1-config\") pod \"dnsmasq-dns-67b789f86c-t2szn\" (UID: \"642eeea2-1300-4204-a1ef-1dd718e045b1\") " pod="openstack/dnsmasq-dns-67b789f86c-t2szn" Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.981642 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6d4dn\" (UniqueName: \"kubernetes.io/projected/642eeea2-1300-4204-a1ef-1dd718e045b1-kube-api-access-6d4dn\") pod \"dnsmasq-dns-67b789f86c-t2szn\" (UID: \"642eeea2-1300-4204-a1ef-1dd718e045b1\") " pod="openstack/dnsmasq-dns-67b789f86c-t2szn" Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.982586 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/642eeea2-1300-4204-a1ef-1dd718e045b1-ovsdbserver-nb\") pod \"dnsmasq-dns-67b789f86c-t2szn\" (UID: \"642eeea2-1300-4204-a1ef-1dd718e045b1\") " pod="openstack/dnsmasq-dns-67b789f86c-t2szn" Oct 13 17:43:42 crc 
kubenswrapper[4720]: I1013 17:43:42.982596 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/642eeea2-1300-4204-a1ef-1dd718e045b1-dns-svc\") pod \"dnsmasq-dns-67b789f86c-t2szn\" (UID: \"642eeea2-1300-4204-a1ef-1dd718e045b1\") " pod="openstack/dnsmasq-dns-67b789f86c-t2szn" Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.982730 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/642eeea2-1300-4204-a1ef-1dd718e045b1-config\") pod \"dnsmasq-dns-67b789f86c-t2szn\" (UID: \"642eeea2-1300-4204-a1ef-1dd718e045b1\") " pod="openstack/dnsmasq-dns-67b789f86c-t2szn" Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.982867 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/642eeea2-1300-4204-a1ef-1dd718e045b1-openstack-edpm-ipam\") pod \"dnsmasq-dns-67b789f86c-t2szn\" (UID: \"642eeea2-1300-4204-a1ef-1dd718e045b1\") " pod="openstack/dnsmasq-dns-67b789f86c-t2szn" Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.983353 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/642eeea2-1300-4204-a1ef-1dd718e045b1-ovsdbserver-sb\") pod \"dnsmasq-dns-67b789f86c-t2szn\" (UID: \"642eeea2-1300-4204-a1ef-1dd718e045b1\") " pod="openstack/dnsmasq-dns-67b789f86c-t2szn" Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.983401 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/642eeea2-1300-4204-a1ef-1dd718e045b1-dns-swift-storage-0\") pod \"dnsmasq-dns-67b789f86c-t2szn\" (UID: \"642eeea2-1300-4204-a1ef-1dd718e045b1\") " pod="openstack/dnsmasq-dns-67b789f86c-t2szn" Oct 13 17:43:42 crc kubenswrapper[4720]: I1013 17:43:42.999728 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6d4dn\" (UniqueName: \"kubernetes.io/projected/642eeea2-1300-4204-a1ef-1dd718e045b1-kube-api-access-6d4dn\") pod \"dnsmasq-dns-67b789f86c-t2szn\" (UID: \"642eeea2-1300-4204-a1ef-1dd718e045b1\") " pod="openstack/dnsmasq-dns-67b789f86c-t2szn" Oct 13 17:43:43 crc kubenswrapper[4720]: I1013 17:43:43.104446 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 13 17:43:43 crc kubenswrapper[4720]: I1013 17:43:43.141020 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67b789f86c-t2szn" Oct 13 17:43:43 crc kubenswrapper[4720]: I1013 17:43:43.183880 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76c17d7a-8441-4b23-839b-f95ac54a6b24" path="/var/lib/kubelet/pods/76c17d7a-8441-4b23-839b-f95ac54a6b24/volumes" Oct 13 17:43:43 crc kubenswrapper[4720]: I1013 17:43:43.185259 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af59309d-fcea-47ce-85b5-0eafbf780d08" path="/var/lib/kubelet/pods/af59309d-fcea-47ce-85b5-0eafbf780d08/volumes" Oct 13 17:43:43 crc kubenswrapper[4720]: W1013 17:43:43.212825 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0bc24914_0bdd_4fa7_a859_a4d4f06f0455.slice/crio-d358de6f55a046f453bb58da4787c6ed1ae8cbac9d0d60f518f4d355dd0b7e9a WatchSource:0}: Error finding container d358de6f55a046f453bb58da4787c6ed1ae8cbac9d0d60f518f4d355dd0b7e9a: Status 404 returned error can't find the container with id d358de6f55a046f453bb58da4787c6ed1ae8cbac9d0d60f518f4d355dd0b7e9a Oct 13 17:43:43 crc kubenswrapper[4720]: I1013 17:43:43.825666 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-t2szn"] Oct 13 17:43:44 crc kubenswrapper[4720]: I1013 17:43:44.236490 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0bc24914-0bdd-4fa7-a859-a4d4f06f0455","Type":"ContainerStarted","Data":"d358de6f55a046f453bb58da4787c6ed1ae8cbac9d0d60f518f4d355dd0b7e9a"} Oct 13 17:43:44 crc kubenswrapper[4720]: I1013 17:43:44.239398 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"234df878-2921-45dc-854c-b3840afdbd45","Type":"ContainerStarted","Data":"55c063df5588e2a6409f4ff2b497618f0ae0a03e9485caeebc6e06a6a2218c7f"} Oct 13 17:43:44 crc kubenswrapper[4720]: I1013 17:43:44.242795 4720 generic.go:334] "Generic (PLEG): container finished" podID="642eeea2-1300-4204-a1ef-1dd718e045b1" containerID="7ad370c0174225ceb32aba0c9c1fa9d3179d577d86990afd73cc288ce7c04777" exitCode=0 Oct 13 17:43:44 crc kubenswrapper[4720]: I1013 17:43:44.242825 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67b789f86c-t2szn" event={"ID":"642eeea2-1300-4204-a1ef-1dd718e045b1","Type":"ContainerDied","Data":"7ad370c0174225ceb32aba0c9c1fa9d3179d577d86990afd73cc288ce7c04777"} Oct 13 17:43:44 crc kubenswrapper[4720]: I1013 17:43:44.242841 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67b789f86c-t2szn" event={"ID":"642eeea2-1300-4204-a1ef-1dd718e045b1","Type":"ContainerStarted","Data":"874005d034497d895bb738c531a4901d8fbcaf9718c49f46dad7537a5a86badb"} Oct 13 17:43:45 crc kubenswrapper[4720]: I1013 17:43:45.265733 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67b789f86c-t2szn" event={"ID":"642eeea2-1300-4204-a1ef-1dd718e045b1","Type":"ContainerStarted","Data":"3709571105c309180427f073ce0f9330d14ed0ee103563536cb1c1408e8e9c82"} Oct 13 17:43:45 crc kubenswrapper[4720]: I1013 17:43:45.268940 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-67b789f86c-t2szn" Oct 13 17:43:45 crc kubenswrapper[4720]: I1013 17:43:45.275568 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"0bc24914-0bdd-4fa7-a859-a4d4f06f0455","Type":"ContainerStarted","Data":"494fe785ad8c54ba2dc25510339534cbbcab5031224745f8b0db3fd42cf05b86"} Oct 13 17:43:45 crc kubenswrapper[4720]: I1013 17:43:45.296302 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-67b789f86c-t2szn" podStartSLOduration=3.2962765210000002 podStartE2EDuration="3.296276521s" podCreationTimestamp="2025-10-13 17:43:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:43:45.288289445 +0000 UTC m=+1170.745539617" watchObservedRunningTime="2025-10-13 17:43:45.296276521 +0000 UTC m=+1170.753526663" Oct 13 17:43:53 crc kubenswrapper[4720]: I1013 17:43:53.143496 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-67b789f86c-t2szn" Oct 13 17:43:53 crc kubenswrapper[4720]: I1013 17:43:53.208422 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-c5s42"] Oct 13 17:43:53 crc kubenswrapper[4720]: I1013 17:43:53.208637 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-59cf4bdb65-c5s42" podUID="f91af118-9710-48b0-893b-c41d53a4088b" containerName="dnsmasq-dns" containerID="cri-o://4718bae9e453353ec74a07590e52d28c08ac31f15c6f0644503c356f0148c20f" gracePeriod=10 Oct 13 17:43:53 crc kubenswrapper[4720]: I1013 17:43:53.361617 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cb6ffcf87-wvm8b"] Oct 13 17:43:53 crc kubenswrapper[4720]: I1013 17:43:53.363980 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cb6ffcf87-wvm8b" Oct 13 17:43:53 crc kubenswrapper[4720]: I1013 17:43:53.385219 4720 generic.go:334] "Generic (PLEG): container finished" podID="f91af118-9710-48b0-893b-c41d53a4088b" containerID="4718bae9e453353ec74a07590e52d28c08ac31f15c6f0644503c356f0148c20f" exitCode=0 Oct 13 17:43:53 crc kubenswrapper[4720]: I1013 17:43:53.385258 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-c5s42" event={"ID":"f91af118-9710-48b0-893b-c41d53a4088b","Type":"ContainerDied","Data":"4718bae9e453353ec74a07590e52d28c08ac31f15c6f0644503c356f0148c20f"} Oct 13 17:43:53 crc kubenswrapper[4720]: I1013 17:43:53.386684 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cb6ffcf87-wvm8b"] Oct 13 17:43:53 crc kubenswrapper[4720]: I1013 17:43:53.523429 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/323cdd25-bf01-4cf0-8ccc-7dbc90581afd-dns-svc\") pod \"dnsmasq-dns-cb6ffcf87-wvm8b\" (UID: \"323cdd25-bf01-4cf0-8ccc-7dbc90581afd\") " pod="openstack/dnsmasq-dns-cb6ffcf87-wvm8b" Oct 13 17:43:53 crc kubenswrapper[4720]: I1013 17:43:53.523484 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5h9j4\" (UniqueName: \"kubernetes.io/projected/323cdd25-bf01-4cf0-8ccc-7dbc90581afd-kube-api-access-5h9j4\") pod \"dnsmasq-dns-cb6ffcf87-wvm8b\" (UID: \"323cdd25-bf01-4cf0-8ccc-7dbc90581afd\") " pod="openstack/dnsmasq-dns-cb6ffcf87-wvm8b" Oct 13 17:43:53 crc kubenswrapper[4720]: I1013 17:43:53.523595 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/323cdd25-bf01-4cf0-8ccc-7dbc90581afd-dns-swift-storage-0\") pod \"dnsmasq-dns-cb6ffcf87-wvm8b\" (UID: \"323cdd25-bf01-4cf0-8ccc-7dbc90581afd\") " pod="openstack/dnsmasq-dns-cb6ffcf87-wvm8b" Oct 13 17:43:53 crc kubenswrapper[4720]: I1013 17:43:53.523900 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/323cdd25-bf01-4cf0-8ccc-7dbc90581afd-ovsdbserver-sb\") pod \"dnsmasq-dns-cb6ffcf87-wvm8b\" (UID: \"323cdd25-bf01-4cf0-8ccc-7dbc90581afd\") " pod="openstack/dnsmasq-dns-cb6ffcf87-wvm8b" Oct 13 17:43:53 crc kubenswrapper[4720]: I1013 17:43:53.524020 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/323cdd25-bf01-4cf0-8ccc-7dbc90581afd-ovsdbserver-nb\") pod \"dnsmasq-dns-cb6ffcf87-wvm8b\" (UID: \"323cdd25-bf01-4cf0-8ccc-7dbc90581afd\") " pod="openstack/dnsmasq-dns-cb6ffcf87-wvm8b" Oct 13 17:43:53 crc kubenswrapper[4720]: I1013 17:43:53.524099 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/323cdd25-bf01-4cf0-8ccc-7dbc90581afd-openstack-edpm-ipam\") pod \"dnsmasq-dns-cb6ffcf87-wvm8b\" (UID: \"323cdd25-bf01-4cf0-8ccc-7dbc90581afd\") " pod="openstack/dnsmasq-dns-cb6ffcf87-wvm8b" Oct 13 17:43:53 crc kubenswrapper[4720]: I1013 17:43:53.524350 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/323cdd25-bf01-4cf0-8ccc-7dbc90581afd-config\") pod \"dnsmasq-dns-cb6ffcf87-wvm8b\" (UID: \"323cdd25-bf01-4cf0-8ccc-7dbc90581afd\") " pod="openstack/dnsmasq-dns-cb6ffcf87-wvm8b" Oct 13 17:43:53 crc kubenswrapper[4720]: I1013 17:43:53.626141 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/323cdd25-bf01-4cf0-8ccc-7dbc90581afd-dns-svc\") pod \"dnsmasq-dns-cb6ffcf87-wvm8b\" (UID: \"323cdd25-bf01-4cf0-8ccc-7dbc90581afd\") " pod="openstack/dnsmasq-dns-cb6ffcf87-wvm8b" Oct 13 17:43:53 crc kubenswrapper[4720]: I1013 17:43:53.626201 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5h9j4\" (UniqueName: \"kubernetes.io/projected/323cdd25-bf01-4cf0-8ccc-7dbc90581afd-kube-api-access-5h9j4\") pod \"dnsmasq-dns-cb6ffcf87-wvm8b\" (UID: \"323cdd25-bf01-4cf0-8ccc-7dbc90581afd\") " pod="openstack/dnsmasq-dns-cb6ffcf87-wvm8b" Oct 13 17:43:53 crc kubenswrapper[4720]: I1013 17:43:53.626246 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/323cdd25-bf01-4cf0-8ccc-7dbc90581afd-dns-swift-storage-0\") pod \"dnsmasq-dns-cb6ffcf87-wvm8b\" (UID: \"323cdd25-bf01-4cf0-8ccc-7dbc90581afd\") " pod="openstack/dnsmasq-dns-cb6ffcf87-wvm8b" Oct 13 17:43:53 crc kubenswrapper[4720]: I1013 17:43:53.626299 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/323cdd25-bf01-4cf0-8ccc-7dbc90581afd-ovsdbserver-sb\") pod \"dnsmasq-dns-cb6ffcf87-wvm8b\" (UID: \"323cdd25-bf01-4cf0-8ccc-7dbc90581afd\") " pod="openstack/dnsmasq-dns-cb6ffcf87-wvm8b" Oct 13 17:43:53 crc kubenswrapper[4720]: I1013 17:43:53.626332 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/323cdd25-bf01-4cf0-8ccc-7dbc90581afd-ovsdbserver-nb\") pod \"dnsmasq-dns-cb6ffcf87-wvm8b\" (UID: \"323cdd25-bf01-4cf0-8ccc-7dbc90581afd\") " pod="openstack/dnsmasq-dns-cb6ffcf87-wvm8b" Oct 13 17:43:53 crc kubenswrapper[4720]: I1013 17:43:53.626357 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/323cdd25-bf01-4cf0-8ccc-7dbc90581afd-openstack-edpm-ipam\") pod \"dnsmasq-dns-cb6ffcf87-wvm8b\" (UID: \"323cdd25-bf01-4cf0-8ccc-7dbc90581afd\") " pod="openstack/dnsmasq-dns-cb6ffcf87-wvm8b" Oct 13 17:43:53 crc kubenswrapper[4720]: I1013 17:43:53.626396 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/323cdd25-bf01-4cf0-8ccc-7dbc90581afd-config\") pod \"dnsmasq-dns-cb6ffcf87-wvm8b\" (UID: \"323cdd25-bf01-4cf0-8ccc-7dbc90581afd\") " pod="openstack/dnsmasq-dns-cb6ffcf87-wvm8b" Oct 13 17:43:53 crc kubenswrapper[4720]: I1013 17:43:53.627244 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/323cdd25-bf01-4cf0-8ccc-7dbc90581afd-config\") pod \"dnsmasq-dns-cb6ffcf87-wvm8b\" (UID: \"323cdd25-bf01-4cf0-8ccc-7dbc90581afd\") " pod="openstack/dnsmasq-dns-cb6ffcf87-wvm8b" Oct 13 17:43:53 crc kubenswrapper[4720]: I1013 17:43:53.627797 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/323cdd25-bf01-4cf0-8ccc-7dbc90581afd-dns-svc\") pod \"dnsmasq-dns-cb6ffcf87-wvm8b\" (UID: \"323cdd25-bf01-4cf0-8ccc-7dbc90581afd\") " pod="openstack/dnsmasq-dns-cb6ffcf87-wvm8b" Oct 13 17:43:53 crc kubenswrapper[4720]: I1013 17:43:53.628274 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/323cdd25-bf01-4cf0-8ccc-7dbc90581afd-ovsdbserver-sb\") pod \"dnsmasq-dns-cb6ffcf87-wvm8b\" (UID: \"323cdd25-bf01-4cf0-8ccc-7dbc90581afd\") " pod="openstack/dnsmasq-dns-cb6ffcf87-wvm8b" Oct 13 17:43:53 crc kubenswrapper[4720]: I1013 17:43:53.628375 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/323cdd25-bf01-4cf0-8ccc-7dbc90581afd-ovsdbserver-nb\") pod \"dnsmasq-dns-cb6ffcf87-wvm8b\" (UID: \"323cdd25-bf01-4cf0-8ccc-7dbc90581afd\") " pod="openstack/dnsmasq-dns-cb6ffcf87-wvm8b" Oct 13 17:43:53 crc kubenswrapper[4720]: I1013 17:43:53.628910 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/323cdd25-bf01-4cf0-8ccc-7dbc90581afd-openstack-edpm-ipam\") pod \"dnsmasq-dns-cb6ffcf87-wvm8b\" (UID: \"323cdd25-bf01-4cf0-8ccc-7dbc90581afd\") " pod="openstack/dnsmasq-dns-cb6ffcf87-wvm8b" Oct 13 17:43:53 crc kubenswrapper[4720]: I1013 17:43:53.629265 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/323cdd25-bf01-4cf0-8ccc-7dbc90581afd-dns-swift-storage-0\") pod \"dnsmasq-dns-cb6ffcf87-wvm8b\" (UID: \"323cdd25-bf01-4cf0-8ccc-7dbc90581afd\") " pod="openstack/dnsmasq-dns-cb6ffcf87-wvm8b" Oct 13 17:43:53 crc kubenswrapper[4720]: I1013 17:43:53.684411 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5h9j4\" (UniqueName: \"kubernetes.io/projected/323cdd25-bf01-4cf0-8ccc-7dbc90581afd-kube-api-access-5h9j4\") pod \"dnsmasq-dns-cb6ffcf87-wvm8b\" 
(UID: \"323cdd25-bf01-4cf0-8ccc-7dbc90581afd\") " pod="openstack/dnsmasq-dns-cb6ffcf87-wvm8b" Oct 13 17:43:53 crc kubenswrapper[4720]: I1013 17:43:53.729565 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cb6ffcf87-wvm8b" Oct 13 17:43:53 crc kubenswrapper[4720]: I1013 17:43:53.861455 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59cf4bdb65-c5s42" Oct 13 17:43:54 crc kubenswrapper[4720]: I1013 17:43:54.035980 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gdwxr\" (UniqueName: \"kubernetes.io/projected/f91af118-9710-48b0-893b-c41d53a4088b-kube-api-access-gdwxr\") pod \"f91af118-9710-48b0-893b-c41d53a4088b\" (UID: \"f91af118-9710-48b0-893b-c41d53a4088b\") " Oct 13 17:43:54 crc kubenswrapper[4720]: I1013 17:43:54.036029 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f91af118-9710-48b0-893b-c41d53a4088b-dns-svc\") pod \"f91af118-9710-48b0-893b-c41d53a4088b\" (UID: \"f91af118-9710-48b0-893b-c41d53a4088b\") " Oct 13 17:43:54 crc kubenswrapper[4720]: I1013 17:43:54.036137 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f91af118-9710-48b0-893b-c41d53a4088b-ovsdbserver-sb\") pod \"f91af118-9710-48b0-893b-c41d53a4088b\" (UID: \"f91af118-9710-48b0-893b-c41d53a4088b\") " Oct 13 17:43:54 crc kubenswrapper[4720]: I1013 17:43:54.036162 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f91af118-9710-48b0-893b-c41d53a4088b-config\") pod \"f91af118-9710-48b0-893b-c41d53a4088b\" (UID: \"f91af118-9710-48b0-893b-c41d53a4088b\") " Oct 13 17:43:54 crc kubenswrapper[4720]: I1013 17:43:54.036242 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f91af118-9710-48b0-893b-c41d53a4088b-dns-swift-storage-0\") pod \"f91af118-9710-48b0-893b-c41d53a4088b\" (UID: \"f91af118-9710-48b0-893b-c41d53a4088b\") " Oct 13 17:43:54 crc kubenswrapper[4720]: I1013 17:43:54.036283 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f91af118-9710-48b0-893b-c41d53a4088b-ovsdbserver-nb\") pod \"f91af118-9710-48b0-893b-c41d53a4088b\" (UID: \"f91af118-9710-48b0-893b-c41d53a4088b\") " Oct 13 17:43:54 crc kubenswrapper[4720]: I1013 17:43:54.040850 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f91af118-9710-48b0-893b-c41d53a4088b-kube-api-access-gdwxr" (OuterVolumeSpecName: "kube-api-access-gdwxr") pod "f91af118-9710-48b0-893b-c41d53a4088b" (UID: "f91af118-9710-48b0-893b-c41d53a4088b"). InnerVolumeSpecName "kube-api-access-gdwxr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:43:54 crc kubenswrapper[4720]: I1013 17:43:54.083546 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f91af118-9710-48b0-893b-c41d53a4088b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f91af118-9710-48b0-893b-c41d53a4088b" (UID: "f91af118-9710-48b0-893b-c41d53a4088b"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:43:54 crc kubenswrapper[4720]: I1013 17:43:54.083635 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f91af118-9710-48b0-893b-c41d53a4088b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f91af118-9710-48b0-893b-c41d53a4088b" (UID: "f91af118-9710-48b0-893b-c41d53a4088b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:43:54 crc kubenswrapper[4720]: I1013 17:43:54.099212 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f91af118-9710-48b0-893b-c41d53a4088b-config" (OuterVolumeSpecName: "config") pod "f91af118-9710-48b0-893b-c41d53a4088b" (UID: "f91af118-9710-48b0-893b-c41d53a4088b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:43:54 crc kubenswrapper[4720]: I1013 17:43:54.103895 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f91af118-9710-48b0-893b-c41d53a4088b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f91af118-9710-48b0-893b-c41d53a4088b" (UID: "f91af118-9710-48b0-893b-c41d53a4088b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:43:54 crc kubenswrapper[4720]: I1013 17:43:54.108711 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f91af118-9710-48b0-893b-c41d53a4088b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f91af118-9710-48b0-893b-c41d53a4088b" (UID: "f91af118-9710-48b0-893b-c41d53a4088b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:43:54 crc kubenswrapper[4720]: I1013 17:43:54.139707 4720 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f91af118-9710-48b0-893b-c41d53a4088b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 13 17:43:54 crc kubenswrapper[4720]: I1013 17:43:54.139964 4720 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f91af118-9710-48b0-893b-c41d53a4088b-config\") on node \"crc\" DevicePath \"\"" Oct 13 17:43:54 crc kubenswrapper[4720]: I1013 17:43:54.139974 4720 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f91af118-9710-48b0-893b-c41d53a4088b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 13 17:43:54 crc kubenswrapper[4720]: I1013 17:43:54.139985 4720 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f91af118-9710-48b0-893b-c41d53a4088b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 13 17:43:54 crc kubenswrapper[4720]: I1013 17:43:54.139993 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gdwxr\" (UniqueName: \"kubernetes.io/projected/f91af118-9710-48b0-893b-c41d53a4088b-kube-api-access-gdwxr\") on node \"crc\" DevicePath \"\"" Oct 13 17:43:54 crc kubenswrapper[4720]: I1013 17:43:54.140003 4720 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f91af118-9710-48b0-893b-c41d53a4088b-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 13 17:43:54 crc kubenswrapper[4720]: I1013 17:43:54.190893 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cb6ffcf87-wvm8b"] Oct 13 
17:43:54 crc kubenswrapper[4720]: W1013 17:43:54.197758 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod323cdd25_bf01_4cf0_8ccc_7dbc90581afd.slice/crio-08cc1315059683437a68433ff75af1a6bde602e14bf8bbbe59a648830602280f WatchSource:0}: Error finding container 08cc1315059683437a68433ff75af1a6bde602e14bf8bbbe59a648830602280f: Status 404 returned error can't find the container with id 08cc1315059683437a68433ff75af1a6bde602e14bf8bbbe59a648830602280f Oct 13 17:43:54 crc kubenswrapper[4720]: I1013 17:43:54.403510 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-c5s42" event={"ID":"f91af118-9710-48b0-893b-c41d53a4088b","Type":"ContainerDied","Data":"6285df0b5f0c2a38f28c03b465aa5dda94c86e1ced0af80af277c5e0edac359b"} Oct 13 17:43:54 crc kubenswrapper[4720]: I1013 17:43:54.403585 4720 scope.go:117] "RemoveContainer" containerID="4718bae9e453353ec74a07590e52d28c08ac31f15c6f0644503c356f0148c20f" Oct 13 17:43:54 crc kubenswrapper[4720]: I1013 17:43:54.403780 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59cf4bdb65-c5s42" Oct 13 17:43:54 crc kubenswrapper[4720]: I1013 17:43:54.407726 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cb6ffcf87-wvm8b" event={"ID":"323cdd25-bf01-4cf0-8ccc-7dbc90581afd","Type":"ContainerStarted","Data":"08cc1315059683437a68433ff75af1a6bde602e14bf8bbbe59a648830602280f"} Oct 13 17:43:54 crc kubenswrapper[4720]: I1013 17:43:54.451532 4720 scope.go:117] "RemoveContainer" containerID="6285aaf3345dcb2ef68aca866c9c52b2268548fde172fb6bc2ad78576f0761ee" Oct 13 17:43:54 crc kubenswrapper[4720]: I1013 17:43:54.458605 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-c5s42"] Oct 13 17:43:54 crc kubenswrapper[4720]: I1013 17:43:54.467323 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-c5s42"] Oct 13 17:43:55 crc kubenswrapper[4720]: I1013 17:43:55.193174 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f91af118-9710-48b0-893b-c41d53a4088b" path="/var/lib/kubelet/pods/f91af118-9710-48b0-893b-c41d53a4088b/volumes" Oct 13 17:43:55 crc kubenswrapper[4720]: I1013 17:43:55.421136 4720 generic.go:334] "Generic (PLEG): container finished" podID="323cdd25-bf01-4cf0-8ccc-7dbc90581afd" containerID="ea24e0618c714e7e50357979ab5137d8f5f2794cf8ab629c2e5fdeba3a77a712" exitCode=0 Oct 13 17:43:55 crc kubenswrapper[4720]: I1013 17:43:55.421206 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cb6ffcf87-wvm8b" event={"ID":"323cdd25-bf01-4cf0-8ccc-7dbc90581afd","Type":"ContainerDied","Data":"ea24e0618c714e7e50357979ab5137d8f5f2794cf8ab629c2e5fdeba3a77a712"} Oct 13 17:43:56 crc kubenswrapper[4720]: I1013 17:43:56.429641 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cb6ffcf87-wvm8b" event={"ID":"323cdd25-bf01-4cf0-8ccc-7dbc90581afd","Type":"ContainerStarted","Data":"e166325e5ced8ecbbf2f95e594eef0bc923dd656bb5f9cf5ac7c1ed2ff69b38b"} Oct 13 17:43:56 crc kubenswrapper[4720]: I1013 17:43:56.430106 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-cb6ffcf87-wvm8b" Oct 13 17:43:56 crc kubenswrapper[4720]: I1013 17:43:56.464467 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-cb6ffcf87-wvm8b" 
podStartSLOduration=3.4644354010000002 podStartE2EDuration="3.464435401s" podCreationTimestamp="2025-10-13 17:43:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:43:56.45273108 +0000 UTC m=+1181.909981252" watchObservedRunningTime="2025-10-13 17:43:56.464435401 +0000 UTC m=+1181.921685583" Oct 13 17:44:03 crc kubenswrapper[4720]: I1013 17:44:03.732334 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-cb6ffcf87-wvm8b" Oct 13 17:44:03 crc kubenswrapper[4720]: I1013 17:44:03.805438 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-t2szn"] Oct 13 17:44:03 crc kubenswrapper[4720]: I1013 17:44:03.805805 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-67b789f86c-t2szn" podUID="642eeea2-1300-4204-a1ef-1dd718e045b1" containerName="dnsmasq-dns" containerID="cri-o://3709571105c309180427f073ce0f9330d14ed0ee103563536cb1c1408e8e9c82" gracePeriod=10 Oct 13 17:44:04 crc kubenswrapper[4720]: I1013 17:44:04.286044 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67b789f86c-t2szn" Oct 13 17:44:04 crc kubenswrapper[4720]: I1013 17:44:04.420926 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/642eeea2-1300-4204-a1ef-1dd718e045b1-ovsdbserver-sb\") pod \"642eeea2-1300-4204-a1ef-1dd718e045b1\" (UID: \"642eeea2-1300-4204-a1ef-1dd718e045b1\") " Oct 13 17:44:04 crc kubenswrapper[4720]: I1013 17:44:04.421276 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/642eeea2-1300-4204-a1ef-1dd718e045b1-ovsdbserver-nb\") pod \"642eeea2-1300-4204-a1ef-1dd718e045b1\" (UID: \"642eeea2-1300-4204-a1ef-1dd718e045b1\") " Oct 13 17:44:04 crc kubenswrapper[4720]: I1013 17:44:04.421328 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6d4dn\" (UniqueName: \"kubernetes.io/projected/642eeea2-1300-4204-a1ef-1dd718e045b1-kube-api-access-6d4dn\") pod \"642eeea2-1300-4204-a1ef-1dd718e045b1\" (UID: \"642eeea2-1300-4204-a1ef-1dd718e045b1\") " Oct 13 17:44:04 crc kubenswrapper[4720]: I1013 17:44:04.421494 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/642eeea2-1300-4204-a1ef-1dd718e045b1-openstack-edpm-ipam\") pod \"642eeea2-1300-4204-a1ef-1dd718e045b1\" (UID: \"642eeea2-1300-4204-a1ef-1dd718e045b1\") " Oct 13 17:44:04 crc kubenswrapper[4720]: I1013 17:44:04.421573 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/642eeea2-1300-4204-a1ef-1dd718e045b1-dns-svc\") pod \"642eeea2-1300-4204-a1ef-1dd718e045b1\" (UID: \"642eeea2-1300-4204-a1ef-1dd718e045b1\") " Oct 13 17:44:04 crc kubenswrapper[4720]: I1013 17:44:04.421752 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/642eeea2-1300-4204-a1ef-1dd718e045b1-config\") pod \"642eeea2-1300-4204-a1ef-1dd718e045b1\" (UID: \"642eeea2-1300-4204-a1ef-1dd718e045b1\") " Oct 13 17:44:04 crc kubenswrapper[4720]: I1013 17:44:04.421857 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/642eeea2-1300-4204-a1ef-1dd718e045b1-dns-swift-storage-0\") pod \"642eeea2-1300-4204-a1ef-1dd718e045b1\" (UID: \"642eeea2-1300-4204-a1ef-1dd718e045b1\") " Oct 13 17:44:04 crc kubenswrapper[4720]: I1013 17:44:04.428552 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/642eeea2-1300-4204-a1ef-1dd718e045b1-kube-api-access-6d4dn" (OuterVolumeSpecName: "kube-api-access-6d4dn") pod "642eeea2-1300-4204-a1ef-1dd718e045b1" (UID: "642eeea2-1300-4204-a1ef-1dd718e045b1"). InnerVolumeSpecName "kube-api-access-6d4dn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:44:04 crc kubenswrapper[4720]: I1013 17:44:04.479632 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/642eeea2-1300-4204-a1ef-1dd718e045b1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "642eeea2-1300-4204-a1ef-1dd718e045b1" (UID: "642eeea2-1300-4204-a1ef-1dd718e045b1"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:44:04 crc kubenswrapper[4720]: I1013 17:44:04.483415 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/642eeea2-1300-4204-a1ef-1dd718e045b1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "642eeea2-1300-4204-a1ef-1dd718e045b1" (UID: "642eeea2-1300-4204-a1ef-1dd718e045b1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:44:04 crc kubenswrapper[4720]: I1013 17:44:04.504077 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/642eeea2-1300-4204-a1ef-1dd718e045b1-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "642eeea2-1300-4204-a1ef-1dd718e045b1" (UID: "642eeea2-1300-4204-a1ef-1dd718e045b1"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:44:04 crc kubenswrapper[4720]: I1013 17:44:04.506336 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/642eeea2-1300-4204-a1ef-1dd718e045b1-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "642eeea2-1300-4204-a1ef-1dd718e045b1" (UID: "642eeea2-1300-4204-a1ef-1dd718e045b1"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:44:04 crc kubenswrapper[4720]: I1013 17:44:04.515486 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/642eeea2-1300-4204-a1ef-1dd718e045b1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "642eeea2-1300-4204-a1ef-1dd718e045b1" (UID: "642eeea2-1300-4204-a1ef-1dd718e045b1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:44:04 crc kubenswrapper[4720]: I1013 17:44:04.518041 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/642eeea2-1300-4204-a1ef-1dd718e045b1-config" (OuterVolumeSpecName: "config") pod "642eeea2-1300-4204-a1ef-1dd718e045b1" (UID: "642eeea2-1300-4204-a1ef-1dd718e045b1"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:44:04 crc kubenswrapper[4720]: I1013 17:44:04.524806 4720 generic.go:334] "Generic (PLEG): container finished" podID="642eeea2-1300-4204-a1ef-1dd718e045b1" containerID="3709571105c309180427f073ce0f9330d14ed0ee103563536cb1c1408e8e9c82" exitCode=0 Oct 13 17:44:04 crc kubenswrapper[4720]: I1013 17:44:04.524856 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67b789f86c-t2szn" Oct 13 17:44:04 crc kubenswrapper[4720]: I1013 17:44:04.524877 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67b789f86c-t2szn" event={"ID":"642eeea2-1300-4204-a1ef-1dd718e045b1","Type":"ContainerDied","Data":"3709571105c309180427f073ce0f9330d14ed0ee103563536cb1c1408e8e9c82"} Oct 13 17:44:04 crc kubenswrapper[4720]: I1013 17:44:04.525253 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67b789f86c-t2szn" event={"ID":"642eeea2-1300-4204-a1ef-1dd718e045b1","Type":"ContainerDied","Data":"874005d034497d895bb738c531a4901d8fbcaf9718c49f46dad7537a5a86badb"} Oct 13 17:44:04 crc kubenswrapper[4720]: I1013 17:44:04.525277 4720 scope.go:117] "RemoveContainer" containerID="3709571105c309180427f073ce0f9330d14ed0ee103563536cb1c1408e8e9c82" Oct 13 17:44:04 crc kubenswrapper[4720]: I1013 17:44:04.538364 4720 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/642eeea2-1300-4204-a1ef-1dd718e045b1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 13 17:44:04 crc kubenswrapper[4720]: I1013 17:44:04.538403 4720 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/642eeea2-1300-4204-a1ef-1dd718e045b1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 13 17:44:04 crc kubenswrapper[4720]: I1013 17:44:04.538416 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6d4dn\" (UniqueName: \"kubernetes.io/projected/642eeea2-1300-4204-a1ef-1dd718e045b1-kube-api-access-6d4dn\") on node \"crc\" DevicePath \"\"" Oct 13 17:44:04 crc kubenswrapper[4720]: I1013 17:44:04.538430 4720 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/642eeea2-1300-4204-a1ef-1dd718e045b1-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Oct 13 17:44:04 crc kubenswrapper[4720]: I1013 17:44:04.538442 4720 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/642eeea2-1300-4204-a1ef-1dd718e045b1-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 13 17:44:04 crc kubenswrapper[4720]: I1013 17:44:04.538455 4720 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/642eeea2-1300-4204-a1ef-1dd718e045b1-config\") on node \"crc\" DevicePath \"\"" Oct 13 17:44:04 crc kubenswrapper[4720]: I1013 17:44:04.538466 4720 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/642eeea2-1300-4204-a1ef-1dd718e045b1-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 13 17:44:04 crc kubenswrapper[4720]: I1013 17:44:04.581114 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-t2szn"] Oct 13 17:44:04 crc kubenswrapper[4720]: I1013 17:44:04.587578 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-t2szn"] Oct 13 17:44:04 crc kubenswrapper[4720]: 
I1013 17:44:04.591182 4720 scope.go:117] "RemoveContainer" containerID="7ad370c0174225ceb32aba0c9c1fa9d3179d577d86990afd73cc288ce7c04777" Oct 13 17:44:04 crc kubenswrapper[4720]: I1013 17:44:04.613541 4720 scope.go:117] "RemoveContainer" containerID="3709571105c309180427f073ce0f9330d14ed0ee103563536cb1c1408e8e9c82" Oct 13 17:44:04 crc kubenswrapper[4720]: E1013 17:44:04.614237 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3709571105c309180427f073ce0f9330d14ed0ee103563536cb1c1408e8e9c82\": container with ID starting with 3709571105c309180427f073ce0f9330d14ed0ee103563536cb1c1408e8e9c82 not found: ID does not exist" containerID="3709571105c309180427f073ce0f9330d14ed0ee103563536cb1c1408e8e9c82" Oct 13 17:44:04 crc kubenswrapper[4720]: I1013 17:44:04.614271 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3709571105c309180427f073ce0f9330d14ed0ee103563536cb1c1408e8e9c82"} err="failed to get container status \"3709571105c309180427f073ce0f9330d14ed0ee103563536cb1c1408e8e9c82\": rpc error: code = NotFound desc = could not find container \"3709571105c309180427f073ce0f9330d14ed0ee103563536cb1c1408e8e9c82\": container with ID starting with 3709571105c309180427f073ce0f9330d14ed0ee103563536cb1c1408e8e9c82 not found: ID does not exist" Oct 13 17:44:04 crc kubenswrapper[4720]: I1013 17:44:04.614295 4720 scope.go:117] "RemoveContainer" containerID="7ad370c0174225ceb32aba0c9c1fa9d3179d577d86990afd73cc288ce7c04777" Oct 13 17:44:04 crc kubenswrapper[4720]: E1013 17:44:04.614614 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ad370c0174225ceb32aba0c9c1fa9d3179d577d86990afd73cc288ce7c04777\": container with ID starting with 7ad370c0174225ceb32aba0c9c1fa9d3179d577d86990afd73cc288ce7c04777 not found: ID does not exist" containerID="7ad370c0174225ceb32aba0c9c1fa9d3179d577d86990afd73cc288ce7c04777" Oct 13 17:44:04 crc kubenswrapper[4720]: I1013 17:44:04.614654 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ad370c0174225ceb32aba0c9c1fa9d3179d577d86990afd73cc288ce7c04777"} err="failed to get container status \"7ad370c0174225ceb32aba0c9c1fa9d3179d577d86990afd73cc288ce7c04777\": rpc error: code = NotFound desc = could not find container \"7ad370c0174225ceb32aba0c9c1fa9d3179d577d86990afd73cc288ce7c04777\": container with ID starting with 7ad370c0174225ceb32aba0c9c1fa9d3179d577d86990afd73cc288ce7c04777 not found: ID does not exist" Oct 13 17:44:05 crc kubenswrapper[4720]: I1013 17:44:05.189778 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="642eeea2-1300-4204-a1ef-1dd718e045b1" path="/var/lib/kubelet/pods/642eeea2-1300-4204-a1ef-1dd718e045b1/volumes" Oct 13 17:44:16 crc kubenswrapper[4720]: I1013 17:44:16.682518 4720 generic.go:334] "Generic (PLEG): container finished" podID="234df878-2921-45dc-854c-b3840afdbd45" containerID="55c063df5588e2a6409f4ff2b497618f0ae0a03e9485caeebc6e06a6a2218c7f" exitCode=0 Oct 13 17:44:16 crc kubenswrapper[4720]: I1013 17:44:16.682646 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"234df878-2921-45dc-854c-b3840afdbd45","Type":"ContainerDied","Data":"55c063df5588e2a6409f4ff2b497618f0ae0a03e9485caeebc6e06a6a2218c7f"} Oct 13 17:44:16 crc kubenswrapper[4720]: I1013 17:44:16.959798 4720 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fj58w"] Oct 13 17:44:16 crc kubenswrapper[4720]: E1013 17:44:16.960281 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="642eeea2-1300-4204-a1ef-1dd718e045b1" containerName="dnsmasq-dns" Oct 13 17:44:16 crc kubenswrapper[4720]: I1013 17:44:16.960303 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="642eeea2-1300-4204-a1ef-1dd718e045b1" containerName="dnsmasq-dns" Oct 13 17:44:16 crc kubenswrapper[4720]: E1013 17:44:16.960335 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f91af118-9710-48b0-893b-c41d53a4088b" containerName="dnsmasq-dns" Oct 13 17:44:16 crc kubenswrapper[4720]: I1013 17:44:16.960343 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="f91af118-9710-48b0-893b-c41d53a4088b" containerName="dnsmasq-dns" Oct 13 17:44:16 crc kubenswrapper[4720]: E1013 17:44:16.960368 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f91af118-9710-48b0-893b-c41d53a4088b" containerName="init" Oct 13 17:44:16 crc kubenswrapper[4720]: I1013 17:44:16.960377 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="f91af118-9710-48b0-893b-c41d53a4088b" containerName="init" Oct 13 17:44:16 crc kubenswrapper[4720]: E1013 17:44:16.960407 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="642eeea2-1300-4204-a1ef-1dd718e045b1" containerName="init" Oct 13 17:44:16 crc kubenswrapper[4720]: I1013 17:44:16.960418 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="642eeea2-1300-4204-a1ef-1dd718e045b1" containerName="init" Oct 13 17:44:16 crc kubenswrapper[4720]: I1013 17:44:16.960686 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="642eeea2-1300-4204-a1ef-1dd718e045b1" containerName="dnsmasq-dns" Oct 13 17:44:16 crc kubenswrapper[4720]: I1013 17:44:16.960711 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="f91af118-9710-48b0-893b-c41d53a4088b" containerName="dnsmasq-dns" Oct 13 17:44:16 crc kubenswrapper[4720]: I1013 17:44:16.961465 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fj58w" Oct 13 17:44:16 crc kubenswrapper[4720]: I1013 17:44:16.966587 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 13 17:44:16 crc kubenswrapper[4720]: I1013 17:44:16.967412 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 13 17:44:16 crc kubenswrapper[4720]: I1013 17:44:16.967728 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 13 17:44:16 crc kubenswrapper[4720]: I1013 17:44:16.967976 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l2fds" Oct 13 17:44:16 crc kubenswrapper[4720]: I1013 17:44:16.982201 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fj58w"] Oct 13 17:44:17 crc kubenswrapper[4720]: I1013 17:44:17.152766 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28ae3d76-f715-46df-be4a-d621a7467347-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fj58w\" (UID: \"28ae3d76-f715-46df-be4a-d621a7467347\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fj58w" Oct 13 17:44:17 crc kubenswrapper[4720]: I1013 17:44:17.152834 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kmqz\" (UniqueName: \"kubernetes.io/projected/28ae3d76-f715-46df-be4a-d621a7467347-kube-api-access-4kmqz\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fj58w\" (UID: \"28ae3d76-f715-46df-be4a-d621a7467347\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fj58w" Oct 13 17:44:17 crc kubenswrapper[4720]: I1013 17:44:17.153017 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/28ae3d76-f715-46df-be4a-d621a7467347-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fj58w\" (UID: \"28ae3d76-f715-46df-be4a-d621a7467347\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fj58w" Oct 13 17:44:17 crc kubenswrapper[4720]: I1013 17:44:17.153258 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/28ae3d76-f715-46df-be4a-d621a7467347-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fj58w\" (UID: \"28ae3d76-f715-46df-be4a-d621a7467347\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fj58w" Oct 13 17:44:17 crc kubenswrapper[4720]: I1013 17:44:17.254885 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28ae3d76-f715-46df-be4a-d621a7467347-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fj58w\" (UID: \"28ae3d76-f715-46df-be4a-d621a7467347\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fj58w" Oct 13 17:44:17 crc kubenswrapper[4720]: I1013 17:44:17.254953 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kmqz\" (UniqueName: 
\"kubernetes.io/projected/28ae3d76-f715-46df-be4a-d621a7467347-kube-api-access-4kmqz\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fj58w\" (UID: \"28ae3d76-f715-46df-be4a-d621a7467347\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fj58w" Oct 13 17:44:17 crc kubenswrapper[4720]: I1013 17:44:17.255029 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/28ae3d76-f715-46df-be4a-d621a7467347-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fj58w\" (UID: \"28ae3d76-f715-46df-be4a-d621a7467347\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fj58w" Oct 13 17:44:17 crc kubenswrapper[4720]: I1013 17:44:17.255154 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/28ae3d76-f715-46df-be4a-d621a7467347-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fj58w\" (UID: \"28ae3d76-f715-46df-be4a-d621a7467347\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fj58w" Oct 13 17:44:17 crc kubenswrapper[4720]: I1013 17:44:17.260886 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28ae3d76-f715-46df-be4a-d621a7467347-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fj58w\" (UID: \"28ae3d76-f715-46df-be4a-d621a7467347\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fj58w" Oct 13 17:44:17 crc kubenswrapper[4720]: I1013 17:44:17.261480 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/28ae3d76-f715-46df-be4a-d621a7467347-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fj58w\" (UID: \"28ae3d76-f715-46df-be4a-d621a7467347\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fj58w" Oct 13 17:44:17 crc kubenswrapper[4720]: I1013 17:44:17.265205 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/28ae3d76-f715-46df-be4a-d621a7467347-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fj58w\" (UID: \"28ae3d76-f715-46df-be4a-d621a7467347\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fj58w" Oct 13 17:44:17 crc kubenswrapper[4720]: I1013 17:44:17.275819 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kmqz\" (UniqueName: \"kubernetes.io/projected/28ae3d76-f715-46df-be4a-d621a7467347-kube-api-access-4kmqz\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fj58w\" (UID: \"28ae3d76-f715-46df-be4a-d621a7467347\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fj58w" Oct 13 17:44:17 crc kubenswrapper[4720]: I1013 17:44:17.298601 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fj58w" Oct 13 17:44:17 crc kubenswrapper[4720]: I1013 17:44:17.702957 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"234df878-2921-45dc-854c-b3840afdbd45","Type":"ContainerStarted","Data":"d539af9691dad7c58fa68baea443d12ec18bc1cc650db01898de3dc654c4a04a"} Oct 13 17:44:17 crc kubenswrapper[4720]: I1013 17:44:17.703465 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Oct 13 17:44:17 crc kubenswrapper[4720]: I1013 17:44:17.739460 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.73944396 podStartE2EDuration="36.73944396s" podCreationTimestamp="2025-10-13 17:43:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:44:17.736848343 +0000 UTC m=+1203.194098465" watchObservedRunningTime="2025-10-13 17:44:17.73944396 +0000 UTC m=+1203.196694092" Oct 13 17:44:17 crc kubenswrapper[4720]: I1013 17:44:17.868423 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fj58w"] Oct 13 17:44:17 crc kubenswrapper[4720]: W1013 17:44:17.872508 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod28ae3d76_f715_46df_be4a_d621a7467347.slice/crio-f1ac04e9b5dbbc85eb8829263d0e9d9db0403c3efdeaf23cdd87246c8aa6823c WatchSource:0}: Error finding container f1ac04e9b5dbbc85eb8829263d0e9d9db0403c3efdeaf23cdd87246c8aa6823c: Status 404 returned error can't find the container with id f1ac04e9b5dbbc85eb8829263d0e9d9db0403c3efdeaf23cdd87246c8aa6823c Oct 13 17:44:17 crc kubenswrapper[4720]: I1013 17:44:17.875585 4720 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 13 17:44:18 crc kubenswrapper[4720]: I1013 17:44:18.715861 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fj58w" event={"ID":"28ae3d76-f715-46df-be4a-d621a7467347","Type":"ContainerStarted","Data":"f1ac04e9b5dbbc85eb8829263d0e9d9db0403c3efdeaf23cdd87246c8aa6823c"} Oct 13 17:44:18 crc kubenswrapper[4720]: I1013 17:44:18.718254 4720 generic.go:334] "Generic (PLEG): container finished" podID="0bc24914-0bdd-4fa7-a859-a4d4f06f0455" containerID="494fe785ad8c54ba2dc25510339534cbbcab5031224745f8b0db3fd42cf05b86" exitCode=0 Oct 13 17:44:18 crc kubenswrapper[4720]: I1013 17:44:18.718340 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0bc24914-0bdd-4fa7-a859-a4d4f06f0455","Type":"ContainerDied","Data":"494fe785ad8c54ba2dc25510339534cbbcab5031224745f8b0db3fd42cf05b86"} Oct 13 17:44:19 crc kubenswrapper[4720]: I1013 17:44:19.730655 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0bc24914-0bdd-4fa7-a859-a4d4f06f0455","Type":"ContainerStarted","Data":"b341b5198b7b93dac529c1695cc41af64f5b91c371fb98a70e4a4f156671d905"} Oct 13 17:44:19 crc kubenswrapper[4720]: I1013 17:44:19.731620 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Oct 13 17:44:19 crc kubenswrapper[4720]: I1013 17:44:19.756279 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.756261658 podStartE2EDuration="37.756261658s" podCreationTimestamp="2025-10-13 17:43:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 17:44:19.755379595 +0000 UTC m=+1205.212629737" watchObservedRunningTime="2025-10-13 17:44:19.756261658 +0000 UTC m=+1205.213511790" Oct 13 17:44:27 crc kubenswrapper[4720]: I1013 17:44:27.807605 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fj58w" event={"ID":"28ae3d76-f715-46df-be4a-d621a7467347","Type":"ContainerStarted","Data":"5b3d10c8cf5fffe036aa3a3538621cbdd3deba2d2484e1eb4126475da9be94fb"} Oct 13 17:44:27 crc kubenswrapper[4720]: I1013 17:44:27.841900 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fj58w" podStartSLOduration=2.958961434 podStartE2EDuration="11.841879488s" podCreationTimestamp="2025-10-13 17:44:16 +0000 UTC" firstStartedPulling="2025-10-13 17:44:17.875318488 +0000 UTC m=+1203.332568630" lastFinishedPulling="2025-10-13 17:44:26.758236502 +0000 UTC m=+1212.215486684" observedRunningTime="2025-10-13 17:44:27.832612079 +0000 UTC m=+1213.289862211" watchObservedRunningTime="2025-10-13 17:44:27.841879488 +0000 UTC m=+1213.299129620" Oct 13 17:44:31 crc kubenswrapper[4720]: I1013 17:44:31.616348 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Oct 13 17:44:32 crc kubenswrapper[4720]: I1013 17:44:32.592455 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Oct 13 17:44:38 crc kubenswrapper[4720]: I1013 17:44:38.956822 4720 generic.go:334] "Generic (PLEG): container finished" podID="28ae3d76-f715-46df-be4a-d621a7467347" containerID="5b3d10c8cf5fffe036aa3a3538621cbdd3deba2d2484e1eb4126475da9be94fb" exitCode=0 Oct 13 17:44:38 crc kubenswrapper[4720]: I1013 17:44:38.957032 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fj58w" event={"ID":"28ae3d76-f715-46df-be4a-d621a7467347","Type":"ContainerDied","Data":"5b3d10c8cf5fffe036aa3a3538621cbdd3deba2d2484e1eb4126475da9be94fb"} Oct 13 17:44:40 crc kubenswrapper[4720]: I1013 17:44:40.494385 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fj58w" Oct 13 17:44:40 crc kubenswrapper[4720]: I1013 17:44:40.669977 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/28ae3d76-f715-46df-be4a-d621a7467347-ssh-key\") pod \"28ae3d76-f715-46df-be4a-d621a7467347\" (UID: \"28ae3d76-f715-46df-be4a-d621a7467347\") " Oct 13 17:44:40 crc kubenswrapper[4720]: I1013 17:44:40.670101 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28ae3d76-f715-46df-be4a-d621a7467347-repo-setup-combined-ca-bundle\") pod \"28ae3d76-f715-46df-be4a-d621a7467347\" (UID: \"28ae3d76-f715-46df-be4a-d621a7467347\") " Oct 13 17:44:40 crc kubenswrapper[4720]: I1013 17:44:40.670176 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/28ae3d76-f715-46df-be4a-d621a7467347-inventory\") pod \"28ae3d76-f715-46df-be4a-d621a7467347\" (UID: \"28ae3d76-f715-46df-be4a-d621a7467347\") " Oct 13 17:44:40 crc kubenswrapper[4720]: I1013 17:44:40.670288 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4kmqz\" (UniqueName: \"kubernetes.io/projected/28ae3d76-f715-46df-be4a-d621a7467347-kube-api-access-4kmqz\") pod \"28ae3d76-f715-46df-be4a-d621a7467347\" (UID: \"28ae3d76-f715-46df-be4a-d621a7467347\") " Oct 13 17:44:40 crc kubenswrapper[4720]: I1013 17:44:40.678706 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28ae3d76-f715-46df-be4a-d621a7467347-kube-api-access-4kmqz" (OuterVolumeSpecName: "kube-api-access-4kmqz") pod "28ae3d76-f715-46df-be4a-d621a7467347" (UID: "28ae3d76-f715-46df-be4a-d621a7467347"). InnerVolumeSpecName "kube-api-access-4kmqz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:44:40 crc kubenswrapper[4720]: I1013 17:44:40.680448 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28ae3d76-f715-46df-be4a-d621a7467347-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "28ae3d76-f715-46df-be4a-d621a7467347" (UID: "28ae3d76-f715-46df-be4a-d621a7467347"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:44:40 crc kubenswrapper[4720]: I1013 17:44:40.722557 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28ae3d76-f715-46df-be4a-d621a7467347-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "28ae3d76-f715-46df-be4a-d621a7467347" (UID: "28ae3d76-f715-46df-be4a-d621a7467347"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:44:40 crc kubenswrapper[4720]: I1013 17:44:40.724635 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28ae3d76-f715-46df-be4a-d621a7467347-inventory" (OuterVolumeSpecName: "inventory") pod "28ae3d76-f715-46df-be4a-d621a7467347" (UID: "28ae3d76-f715-46df-be4a-d621a7467347"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:44:40 crc kubenswrapper[4720]: I1013 17:44:40.772470 4720 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/28ae3d76-f715-46df-be4a-d621a7467347-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 13 17:44:40 crc kubenswrapper[4720]: I1013 17:44:40.772503 4720 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28ae3d76-f715-46df-be4a-d621a7467347-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 17:44:40 crc kubenswrapper[4720]: I1013 17:44:40.772514 4720 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/28ae3d76-f715-46df-be4a-d621a7467347-inventory\") on node \"crc\" DevicePath \"\"" Oct 13 17:44:40 crc kubenswrapper[4720]: I1013 17:44:40.772525 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4kmqz\" (UniqueName: \"kubernetes.io/projected/28ae3d76-f715-46df-be4a-d621a7467347-kube-api-access-4kmqz\") on node \"crc\" DevicePath \"\"" Oct 13 17:44:40 crc kubenswrapper[4720]: I1013 17:44:40.980694 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fj58w" event={"ID":"28ae3d76-f715-46df-be4a-d621a7467347","Type":"ContainerDied","Data":"f1ac04e9b5dbbc85eb8829263d0e9d9db0403c3efdeaf23cdd87246c8aa6823c"} Oct 13 17:44:40 crc kubenswrapper[4720]: I1013 17:44:40.980746 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1ac04e9b5dbbc85eb8829263d0e9d9db0403c3efdeaf23cdd87246c8aa6823c" Oct 13 17:44:40 crc kubenswrapper[4720]: I1013 17:44:40.980822 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fj58w" Oct 13 17:44:41 crc kubenswrapper[4720]: I1013 17:44:41.152386 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-fvrff"] Oct 13 17:44:41 crc kubenswrapper[4720]: E1013 17:44:41.152865 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28ae3d76-f715-46df-be4a-d621a7467347" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Oct 13 17:44:41 crc kubenswrapper[4720]: I1013 17:44:41.152886 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="28ae3d76-f715-46df-be4a-d621a7467347" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Oct 13 17:44:41 crc kubenswrapper[4720]: I1013 17:44:41.153181 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="28ae3d76-f715-46df-be4a-d621a7467347" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
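
A pattern worth noting here, since it repeats for every job in this log: as soon as the next pod is admitted ("SyncLoop ADD"), the CPU and memory managers drop the per-container state of the job that just finished. The E-level "RemoveStaleState: removing container" line appears to be that bookkeeping rather than an error in the pod it names. Because these entries identify pods only by UID, a small lookup table built from the PLEG lines helps when reading them (a hypothetical helper, not part of the kubelet; "kubelet.log" is an assumed filename, and the journal's usual one-entry-per-line output is assumed):

    import re

    uid_to_pod = {}
    with open("kubelet.log") as f:
        for line in f:
            for m in re.finditer(r'pod="(?P<pod>[^"]+)" event={"ID":"(?P<uid>[0-9a-f-]{36})"', line):
                uid_to_pod[m.group("uid")] = m.group("pod")

    print(uid_to_pod.get("28ae3d76-f715-46df-be4a-d621a7467347"))
    # openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fj58w
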
Oct 13 17:44:41 crc kubenswrapper[4720]: I1013 17:44:41.154168 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fvrff" Oct 13 17:44:41 crc kubenswrapper[4720]: I1013 17:44:41.156176 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 13 17:44:41 crc kubenswrapper[4720]: I1013 17:44:41.156818 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 13 17:44:41 crc kubenswrapper[4720]: I1013 17:44:41.157007 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l2fds" Oct 13 17:44:41 crc kubenswrapper[4720]: I1013 17:44:41.157052 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 13 17:44:41 crc kubenswrapper[4720]: I1013 17:44:41.181797 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-fvrff"] Oct 13 17:44:41 crc kubenswrapper[4720]: I1013 17:44:41.280100 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1571f354-4e14-443b-b5fa-b0158ed87248-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-fvrff\" (UID: \"1571f354-4e14-443b-b5fa-b0158ed87248\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fvrff" Oct 13 17:44:41 crc kubenswrapper[4720]: I1013 17:44:41.280172 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1571f354-4e14-443b-b5fa-b0158ed87248-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-fvrff\" (UID: \"1571f354-4e14-443b-b5fa-b0158ed87248\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fvrff" Oct 13 17:44:41 crc kubenswrapper[4720]: I1013 17:44:41.280331 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9mz7\" (UniqueName: \"kubernetes.io/projected/1571f354-4e14-443b-b5fa-b0158ed87248-kube-api-access-l9mz7\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-fvrff\" (UID: \"1571f354-4e14-443b-b5fa-b0158ed87248\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fvrff" Oct 13 17:44:41 crc kubenswrapper[4720]: I1013 17:44:41.382377 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1571f354-4e14-443b-b5fa-b0158ed87248-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-fvrff\" (UID: \"1571f354-4e14-443b-b5fa-b0158ed87248\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fvrff" Oct 13 17:44:41 crc kubenswrapper[4720]: I1013 17:44:41.382910 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1571f354-4e14-443b-b5fa-b0158ed87248-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-fvrff\" (UID: \"1571f354-4e14-443b-b5fa-b0158ed87248\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fvrff" Oct 13 17:44:41 crc kubenswrapper[4720]: I1013 17:44:41.383014 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9mz7\" (UniqueName: \"kubernetes.io/projected/1571f354-4e14-443b-b5fa-b0158ed87248-kube-api-access-l9mz7\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-fvrff\" (UID: \"1571f354-4e14-443b-b5fa-b0158ed87248\") " 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fvrff" Oct 13 17:44:41 crc kubenswrapper[4720]: I1013 17:44:41.386284 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1571f354-4e14-443b-b5fa-b0158ed87248-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-fvrff\" (UID: \"1571f354-4e14-443b-b5fa-b0158ed87248\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fvrff" Oct 13 17:44:41 crc kubenswrapper[4720]: I1013 17:44:41.388031 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1571f354-4e14-443b-b5fa-b0158ed87248-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-fvrff\" (UID: \"1571f354-4e14-443b-b5fa-b0158ed87248\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fvrff" Oct 13 17:44:41 crc kubenswrapper[4720]: I1013 17:44:41.400071 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9mz7\" (UniqueName: \"kubernetes.io/projected/1571f354-4e14-443b-b5fa-b0158ed87248-kube-api-access-l9mz7\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-fvrff\" (UID: \"1571f354-4e14-443b-b5fa-b0158ed87248\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fvrff" Oct 13 17:44:41 crc kubenswrapper[4720]: I1013 17:44:41.487440 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fvrff" Oct 13 17:44:41 crc kubenswrapper[4720]: I1013 17:44:41.828733 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-fvrff"] Oct 13 17:44:41 crc kubenswrapper[4720]: I1013 17:44:41.989766 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fvrff" event={"ID":"1571f354-4e14-443b-b5fa-b0158ed87248","Type":"ContainerStarted","Data":"f16a110448ed846acf2552ebc4d44f077b877ddd55005a12bbd46a48c87ef21f"} Oct 13 17:44:43 crc kubenswrapper[4720]: I1013 17:44:43.000819 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fvrff" event={"ID":"1571f354-4e14-443b-b5fa-b0158ed87248","Type":"ContainerStarted","Data":"6ae907a7ce3e7af842580b0cbdafea31fc03c41d76fddc810e3b9580189a8956"} Oct 13 17:44:43 crc kubenswrapper[4720]: I1013 17:44:43.021777 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fvrff" podStartSLOduration=1.5849881639999999 podStartE2EDuration="2.021757306s" podCreationTimestamp="2025-10-13 17:44:41 +0000 UTC" firstStartedPulling="2025-10-13 17:44:41.827872774 +0000 UTC m=+1227.285122906" lastFinishedPulling="2025-10-13 17:44:42.264641916 +0000 UTC m=+1227.721892048" observedRunningTime="2025-10-13 17:44:43.016794748 +0000 UTC m=+1228.474044880" watchObservedRunningTime="2025-10-13 17:44:43.021757306 +0000 UTC m=+1228.479007438" Oct 13 17:44:45 crc kubenswrapper[4720]: I1013 17:44:45.212704 4720 patch_prober.go:28] interesting pod/machine-config-daemon-htwnl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
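
The machine-config-daemon liveness probe starts failing here with connection refused on 127.0.0.1:8798, and the same failure repeats at 30-second intervals (17:44:45, 17:45:15, 17:45:45) until the kubelet restarts the container further down. A rough stand-in for what the HTTP probe checks (the real probe is executed by the kubelet; only the URL is taken from the log):

    import urllib.request, urllib.error

    def probe(url="http://127.0.0.1:8798/health", timeout=1.0):
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return 200 <= resp.status < 400    # HTTP probes pass on 2xx/3xx
        except (urllib.error.URLError, OSError) as exc:
            print(f"probe failed: {exc}")          # e.g. connection refused, as logged
            return False

    probe()
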
Oct 13 17:44:45 crc kubenswrapper[4720]: I1013 17:44:45.213359 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 17:44:46 crc kubenswrapper[4720]: I1013 17:44:46.031069 4720 generic.go:334] "Generic (PLEG): container finished" podID="1571f354-4e14-443b-b5fa-b0158ed87248" containerID="6ae907a7ce3e7af842580b0cbdafea31fc03c41d76fddc810e3b9580189a8956" exitCode=0 Oct 13 17:44:46 crc kubenswrapper[4720]: I1013 17:44:46.031108 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fvrff" event={"ID":"1571f354-4e14-443b-b5fa-b0158ed87248","Type":"ContainerDied","Data":"6ae907a7ce3e7af842580b0cbdafea31fc03c41d76fddc810e3b9580189a8956"} Oct 13 17:44:47 crc kubenswrapper[4720]: I1013 17:44:47.590596 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fvrff" Oct 13 17:44:47 crc kubenswrapper[4720]: I1013 17:44:47.716433 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1571f354-4e14-443b-b5fa-b0158ed87248-inventory\") pod \"1571f354-4e14-443b-b5fa-b0158ed87248\" (UID: \"1571f354-4e14-443b-b5fa-b0158ed87248\") " Oct 13 17:44:47 crc kubenswrapper[4720]: I1013 17:44:47.716619 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9mz7\" (UniqueName: \"kubernetes.io/projected/1571f354-4e14-443b-b5fa-b0158ed87248-kube-api-access-l9mz7\") pod \"1571f354-4e14-443b-b5fa-b0158ed87248\" (UID: \"1571f354-4e14-443b-b5fa-b0158ed87248\") " Oct 13 17:44:47 crc kubenswrapper[4720]: I1013 17:44:47.716788 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1571f354-4e14-443b-b5fa-b0158ed87248-ssh-key\") pod \"1571f354-4e14-443b-b5fa-b0158ed87248\" (UID: \"1571f354-4e14-443b-b5fa-b0158ed87248\") " Oct 13 17:44:47 crc kubenswrapper[4720]: I1013 17:44:47.732414 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1571f354-4e14-443b-b5fa-b0158ed87248-kube-api-access-l9mz7" (OuterVolumeSpecName: "kube-api-access-l9mz7") pod "1571f354-4e14-443b-b5fa-b0158ed87248" (UID: "1571f354-4e14-443b-b5fa-b0158ed87248"). InnerVolumeSpecName "kube-api-access-l9mz7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:44:47 crc kubenswrapper[4720]: I1013 17:44:47.752653 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1571f354-4e14-443b-b5fa-b0158ed87248-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "1571f354-4e14-443b-b5fa-b0158ed87248" (UID: "1571f354-4e14-443b-b5fa-b0158ed87248"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:44:47 crc kubenswrapper[4720]: I1013 17:44:47.765031 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1571f354-4e14-443b-b5fa-b0158ed87248-inventory" (OuterVolumeSpecName: "inventory") pod "1571f354-4e14-443b-b5fa-b0158ed87248" (UID: "1571f354-4e14-443b-b5fa-b0158ed87248"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:44:47 crc kubenswrapper[4720]: I1013 17:44:47.820654 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9mz7\" (UniqueName: \"kubernetes.io/projected/1571f354-4e14-443b-b5fa-b0158ed87248-kube-api-access-l9mz7\") on node \"crc\" DevicePath \"\"" Oct 13 17:44:47 crc kubenswrapper[4720]: I1013 17:44:47.820709 4720 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1571f354-4e14-443b-b5fa-b0158ed87248-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 13 17:44:47 crc kubenswrapper[4720]: I1013 17:44:47.820728 4720 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1571f354-4e14-443b-b5fa-b0158ed87248-inventory\") on node \"crc\" DevicePath \"\"" Oct 13 17:44:48 crc kubenswrapper[4720]: I1013 17:44:48.058345 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fvrff" event={"ID":"1571f354-4e14-443b-b5fa-b0158ed87248","Type":"ContainerDied","Data":"f16a110448ed846acf2552ebc4d44f077b877ddd55005a12bbd46a48c87ef21f"} Oct 13 17:44:48 crc kubenswrapper[4720]: I1013 17:44:48.058410 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f16a110448ed846acf2552ebc4d44f077b877ddd55005a12bbd46a48c87ef21f" Oct 13 17:44:48 crc kubenswrapper[4720]: I1013 17:44:48.058444 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fvrff" Oct 13 17:44:48 crc kubenswrapper[4720]: I1013 17:44:48.152572 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gqz6r"] Oct 13 17:44:48 crc kubenswrapper[4720]: E1013 17:44:48.159245 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1571f354-4e14-443b-b5fa-b0158ed87248" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Oct 13 17:44:48 crc kubenswrapper[4720]: I1013 17:44:48.159462 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="1571f354-4e14-443b-b5fa-b0158ed87248" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Oct 13 17:44:48 crc kubenswrapper[4720]: I1013 17:44:48.163504 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="1571f354-4e14-443b-b5fa-b0158ed87248" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Oct 13 17:44:48 crc kubenswrapper[4720]: I1013 17:44:48.168972 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gqz6r" Oct 13 17:44:48 crc kubenswrapper[4720]: I1013 17:44:48.172182 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l2fds" Oct 13 17:44:48 crc kubenswrapper[4720]: I1013 17:44:48.179408 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 13 17:44:48 crc kubenswrapper[4720]: I1013 17:44:48.179689 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 13 17:44:48 crc kubenswrapper[4720]: I1013 17:44:48.179904 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 13 17:44:48 crc kubenswrapper[4720]: I1013 17:44:48.211303 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gqz6r"] Oct 13 17:44:48 crc kubenswrapper[4720]: I1013 17:44:48.225875 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d7953eee-f335-4fdc-9834-caa5a4695476-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-gqz6r\" (UID: \"d7953eee-f335-4fdc-9834-caa5a4695476\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gqz6r" Oct 13 17:44:48 crc kubenswrapper[4720]: I1013 17:44:48.226393 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7953eee-f335-4fdc-9834-caa5a4695476-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-gqz6r\" (UID: \"d7953eee-f335-4fdc-9834-caa5a4695476\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gqz6r" Oct 13 17:44:48 crc kubenswrapper[4720]: I1013 17:44:48.226502 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkhvk\" (UniqueName: \"kubernetes.io/projected/d7953eee-f335-4fdc-9834-caa5a4695476-kube-api-access-gkhvk\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-gqz6r\" (UID: \"d7953eee-f335-4fdc-9834-caa5a4695476\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gqz6r" Oct 13 17:44:48 crc kubenswrapper[4720]: I1013 17:44:48.226750 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d7953eee-f335-4fdc-9834-caa5a4695476-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-gqz6r\" (UID: \"d7953eee-f335-4fdc-9834-caa5a4695476\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gqz6r" Oct 13 17:44:48 crc kubenswrapper[4720]: I1013 17:44:48.327817 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d7953eee-f335-4fdc-9834-caa5a4695476-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-gqz6r\" (UID: \"d7953eee-f335-4fdc-9834-caa5a4695476\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gqz6r" Oct 13 17:44:48 crc kubenswrapper[4720]: I1013 17:44:48.327909 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d7953eee-f335-4fdc-9834-caa5a4695476-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-gqz6r\" (UID: 
\"d7953eee-f335-4fdc-9834-caa5a4695476\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gqz6r" Oct 13 17:44:48 crc kubenswrapper[4720]: I1013 17:44:48.328015 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7953eee-f335-4fdc-9834-caa5a4695476-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-gqz6r\" (UID: \"d7953eee-f335-4fdc-9834-caa5a4695476\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gqz6r" Oct 13 17:44:48 crc kubenswrapper[4720]: I1013 17:44:48.328045 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkhvk\" (UniqueName: \"kubernetes.io/projected/d7953eee-f335-4fdc-9834-caa5a4695476-kube-api-access-gkhvk\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-gqz6r\" (UID: \"d7953eee-f335-4fdc-9834-caa5a4695476\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gqz6r" Oct 13 17:44:48 crc kubenswrapper[4720]: I1013 17:44:48.332852 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d7953eee-f335-4fdc-9834-caa5a4695476-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-gqz6r\" (UID: \"d7953eee-f335-4fdc-9834-caa5a4695476\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gqz6r" Oct 13 17:44:48 crc kubenswrapper[4720]: I1013 17:44:48.339299 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7953eee-f335-4fdc-9834-caa5a4695476-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-gqz6r\" (UID: \"d7953eee-f335-4fdc-9834-caa5a4695476\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gqz6r" Oct 13 17:44:48 crc kubenswrapper[4720]: I1013 17:44:48.339861 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d7953eee-f335-4fdc-9834-caa5a4695476-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-gqz6r\" (UID: \"d7953eee-f335-4fdc-9834-caa5a4695476\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gqz6r" Oct 13 17:44:48 crc kubenswrapper[4720]: I1013 17:44:48.343063 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkhvk\" (UniqueName: \"kubernetes.io/projected/d7953eee-f335-4fdc-9834-caa5a4695476-kube-api-access-gkhvk\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-gqz6r\" (UID: \"d7953eee-f335-4fdc-9834-caa5a4695476\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gqz6r" Oct 13 17:44:48 crc kubenswrapper[4720]: I1013 17:44:48.496799 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gqz6r" Oct 13 17:44:48 crc kubenswrapper[4720]: I1013 17:44:48.823159 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gqz6r"] Oct 13 17:44:49 crc kubenswrapper[4720]: I1013 17:44:49.070252 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gqz6r" event={"ID":"d7953eee-f335-4fdc-9834-caa5a4695476","Type":"ContainerStarted","Data":"475be890d17ff7a99fe94164712e241bfd77895bb4edcbf7dc4db912fe5e00b7"} Oct 13 17:44:50 crc kubenswrapper[4720]: I1013 17:44:50.082644 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gqz6r" event={"ID":"d7953eee-f335-4fdc-9834-caa5a4695476","Type":"ContainerStarted","Data":"751baef06d1d9c61feeedef50c4938b74f38a832a32cd1e85e360a23c0a50c45"} Oct 13 17:44:50 crc kubenswrapper[4720]: I1013 17:44:50.102955 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gqz6r" podStartSLOduration=1.697333108 podStartE2EDuration="2.102938429s" podCreationTimestamp="2025-10-13 17:44:48 +0000 UTC" firstStartedPulling="2025-10-13 17:44:48.833985164 +0000 UTC m=+1234.291235296" lastFinishedPulling="2025-10-13 17:44:49.239590475 +0000 UTC m=+1234.696840617" observedRunningTime="2025-10-13 17:44:50.097906409 +0000 UTC m=+1235.555156541" watchObservedRunningTime="2025-10-13 17:44:50.102938429 +0000 UTC m=+1235.560188561" Oct 13 17:45:00 crc kubenswrapper[4720]: I1013 17:45:00.170360 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29339625-d4qnb"]
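
collect-profiles is an OLM CronJob, and the numeric suffix in collect-profiles-29339625 is the scheduled run time in minutes since the Unix epoch, the convention the CronJob controller uses when naming the Jobs it spawns. Decoding it reproduces the 17:45:00 ADD timestamp above:

    from datetime import datetime, timezone

    suffix = 29339625                                    # from the pod name
    scheduled = datetime.fromtimestamp(suffix * 60, tz=timezone.utc)
    print(scheduled)                                     # 2025-10-13 17:45:00+00:00
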
Oct 13 17:45:00 crc kubenswrapper[4720]: I1013 17:45:00.172689 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29339625-d4qnb" Oct 13 17:45:00 crc kubenswrapper[4720]: I1013 17:45:00.174858 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 13 17:45:00 crc kubenswrapper[4720]: I1013 17:45:00.174955 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 13 17:45:00 crc kubenswrapper[4720]: I1013 17:45:00.203447 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29339625-d4qnb"] Oct 13 17:45:00 crc kubenswrapper[4720]: I1013 17:45:00.210043 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3c19bc4b-bf72-4e25-aee2-310efe50630f-secret-volume\") pod \"collect-profiles-29339625-d4qnb\" (UID: \"3c19bc4b-bf72-4e25-aee2-310efe50630f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339625-d4qnb" Oct 13 17:45:00 crc kubenswrapper[4720]: I1013 17:45:00.210182 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3c19bc4b-bf72-4e25-aee2-310efe50630f-config-volume\") pod \"collect-profiles-29339625-d4qnb\" (UID: \"3c19bc4b-bf72-4e25-aee2-310efe50630f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339625-d4qnb" Oct 13 17:45:00 crc kubenswrapper[4720]: I1013 17:45:00.210370 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lx554\" (UniqueName: \"kubernetes.io/projected/3c19bc4b-bf72-4e25-aee2-310efe50630f-kube-api-access-lx554\") pod \"collect-profiles-29339625-d4qnb\" (UID: \"3c19bc4b-bf72-4e25-aee2-310efe50630f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339625-d4qnb" Oct 13 17:45:00 crc kubenswrapper[4720]: I1013 17:45:00.312227 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3c19bc4b-bf72-4e25-aee2-310efe50630f-secret-volume\") pod \"collect-profiles-29339625-d4qnb\" (UID: \"3c19bc4b-bf72-4e25-aee2-310efe50630f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339625-d4qnb" Oct 13 17:45:00 crc kubenswrapper[4720]: I1013 17:45:00.312331 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3c19bc4b-bf72-4e25-aee2-310efe50630f-config-volume\") pod \"collect-profiles-29339625-d4qnb\" (UID: \"3c19bc4b-bf72-4e25-aee2-310efe50630f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339625-d4qnb" Oct 13 17:45:00 crc kubenswrapper[4720]: I1013 17:45:00.312475 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lx554\" (UniqueName: \"kubernetes.io/projected/3c19bc4b-bf72-4e25-aee2-310efe50630f-kube-api-access-lx554\") pod \"collect-profiles-29339625-d4qnb\" (UID: \"3c19bc4b-bf72-4e25-aee2-310efe50630f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339625-d4qnb" Oct 13 17:45:00 crc kubenswrapper[4720]: I1013 17:45:00.313958 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3c19bc4b-bf72-4e25-aee2-310efe50630f-config-volume\") pod 
\"collect-profiles-29339625-d4qnb\" (UID: \"3c19bc4b-bf72-4e25-aee2-310efe50630f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339625-d4qnb" Oct 13 17:45:00 crc kubenswrapper[4720]: I1013 17:45:00.321687 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3c19bc4b-bf72-4e25-aee2-310efe50630f-secret-volume\") pod \"collect-profiles-29339625-d4qnb\" (UID: \"3c19bc4b-bf72-4e25-aee2-310efe50630f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339625-d4qnb" Oct 13 17:45:00 crc kubenswrapper[4720]: I1013 17:45:00.343287 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lx554\" (UniqueName: \"kubernetes.io/projected/3c19bc4b-bf72-4e25-aee2-310efe50630f-kube-api-access-lx554\") pod \"collect-profiles-29339625-d4qnb\" (UID: \"3c19bc4b-bf72-4e25-aee2-310efe50630f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339625-d4qnb" Oct 13 17:45:00 crc kubenswrapper[4720]: I1013 17:45:00.509799 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29339625-d4qnb" Oct 13 17:45:01 crc kubenswrapper[4720]: I1013 17:45:01.048110 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29339625-d4qnb"] Oct 13 17:45:01 crc kubenswrapper[4720]: W1013 17:45:01.061927 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3c19bc4b_bf72_4e25_aee2_310efe50630f.slice/crio-332a9dae453526778c2f8c184f926097d4e042a01a6bc172d905a19af4119017 WatchSource:0}: Error finding container 332a9dae453526778c2f8c184f926097d4e042a01a6bc172d905a19af4119017: Status 404 returned error can't find the container with id 332a9dae453526778c2f8c184f926097d4e042a01a6bc172d905a19af4119017 Oct 13 17:45:01 crc kubenswrapper[4720]: I1013 17:45:01.210643 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29339625-d4qnb" event={"ID":"3c19bc4b-bf72-4e25-aee2-310efe50630f","Type":"ContainerStarted","Data":"332a9dae453526778c2f8c184f926097d4e042a01a6bc172d905a19af4119017"} Oct 13 17:45:02 crc kubenswrapper[4720]: I1013 17:45:02.228843 4720 generic.go:334] "Generic (PLEG): container finished" podID="3c19bc4b-bf72-4e25-aee2-310efe50630f" containerID="805d1ec758d1bf3bfa0ab3daed7f06b6259d33f7f3050d16847ebe0cd1ff4bcf" exitCode=0 Oct 13 17:45:02 crc kubenswrapper[4720]: I1013 17:45:02.228909 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29339625-d4qnb" event={"ID":"3c19bc4b-bf72-4e25-aee2-310efe50630f","Type":"ContainerDied","Data":"805d1ec758d1bf3bfa0ab3daed7f06b6259d33f7f3050d16847ebe0cd1ff4bcf"} Oct 13 17:45:03 crc kubenswrapper[4720]: I1013 17:45:03.657008 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29339625-d4qnb" Oct 13 17:45:03 crc kubenswrapper[4720]: I1013 17:45:03.781908 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lx554\" (UniqueName: \"kubernetes.io/projected/3c19bc4b-bf72-4e25-aee2-310efe50630f-kube-api-access-lx554\") pod \"3c19bc4b-bf72-4e25-aee2-310efe50630f\" (UID: \"3c19bc4b-bf72-4e25-aee2-310efe50630f\") " Oct 13 17:45:03 crc kubenswrapper[4720]: I1013 17:45:03.782099 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3c19bc4b-bf72-4e25-aee2-310efe50630f-config-volume\") pod \"3c19bc4b-bf72-4e25-aee2-310efe50630f\" (UID: \"3c19bc4b-bf72-4e25-aee2-310efe50630f\") " Oct 13 17:45:03 crc kubenswrapper[4720]: I1013 17:45:03.782258 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3c19bc4b-bf72-4e25-aee2-310efe50630f-secret-volume\") pod \"3c19bc4b-bf72-4e25-aee2-310efe50630f\" (UID: \"3c19bc4b-bf72-4e25-aee2-310efe50630f\") " Oct 13 17:45:03 crc kubenswrapper[4720]: I1013 17:45:03.782883 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c19bc4b-bf72-4e25-aee2-310efe50630f-config-volume" (OuterVolumeSpecName: "config-volume") pod "3c19bc4b-bf72-4e25-aee2-310efe50630f" (UID: "3c19bc4b-bf72-4e25-aee2-310efe50630f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:45:03 crc kubenswrapper[4720]: I1013 17:45:03.790376 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c19bc4b-bf72-4e25-aee2-310efe50630f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3c19bc4b-bf72-4e25-aee2-310efe50630f" (UID: "3c19bc4b-bf72-4e25-aee2-310efe50630f"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:45:03 crc kubenswrapper[4720]: I1013 17:45:03.791109 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c19bc4b-bf72-4e25-aee2-310efe50630f-kube-api-access-lx554" (OuterVolumeSpecName: "kube-api-access-lx554") pod "3c19bc4b-bf72-4e25-aee2-310efe50630f" (UID: "3c19bc4b-bf72-4e25-aee2-310efe50630f"). InnerVolumeSpecName "kube-api-access-lx554". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:45:03 crc kubenswrapper[4720]: I1013 17:45:03.884932 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lx554\" (UniqueName: \"kubernetes.io/projected/3c19bc4b-bf72-4e25-aee2-310efe50630f-kube-api-access-lx554\") on node \"crc\" DevicePath \"\"" Oct 13 17:45:03 crc kubenswrapper[4720]: I1013 17:45:03.884988 4720 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3c19bc4b-bf72-4e25-aee2-310efe50630f-config-volume\") on node \"crc\" DevicePath \"\"" Oct 13 17:45:03 crc kubenswrapper[4720]: I1013 17:45:03.885007 4720 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3c19bc4b-bf72-4e25-aee2-310efe50630f-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 13 17:45:04 crc kubenswrapper[4720]: I1013 17:45:04.263146 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29339625-d4qnb" event={"ID":"3c19bc4b-bf72-4e25-aee2-310efe50630f","Type":"ContainerDied","Data":"332a9dae453526778c2f8c184f926097d4e042a01a6bc172d905a19af4119017"} Oct 13 17:45:04 crc kubenswrapper[4720]: I1013 17:45:04.263221 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29339625-d4qnb" Oct 13 17:45:04 crc kubenswrapper[4720]: I1013 17:45:04.263235 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="332a9dae453526778c2f8c184f926097d4e042a01a6bc172d905a19af4119017" Oct 13 17:45:15 crc kubenswrapper[4720]: I1013 17:45:15.213233 4720 patch_prober.go:28] interesting pod/machine-config-daemon-htwnl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 17:45:15 crc kubenswrapper[4720]: I1013 17:45:15.213830 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 17:45:45 crc kubenswrapper[4720]: I1013 17:45:45.213568 4720 patch_prober.go:28] interesting pod/machine-config-daemon-htwnl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 17:45:45 crc kubenswrapper[4720]: I1013 17:45:45.214302 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 17:45:45 crc kubenswrapper[4720]: I1013 17:45:45.214373 4720 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-htwnl"
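
This is the decision point for the restart that follows: three consecutive liveness failures, 30 seconds apart, and the probe SyncLoop now reports the container unhealthy, after which the kubelet kills it with the pod's grace period and starts a replacement. A sketch for extracting that failure timeline from this journal ("kubelet.log" is an assumed filename; the journal's usual one-entry-per-line output is assumed):

    import re

    pat = re.compile(r'Oct 13 (\d{2}:\d{2}:\d{2}).*"Probe failed".*containerName="machine-config-daemon"')
    with open("kubelet.log") as f:
        times = [m.group(1) for line in f if (m := pat.search(line))]
    print(times)   # e.g. ['17:44:45', '17:45:15', '17:45:45', ...]

The 30-second spacing and the restart after the third failure are consistent with a periodSeconds=30, failureThreshold=3 probe, though the pod spec itself is not shown in this log.
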
Oct 13 17:45:45 crc kubenswrapper[4720]: I1013 17:45:45.215578 4720 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ee3a413fb70fae37f659cff124cc855967143ab2544217b22584306b14bb1b9a"} pod="openshift-machine-config-operator/machine-config-daemon-htwnl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 13 17:45:45 crc kubenswrapper[4720]: I1013 17:45:45.215689 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" containerName="machine-config-daemon" containerID="cri-o://ee3a413fb70fae37f659cff124cc855967143ab2544217b22584306b14bb1b9a" gracePeriod=600 Oct 13 17:45:45 crc kubenswrapper[4720]: I1013 17:45:45.798341 4720 generic.go:334] "Generic (PLEG): container finished" podID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" containerID="ee3a413fb70fae37f659cff124cc855967143ab2544217b22584306b14bb1b9a" exitCode=0 Oct 13 17:45:45 crc kubenswrapper[4720]: I1013 17:45:45.798440 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" event={"ID":"ce442c80-fcde-4b79-b6f9-f8f25771dfd4","Type":"ContainerDied","Data":"ee3a413fb70fae37f659cff124cc855967143ab2544217b22584306b14bb1b9a"} Oct 13 17:45:45 crc kubenswrapper[4720]: I1013 17:45:45.798804 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" event={"ID":"ce442c80-fcde-4b79-b6f9-f8f25771dfd4","Type":"ContainerStarted","Data":"eabc7e2122617e64bac6ae4330db40b5b4e9867a18ff888fd3f7afb5cfc64f2f"} Oct 13 17:45:45 crc kubenswrapper[4720]: I1013 17:45:45.798840 4720 scope.go:117] "RemoveContainer" containerID="c1278ace50a45373a8479a2cc48b2ad98ee5d6f328dbb131bbb45680a5894fc3" Oct 13 17:46:17 crc kubenswrapper[4720]: I1013 17:46:17.191074 4720 scope.go:117] "RemoveContainer" containerID="3872641b485d9425673af80d8a8482b6b99a69f7999cfa891a974a8fe63c1c1c" Oct 13 17:46:17 crc kubenswrapper[4720]: I1013 17:46:17.239517 4720 scope.go:117] "RemoveContainer" containerID="c14fb6faa8b00dd9e510c736f924ec85392019e51fc549152333fb08188c5cf7" Oct 13 17:46:17 crc kubenswrapper[4720]: I1013 17:46:17.441684 4720 scope.go:117] "RemoveContainer" containerID="376eab0bc0716b64ad3f2fd2a966e08f673cb6d62ce53f42923577d22092150f" Oct 13 17:46:17 crc kubenswrapper[4720]: I1013 17:46:17.476242 4720 scope.go:117] "RemoveContainer" containerID="c8e5ab5295419fad6f74268ce31c7466cb0d45be3ec0dfaf7ece64cae1beb14c" Oct 13 17:47:17 crc kubenswrapper[4720]: I1013 17:47:17.600917 4720 scope.go:117] "RemoveContainer" containerID="929b941fbabd15a3a11f8e9db999ac85bec2aac99787b7835b38aecf06b96d9b" Oct 13 17:47:17 crc kubenswrapper[4720]: I1013 17:47:17.638207 4720 scope.go:117] "RemoveContainer" containerID="d6124ef920638a00efcc516961d520ffae3592268b676ed7b8bf8600cbd993de" Oct 13 17:47:43 crc kubenswrapper[4720]: I1013 17:47:43.467365 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wbqtv"] Oct 13 17:47:43 crc kubenswrapper[4720]: E1013 17:47:43.468576 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c19bc4b-bf72-4e25-aee2-310efe50630f" containerName="collect-profiles" Oct 13 17:47:43 crc kubenswrapper[4720]: I1013 17:47:43.468596 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c19bc4b-bf72-4e25-aee2-310efe50630f" containerName="collect-profiles" Oct 13 17:47:43 crc kubenswrapper[4720]: I1013 17:47:43.468937 4720 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="3c19bc4b-bf72-4e25-aee2-310efe50630f" containerName="collect-profiles" Oct 13 17:47:43 crc kubenswrapper[4720]: I1013 17:47:43.471274 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wbqtv" Oct 13 17:47:43 crc kubenswrapper[4720]: I1013 17:47:43.485337 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wbqtv"] Oct 13 17:47:43 crc kubenswrapper[4720]: I1013 17:47:43.518997 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvhxh\" (UniqueName: \"kubernetes.io/projected/6a8ff58c-4e28-4bc3-8a47-237a960a517c-kube-api-access-cvhxh\") pod \"redhat-marketplace-wbqtv\" (UID: \"6a8ff58c-4e28-4bc3-8a47-237a960a517c\") " pod="openshift-marketplace/redhat-marketplace-wbqtv" Oct 13 17:47:43 crc kubenswrapper[4720]: I1013 17:47:43.519123 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a8ff58c-4e28-4bc3-8a47-237a960a517c-utilities\") pod \"redhat-marketplace-wbqtv\" (UID: \"6a8ff58c-4e28-4bc3-8a47-237a960a517c\") " pod="openshift-marketplace/redhat-marketplace-wbqtv" Oct 13 17:47:43 crc kubenswrapper[4720]: I1013 17:47:43.519221 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a8ff58c-4e28-4bc3-8a47-237a960a517c-catalog-content\") pod \"redhat-marketplace-wbqtv\" (UID: \"6a8ff58c-4e28-4bc3-8a47-237a960a517c\") " pod="openshift-marketplace/redhat-marketplace-wbqtv" Oct 13 17:47:43 crc kubenswrapper[4720]: I1013 17:47:43.620583 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvhxh\" (UniqueName: \"kubernetes.io/projected/6a8ff58c-4e28-4bc3-8a47-237a960a517c-kube-api-access-cvhxh\") pod \"redhat-marketplace-wbqtv\" (UID: \"6a8ff58c-4e28-4bc3-8a47-237a960a517c\") " pod="openshift-marketplace/redhat-marketplace-wbqtv" Oct 13 17:47:43 crc kubenswrapper[4720]: I1013 17:47:43.620871 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a8ff58c-4e28-4bc3-8a47-237a960a517c-utilities\") pod \"redhat-marketplace-wbqtv\" (UID: \"6a8ff58c-4e28-4bc3-8a47-237a960a517c\") " pod="openshift-marketplace/redhat-marketplace-wbqtv" Oct 13 17:47:43 crc kubenswrapper[4720]: I1013 17:47:43.620992 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a8ff58c-4e28-4bc3-8a47-237a960a517c-catalog-content\") pod \"redhat-marketplace-wbqtv\" (UID: \"6a8ff58c-4e28-4bc3-8a47-237a960a517c\") " pod="openshift-marketplace/redhat-marketplace-wbqtv" Oct 13 17:47:43 crc kubenswrapper[4720]: I1013 17:47:43.621836 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a8ff58c-4e28-4bc3-8a47-237a960a517c-catalog-content\") pod \"redhat-marketplace-wbqtv\" (UID: \"6a8ff58c-4e28-4bc3-8a47-237a960a517c\") " pod="openshift-marketplace/redhat-marketplace-wbqtv" Oct 13 17:47:43 crc kubenswrapper[4720]: I1013 17:47:43.621854 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a8ff58c-4e28-4bc3-8a47-237a960a517c-utilities\") pod \"redhat-marketplace-wbqtv\" (UID: 
\"6a8ff58c-4e28-4bc3-8a47-237a960a517c\") " pod="openshift-marketplace/redhat-marketplace-wbqtv" Oct 13 17:47:43 crc kubenswrapper[4720]: I1013 17:47:43.646284 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvhxh\" (UniqueName: \"kubernetes.io/projected/6a8ff58c-4e28-4bc3-8a47-237a960a517c-kube-api-access-cvhxh\") pod \"redhat-marketplace-wbqtv\" (UID: \"6a8ff58c-4e28-4bc3-8a47-237a960a517c\") " pod="openshift-marketplace/redhat-marketplace-wbqtv" Oct 13 17:47:43 crc kubenswrapper[4720]: I1013 17:47:43.800810 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wbqtv" Oct 13 17:47:44 crc kubenswrapper[4720]: I1013 17:47:44.340905 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wbqtv"] Oct 13 17:47:45 crc kubenswrapper[4720]: I1013 17:47:45.212277 4720 patch_prober.go:28] interesting pod/machine-config-daemon-htwnl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 17:47:45 crc kubenswrapper[4720]: I1013 17:47:45.212579 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 17:47:45 crc kubenswrapper[4720]: I1013 17:47:45.214692 4720 generic.go:334] "Generic (PLEG): container finished" podID="6a8ff58c-4e28-4bc3-8a47-237a960a517c" containerID="e9eaeb333bff01a49fc96127842ff2db4cbb3117dc267f5a585155bfc30a01c1" exitCode=0 Oct 13 17:47:45 crc kubenswrapper[4720]: I1013 17:47:45.214731 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wbqtv" event={"ID":"6a8ff58c-4e28-4bc3-8a47-237a960a517c","Type":"ContainerDied","Data":"e9eaeb333bff01a49fc96127842ff2db4cbb3117dc267f5a585155bfc30a01c1"} Oct 13 17:47:45 crc kubenswrapper[4720]: I1013 17:47:45.214759 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wbqtv" event={"ID":"6a8ff58c-4e28-4bc3-8a47-237a960a517c","Type":"ContainerStarted","Data":"f9a11a7fa86558891985d2f98516cf6263cd5fa48168d7a7d05bd15783f8d962"} Oct 13 17:47:46 crc kubenswrapper[4720]: I1013 17:47:46.226557 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wbqtv" event={"ID":"6a8ff58c-4e28-4bc3-8a47-237a960a517c","Type":"ContainerStarted","Data":"914910f5873e8479919e445bd88f52aa3eafb8c7d5d52ef58856cbcf8d17bcb0"} Oct 13 17:47:46 crc kubenswrapper[4720]: I1013 17:47:46.642314 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wc648"] Oct 13 17:47:46 crc kubenswrapper[4720]: I1013 17:47:46.646936 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wc648" Oct 13 17:47:46 crc kubenswrapper[4720]: I1013 17:47:46.658354 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wc648"] Oct 13 17:47:46 crc kubenswrapper[4720]: I1013 17:47:46.715331 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea62eeb5-7f86-4555-ba88-fb04f9986df6-catalog-content\") pod \"certified-operators-wc648\" (UID: \"ea62eeb5-7f86-4555-ba88-fb04f9986df6\") " pod="openshift-marketplace/certified-operators-wc648" Oct 13 17:47:46 crc kubenswrapper[4720]: I1013 17:47:46.715419 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea62eeb5-7f86-4555-ba88-fb04f9986df6-utilities\") pod \"certified-operators-wc648\" (UID: \"ea62eeb5-7f86-4555-ba88-fb04f9986df6\") " pod="openshift-marketplace/certified-operators-wc648" Oct 13 17:47:46 crc kubenswrapper[4720]: I1013 17:47:46.715552 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbzhw\" (UniqueName: \"kubernetes.io/projected/ea62eeb5-7f86-4555-ba88-fb04f9986df6-kube-api-access-hbzhw\") pod \"certified-operators-wc648\" (UID: \"ea62eeb5-7f86-4555-ba88-fb04f9986df6\") " pod="openshift-marketplace/certified-operators-wc648" Oct 13 17:47:46 crc kubenswrapper[4720]: I1013 17:47:46.817337 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbzhw\" (UniqueName: \"kubernetes.io/projected/ea62eeb5-7f86-4555-ba88-fb04f9986df6-kube-api-access-hbzhw\") pod \"certified-operators-wc648\" (UID: \"ea62eeb5-7f86-4555-ba88-fb04f9986df6\") " pod="openshift-marketplace/certified-operators-wc648" Oct 13 17:47:46 crc kubenswrapper[4720]: I1013 17:47:46.817515 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea62eeb5-7f86-4555-ba88-fb04f9986df6-catalog-content\") pod \"certified-operators-wc648\" (UID: \"ea62eeb5-7f86-4555-ba88-fb04f9986df6\") " pod="openshift-marketplace/certified-operators-wc648" Oct 13 17:47:46 crc kubenswrapper[4720]: I1013 17:47:46.817614 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea62eeb5-7f86-4555-ba88-fb04f9986df6-utilities\") pod \"certified-operators-wc648\" (UID: \"ea62eeb5-7f86-4555-ba88-fb04f9986df6\") " pod="openshift-marketplace/certified-operators-wc648" Oct 13 17:47:46 crc kubenswrapper[4720]: I1013 17:47:46.818073 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea62eeb5-7f86-4555-ba88-fb04f9986df6-catalog-content\") pod \"certified-operators-wc648\" (UID: \"ea62eeb5-7f86-4555-ba88-fb04f9986df6\") " pod="openshift-marketplace/certified-operators-wc648" Oct 13 17:47:46 crc kubenswrapper[4720]: I1013 17:47:46.818135 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea62eeb5-7f86-4555-ba88-fb04f9986df6-utilities\") pod \"certified-operators-wc648\" (UID: \"ea62eeb5-7f86-4555-ba88-fb04f9986df6\") " pod="openshift-marketplace/certified-operators-wc648" Oct 13 17:47:46 crc kubenswrapper[4720]: I1013 17:47:46.860056 4720 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-hbzhw\" (UniqueName: \"kubernetes.io/projected/ea62eeb5-7f86-4555-ba88-fb04f9986df6-kube-api-access-hbzhw\") pod \"certified-operators-wc648\" (UID: \"ea62eeb5-7f86-4555-ba88-fb04f9986df6\") " pod="openshift-marketplace/certified-operators-wc648" Oct 13 17:47:46 crc kubenswrapper[4720]: I1013 17:47:46.978275 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wc648" Oct 13 17:47:47 crc kubenswrapper[4720]: I1013 17:47:47.242973 4720 generic.go:334] "Generic (PLEG): container finished" podID="6a8ff58c-4e28-4bc3-8a47-237a960a517c" containerID="914910f5873e8479919e445bd88f52aa3eafb8c7d5d52ef58856cbcf8d17bcb0" exitCode=0 Oct 13 17:47:47 crc kubenswrapper[4720]: I1013 17:47:47.243052 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wbqtv" event={"ID":"6a8ff58c-4e28-4bc3-8a47-237a960a517c","Type":"ContainerDied","Data":"914910f5873e8479919e445bd88f52aa3eafb8c7d5d52ef58856cbcf8d17bcb0"} Oct 13 17:47:47 crc kubenswrapper[4720]: I1013 17:47:47.496959 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wc648"] Oct 13 17:47:48 crc kubenswrapper[4720]: I1013 17:47:48.253055 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wbqtv" event={"ID":"6a8ff58c-4e28-4bc3-8a47-237a960a517c","Type":"ContainerStarted","Data":"b0e46a63dfee8f9149386f2466e4d0e53f8bec4dc95661f2ecfef7d1769ca652"} Oct 13 17:47:48 crc kubenswrapper[4720]: I1013 17:47:48.254971 4720 generic.go:334] "Generic (PLEG): container finished" podID="ea62eeb5-7f86-4555-ba88-fb04f9986df6" containerID="aab7826696705f02a92c73b91e989e7b5fdea1ddd0e661f9b41cd8edafd3a1fe" exitCode=0 Oct 13 17:47:48 crc kubenswrapper[4720]: I1013 17:47:48.255004 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wc648" event={"ID":"ea62eeb5-7f86-4555-ba88-fb04f9986df6","Type":"ContainerDied","Data":"aab7826696705f02a92c73b91e989e7b5fdea1ddd0e661f9b41cd8edafd3a1fe"} Oct 13 17:47:48 crc kubenswrapper[4720]: I1013 17:47:48.255023 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wc648" event={"ID":"ea62eeb5-7f86-4555-ba88-fb04f9986df6","Type":"ContainerStarted","Data":"c1ed44439aeeefaa34acaec2ced273e98b5ee6f135c6090c9ea8b1ad9b6a5fbc"} Oct 13 17:47:48 crc kubenswrapper[4720]: I1013 17:47:48.281412 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wbqtv" podStartSLOduration=3.5456689409999997 podStartE2EDuration="5.281387572s" podCreationTimestamp="2025-10-13 17:47:43 +0000 UTC" firstStartedPulling="2025-10-13 17:47:45.216805863 +0000 UTC m=+1410.674055985" lastFinishedPulling="2025-10-13 17:47:46.952524484 +0000 UTC m=+1412.409774616" observedRunningTime="2025-10-13 17:47:48.269515347 +0000 UTC m=+1413.726765489" watchObservedRunningTime="2025-10-13 17:47:48.281387572 +0000 UTC m=+1413.738637714" Oct 13 17:47:49 crc kubenswrapper[4720]: I1013 17:47:49.267090 4720 generic.go:334] "Generic (PLEG): container finished" podID="d7953eee-f335-4fdc-9834-caa5a4695476" containerID="751baef06d1d9c61feeedef50c4938b74f38a832a32cd1e85e360a23c0a50c45" exitCode=0 Oct 13 17:47:49 crc kubenswrapper[4720]: I1013 17:47:49.267203 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gqz6r" 
event={"ID":"d7953eee-f335-4fdc-9834-caa5a4695476","Type":"ContainerDied","Data":"751baef06d1d9c61feeedef50c4938b74f38a832a32cd1e85e360a23c0a50c45"} Oct 13 17:47:50 crc kubenswrapper[4720]: I1013 17:47:50.739756 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gqz6r" Oct 13 17:47:50 crc kubenswrapper[4720]: I1013 17:47:50.792264 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7953eee-f335-4fdc-9834-caa5a4695476-bootstrap-combined-ca-bundle\") pod \"d7953eee-f335-4fdc-9834-caa5a4695476\" (UID: \"d7953eee-f335-4fdc-9834-caa5a4695476\") " Oct 13 17:47:50 crc kubenswrapper[4720]: I1013 17:47:50.792361 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d7953eee-f335-4fdc-9834-caa5a4695476-ssh-key\") pod \"d7953eee-f335-4fdc-9834-caa5a4695476\" (UID: \"d7953eee-f335-4fdc-9834-caa5a4695476\") " Oct 13 17:47:50 crc kubenswrapper[4720]: I1013 17:47:50.793327 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d7953eee-f335-4fdc-9834-caa5a4695476-inventory\") pod \"d7953eee-f335-4fdc-9834-caa5a4695476\" (UID: \"d7953eee-f335-4fdc-9834-caa5a4695476\") " Oct 13 17:47:50 crc kubenswrapper[4720]: I1013 17:47:50.793947 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gkhvk\" (UniqueName: \"kubernetes.io/projected/d7953eee-f335-4fdc-9834-caa5a4695476-kube-api-access-gkhvk\") pod \"d7953eee-f335-4fdc-9834-caa5a4695476\" (UID: \"d7953eee-f335-4fdc-9834-caa5a4695476\") " Oct 13 17:47:50 crc kubenswrapper[4720]: I1013 17:47:50.797472 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7953eee-f335-4fdc-9834-caa5a4695476-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "d7953eee-f335-4fdc-9834-caa5a4695476" (UID: "d7953eee-f335-4fdc-9834-caa5a4695476"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:47:50 crc kubenswrapper[4720]: I1013 17:47:50.798153 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7953eee-f335-4fdc-9834-caa5a4695476-kube-api-access-gkhvk" (OuterVolumeSpecName: "kube-api-access-gkhvk") pod "d7953eee-f335-4fdc-9834-caa5a4695476" (UID: "d7953eee-f335-4fdc-9834-caa5a4695476"). InnerVolumeSpecName "kube-api-access-gkhvk". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:47:50 crc kubenswrapper[4720]: I1013 17:47:50.819369 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7953eee-f335-4fdc-9834-caa5a4695476-inventory" (OuterVolumeSpecName: "inventory") pod "d7953eee-f335-4fdc-9834-caa5a4695476" (UID: "d7953eee-f335-4fdc-9834-caa5a4695476"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:47:50 crc kubenswrapper[4720]: I1013 17:47:50.821989 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7953eee-f335-4fdc-9834-caa5a4695476-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "d7953eee-f335-4fdc-9834-caa5a4695476" (UID: "d7953eee-f335-4fdc-9834-caa5a4695476"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:47:50 crc kubenswrapper[4720]: I1013 17:47:50.897631 4720 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7953eee-f335-4fdc-9834-caa5a4695476-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 17:47:50 crc kubenswrapper[4720]: I1013 17:47:50.897659 4720 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d7953eee-f335-4fdc-9834-caa5a4695476-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 13 17:47:50 crc kubenswrapper[4720]: I1013 17:47:50.897668 4720 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d7953eee-f335-4fdc-9834-caa5a4695476-inventory\") on node \"crc\" DevicePath \"\"" Oct 13 17:47:50 crc kubenswrapper[4720]: I1013 17:47:50.897679 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gkhvk\" (UniqueName: \"kubernetes.io/projected/d7953eee-f335-4fdc-9834-caa5a4695476-kube-api-access-gkhvk\") on node \"crc\" DevicePath \"\"" Oct 13 17:47:51 crc kubenswrapper[4720]: I1013 17:47:51.302134 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gqz6r" event={"ID":"d7953eee-f335-4fdc-9834-caa5a4695476","Type":"ContainerDied","Data":"475be890d17ff7a99fe94164712e241bfd77895bb4edcbf7dc4db912fe5e00b7"} Oct 13 17:47:51 crc kubenswrapper[4720]: I1013 17:47:51.302220 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="475be890d17ff7a99fe94164712e241bfd77895bb4edcbf7dc4db912fe5e00b7" Oct 13 17:47:51 crc kubenswrapper[4720]: I1013 17:47:51.302295 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-gqz6r" Oct 13 17:47:51 crc kubenswrapper[4720]: I1013 17:47:51.375707 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xqf7d"] Oct 13 17:47:51 crc kubenswrapper[4720]: E1013 17:47:51.376841 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7953eee-f335-4fdc-9834-caa5a4695476" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Oct 13 17:47:51 crc kubenswrapper[4720]: I1013 17:47:51.376868 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7953eee-f335-4fdc-9834-caa5a4695476" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Oct 13 17:47:51 crc kubenswrapper[4720]: I1013 17:47:51.377126 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7953eee-f335-4fdc-9834-caa5a4695476" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Oct 13 17:47:51 crc kubenswrapper[4720]: I1013 17:47:51.378360 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xqf7d" Oct 13 17:47:51 crc kubenswrapper[4720]: I1013 17:47:51.382965 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l2fds" Oct 13 17:47:51 crc kubenswrapper[4720]: I1013 17:47:51.383113 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 13 17:47:51 crc kubenswrapper[4720]: I1013 17:47:51.383113 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 13 17:47:51 crc kubenswrapper[4720]: I1013 17:47:51.383183 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 13 17:47:51 crc kubenswrapper[4720]: I1013 17:47:51.404220 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xqf7d"] Oct 13 17:47:51 crc kubenswrapper[4720]: I1013 17:47:51.507392 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2b7d23a3-f722-47e4-85af-fe733bfc5fdc-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-xqf7d\" (UID: \"2b7d23a3-f722-47e4-85af-fe733bfc5fdc\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xqf7d" Oct 13 17:47:51 crc kubenswrapper[4720]: I1013 17:47:51.507498 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b7d23a3-f722-47e4-85af-fe733bfc5fdc-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-xqf7d\" (UID: \"2b7d23a3-f722-47e4-85af-fe733bfc5fdc\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xqf7d" Oct 13 17:47:51 crc kubenswrapper[4720]: I1013 17:47:51.507520 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krqd5\" (UniqueName: \"kubernetes.io/projected/2b7d23a3-f722-47e4-85af-fe733bfc5fdc-kube-api-access-krqd5\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-xqf7d\" (UID: \"2b7d23a3-f722-47e4-85af-fe733bfc5fdc\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xqf7d" Oct 13 17:47:51 crc kubenswrapper[4720]: I1013 17:47:51.608415 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2b7d23a3-f722-47e4-85af-fe733bfc5fdc-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-xqf7d\" (UID: \"2b7d23a3-f722-47e4-85af-fe733bfc5fdc\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xqf7d" Oct 13 17:47:51 crc kubenswrapper[4720]: I1013 17:47:51.608543 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b7d23a3-f722-47e4-85af-fe733bfc5fdc-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-xqf7d\" (UID: \"2b7d23a3-f722-47e4-85af-fe733bfc5fdc\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xqf7d" Oct 13 17:47:51 crc kubenswrapper[4720]: I1013 17:47:51.608566 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krqd5\" (UniqueName: \"kubernetes.io/projected/2b7d23a3-f722-47e4-85af-fe733bfc5fdc-kube-api-access-krqd5\") pod 
\"download-cache-edpm-deployment-openstack-edpm-ipam-xqf7d\" (UID: \"2b7d23a3-f722-47e4-85af-fe733bfc5fdc\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xqf7d" Oct 13 17:47:51 crc kubenswrapper[4720]: I1013 17:47:51.624338 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2b7d23a3-f722-47e4-85af-fe733bfc5fdc-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-xqf7d\" (UID: \"2b7d23a3-f722-47e4-85af-fe733bfc5fdc\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xqf7d" Oct 13 17:47:51 crc kubenswrapper[4720]: I1013 17:47:51.626877 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b7d23a3-f722-47e4-85af-fe733bfc5fdc-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-xqf7d\" (UID: \"2b7d23a3-f722-47e4-85af-fe733bfc5fdc\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xqf7d" Oct 13 17:47:51 crc kubenswrapper[4720]: I1013 17:47:51.628241 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krqd5\" (UniqueName: \"kubernetes.io/projected/2b7d23a3-f722-47e4-85af-fe733bfc5fdc-kube-api-access-krqd5\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-xqf7d\" (UID: \"2b7d23a3-f722-47e4-85af-fe733bfc5fdc\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xqf7d" Oct 13 17:47:51 crc kubenswrapper[4720]: I1013 17:47:51.708176 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xqf7d" Oct 13 17:47:53 crc kubenswrapper[4720]: I1013 17:47:53.801334 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wbqtv" Oct 13 17:47:53 crc kubenswrapper[4720]: I1013 17:47:53.801596 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wbqtv" Oct 13 17:47:53 crc kubenswrapper[4720]: I1013 17:47:53.851692 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wbqtv" Oct 13 17:47:54 crc kubenswrapper[4720]: I1013 17:47:54.436879 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wbqtv" Oct 13 17:47:54 crc kubenswrapper[4720]: I1013 17:47:54.495145 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wbqtv"] Oct 13 17:47:54 crc kubenswrapper[4720]: I1013 17:47:54.826591 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xqf7d"] Oct 13 17:47:55 crc kubenswrapper[4720]: I1013 17:47:55.378485 4720 generic.go:334] "Generic (PLEG): container finished" podID="ea62eeb5-7f86-4555-ba88-fb04f9986df6" containerID="6d73e21a662c5e711f637ed7224f3d5caa40ea500da02af4ccd99b1ca32ed98c" exitCode=0 Oct 13 17:47:55 crc kubenswrapper[4720]: I1013 17:47:55.378902 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wc648" event={"ID":"ea62eeb5-7f86-4555-ba88-fb04f9986df6","Type":"ContainerDied","Data":"6d73e21a662c5e711f637ed7224f3d5caa40ea500da02af4ccd99b1ca32ed98c"} Oct 13 17:47:55 crc kubenswrapper[4720]: I1013 17:47:55.383474 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xqf7d" event={"ID":"2b7d23a3-f722-47e4-85af-fe733bfc5fdc","Type":"ContainerStarted","Data":"23b3a2e56629d312a7650122b68abbf6fe46bc11095ab308c2a3dab51b55e95f"} Oct 13 17:47:56 crc kubenswrapper[4720]: I1013 17:47:56.394763 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wc648" event={"ID":"ea62eeb5-7f86-4555-ba88-fb04f9986df6","Type":"ContainerStarted","Data":"69c84a55654972c356f71352314e6517f2e18ea8b7a95735af93f2904b8ba852"} Oct 13 17:47:56 crc kubenswrapper[4720]: I1013 17:47:56.396914 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wbqtv" podUID="6a8ff58c-4e28-4bc3-8a47-237a960a517c" containerName="registry-server" containerID="cri-o://b0e46a63dfee8f9149386f2466e4d0e53f8bec4dc95661f2ecfef7d1769ca652" gracePeriod=2 Oct 13 17:47:56 crc kubenswrapper[4720]: I1013 17:47:56.397310 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xqf7d" event={"ID":"2b7d23a3-f722-47e4-85af-fe733bfc5fdc","Type":"ContainerStarted","Data":"cce115976f7829c8377bcd89a60079d7eba24371bfe5df55db7ae5d92dac6cee"} Oct 13 17:47:56 crc kubenswrapper[4720]: I1013 17:47:56.431899 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wc648" podStartSLOduration=2.8289079360000002 podStartE2EDuration="10.431878171s" podCreationTimestamp="2025-10-13 17:47:46 +0000 UTC" firstStartedPulling="2025-10-13 17:47:48.256761218 +0000 UTC m=+1413.714011350" lastFinishedPulling="2025-10-13 17:47:55.859731443 +0000 UTC m=+1421.316981585" observedRunningTime="2025-10-13 17:47:56.424259925 +0000 UTC m=+1421.881510077" watchObservedRunningTime="2025-10-13 17:47:56.431878171 +0000 UTC m=+1421.889128323" Oct 13 17:47:56 crc kubenswrapper[4720]: I1013 17:47:56.450723 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xqf7d" podStartSLOduration=4.95559939 podStartE2EDuration="5.450700925s" podCreationTimestamp="2025-10-13 17:47:51 +0000 UTC" firstStartedPulling="2025-10-13 17:47:54.831623278 +0000 UTC m=+1420.288873410" lastFinishedPulling="2025-10-13 17:47:55.326724773 +0000 UTC m=+1420.783974945" observedRunningTime="2025-10-13 17:47:56.44816915 +0000 UTC m=+1421.905419292" watchObservedRunningTime="2025-10-13 17:47:56.450700925 +0000 UTC m=+1421.907951057" Oct 13 17:47:56 crc kubenswrapper[4720]: I1013 17:47:56.887715 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wbqtv" Oct 13 17:47:56 crc kubenswrapper[4720]: I1013 17:47:56.915138 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a8ff58c-4e28-4bc3-8a47-237a960a517c-catalog-content\") pod \"6a8ff58c-4e28-4bc3-8a47-237a960a517c\" (UID: \"6a8ff58c-4e28-4bc3-8a47-237a960a517c\") " Oct 13 17:47:56 crc kubenswrapper[4720]: I1013 17:47:56.915422 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a8ff58c-4e28-4bc3-8a47-237a960a517c-utilities\") pod \"6a8ff58c-4e28-4bc3-8a47-237a960a517c\" (UID: \"6a8ff58c-4e28-4bc3-8a47-237a960a517c\") " Oct 13 17:47:56 crc kubenswrapper[4720]: I1013 17:47:56.915574 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvhxh\" (UniqueName: \"kubernetes.io/projected/6a8ff58c-4e28-4bc3-8a47-237a960a517c-kube-api-access-cvhxh\") pod \"6a8ff58c-4e28-4bc3-8a47-237a960a517c\" (UID: \"6a8ff58c-4e28-4bc3-8a47-237a960a517c\") " Oct 13 17:47:56 crc kubenswrapper[4720]: I1013 17:47:56.917287 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a8ff58c-4e28-4bc3-8a47-237a960a517c-utilities" (OuterVolumeSpecName: "utilities") pod "6a8ff58c-4e28-4bc3-8a47-237a960a517c" (UID: "6a8ff58c-4e28-4bc3-8a47-237a960a517c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 17:47:56 crc kubenswrapper[4720]: I1013 17:47:56.928579 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a8ff58c-4e28-4bc3-8a47-237a960a517c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6a8ff58c-4e28-4bc3-8a47-237a960a517c" (UID: "6a8ff58c-4e28-4bc3-8a47-237a960a517c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 17:47:56 crc kubenswrapper[4720]: I1013 17:47:56.931282 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a8ff58c-4e28-4bc3-8a47-237a960a517c-kube-api-access-cvhxh" (OuterVolumeSpecName: "kube-api-access-cvhxh") pod "6a8ff58c-4e28-4bc3-8a47-237a960a517c" (UID: "6a8ff58c-4e28-4bc3-8a47-237a960a517c"). InnerVolumeSpecName "kube-api-access-cvhxh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:47:56 crc kubenswrapper[4720]: I1013 17:47:56.983298 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wc648" Oct 13 17:47:56 crc kubenswrapper[4720]: I1013 17:47:56.983353 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wc648" Oct 13 17:47:57 crc kubenswrapper[4720]: I1013 17:47:57.018100 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cvhxh\" (UniqueName: \"kubernetes.io/projected/6a8ff58c-4e28-4bc3-8a47-237a960a517c-kube-api-access-cvhxh\") on node \"crc\" DevicePath \"\"" Oct 13 17:47:57 crc kubenswrapper[4720]: I1013 17:47:57.018130 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a8ff58c-4e28-4bc3-8a47-237a960a517c-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 13 17:47:57 crc kubenswrapper[4720]: I1013 17:47:57.018139 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a8ff58c-4e28-4bc3-8a47-237a960a517c-utilities\") on node \"crc\" DevicePath \"\"" Oct 13 17:47:57 crc kubenswrapper[4720]: I1013 17:47:57.409260 4720 generic.go:334] "Generic (PLEG): container finished" podID="6a8ff58c-4e28-4bc3-8a47-237a960a517c" containerID="b0e46a63dfee8f9149386f2466e4d0e53f8bec4dc95661f2ecfef7d1769ca652" exitCode=0 Oct 13 17:47:57 crc kubenswrapper[4720]: I1013 17:47:57.409324 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wbqtv" Oct 13 17:47:57 crc kubenswrapper[4720]: I1013 17:47:57.409356 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wbqtv" event={"ID":"6a8ff58c-4e28-4bc3-8a47-237a960a517c","Type":"ContainerDied","Data":"b0e46a63dfee8f9149386f2466e4d0e53f8bec4dc95661f2ecfef7d1769ca652"} Oct 13 17:47:57 crc kubenswrapper[4720]: I1013 17:47:57.409449 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wbqtv" event={"ID":"6a8ff58c-4e28-4bc3-8a47-237a960a517c","Type":"ContainerDied","Data":"f9a11a7fa86558891985d2f98516cf6263cd5fa48168d7a7d05bd15783f8d962"} Oct 13 17:47:57 crc kubenswrapper[4720]: I1013 17:47:57.409472 4720 scope.go:117] "RemoveContainer" containerID="b0e46a63dfee8f9149386f2466e4d0e53f8bec4dc95661f2ecfef7d1769ca652" Oct 13 17:47:57 crc kubenswrapper[4720]: I1013 17:47:57.436831 4720 scope.go:117] "RemoveContainer" containerID="914910f5873e8479919e445bd88f52aa3eafb8c7d5d52ef58856cbcf8d17bcb0" Oct 13 17:47:57 crc kubenswrapper[4720]: I1013 17:47:57.446894 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wbqtv"] Oct 13 17:47:57 crc kubenswrapper[4720]: I1013 17:47:57.458124 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wbqtv"] Oct 13 17:47:57 crc kubenswrapper[4720]: I1013 17:47:57.478844 4720 scope.go:117] "RemoveContainer" containerID="e9eaeb333bff01a49fc96127842ff2db4cbb3117dc267f5a585155bfc30a01c1" Oct 13 17:47:57 crc kubenswrapper[4720]: I1013 17:47:57.515633 4720 scope.go:117] "RemoveContainer" containerID="b0e46a63dfee8f9149386f2466e4d0e53f8bec4dc95661f2ecfef7d1769ca652" Oct 13 17:47:57 crc kubenswrapper[4720]: E1013 17:47:57.516039 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"b0e46a63dfee8f9149386f2466e4d0e53f8bec4dc95661f2ecfef7d1769ca652\": container with ID starting with b0e46a63dfee8f9149386f2466e4d0e53f8bec4dc95661f2ecfef7d1769ca652 not found: ID does not exist" containerID="b0e46a63dfee8f9149386f2466e4d0e53f8bec4dc95661f2ecfef7d1769ca652" Oct 13 17:47:57 crc kubenswrapper[4720]: I1013 17:47:57.516072 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0e46a63dfee8f9149386f2466e4d0e53f8bec4dc95661f2ecfef7d1769ca652"} err="failed to get container status \"b0e46a63dfee8f9149386f2466e4d0e53f8bec4dc95661f2ecfef7d1769ca652\": rpc error: code = NotFound desc = could not find container \"b0e46a63dfee8f9149386f2466e4d0e53f8bec4dc95661f2ecfef7d1769ca652\": container with ID starting with b0e46a63dfee8f9149386f2466e4d0e53f8bec4dc95661f2ecfef7d1769ca652 not found: ID does not exist" Oct 13 17:47:57 crc kubenswrapper[4720]: I1013 17:47:57.516095 4720 scope.go:117] "RemoveContainer" containerID="914910f5873e8479919e445bd88f52aa3eafb8c7d5d52ef58856cbcf8d17bcb0" Oct 13 17:47:57 crc kubenswrapper[4720]: E1013 17:47:57.516559 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"914910f5873e8479919e445bd88f52aa3eafb8c7d5d52ef58856cbcf8d17bcb0\": container with ID starting with 914910f5873e8479919e445bd88f52aa3eafb8c7d5d52ef58856cbcf8d17bcb0 not found: ID does not exist" containerID="914910f5873e8479919e445bd88f52aa3eafb8c7d5d52ef58856cbcf8d17bcb0" Oct 13 17:47:57 crc kubenswrapper[4720]: I1013 17:47:57.516593 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"914910f5873e8479919e445bd88f52aa3eafb8c7d5d52ef58856cbcf8d17bcb0"} err="failed to get container status \"914910f5873e8479919e445bd88f52aa3eafb8c7d5d52ef58856cbcf8d17bcb0\": rpc error: code = NotFound desc = could not find container \"914910f5873e8479919e445bd88f52aa3eafb8c7d5d52ef58856cbcf8d17bcb0\": container with ID starting with 914910f5873e8479919e445bd88f52aa3eafb8c7d5d52ef58856cbcf8d17bcb0 not found: ID does not exist" Oct 13 17:47:57 crc kubenswrapper[4720]: I1013 17:47:57.516612 4720 scope.go:117] "RemoveContainer" containerID="e9eaeb333bff01a49fc96127842ff2db4cbb3117dc267f5a585155bfc30a01c1" Oct 13 17:47:57 crc kubenswrapper[4720]: E1013 17:47:57.516838 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9eaeb333bff01a49fc96127842ff2db4cbb3117dc267f5a585155bfc30a01c1\": container with ID starting with e9eaeb333bff01a49fc96127842ff2db4cbb3117dc267f5a585155bfc30a01c1 not found: ID does not exist" containerID="e9eaeb333bff01a49fc96127842ff2db4cbb3117dc267f5a585155bfc30a01c1" Oct 13 17:47:57 crc kubenswrapper[4720]: I1013 17:47:57.516876 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9eaeb333bff01a49fc96127842ff2db4cbb3117dc267f5a585155bfc30a01c1"} err="failed to get container status \"e9eaeb333bff01a49fc96127842ff2db4cbb3117dc267f5a585155bfc30a01c1\": rpc error: code = NotFound desc = could not find container \"e9eaeb333bff01a49fc96127842ff2db4cbb3117dc267f5a585155bfc30a01c1\": container with ID starting with e9eaeb333bff01a49fc96127842ff2db4cbb3117dc267f5a585155bfc30a01c1 not found: ID does not exist" Oct 13 17:47:58 crc kubenswrapper[4720]: I1013 17:47:58.027166 4720 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/certified-operators-wc648" podUID="ea62eeb5-7f86-4555-ba88-fb04f9986df6" containerName="registry-server" probeResult="failure" output=< Oct 13 17:47:58 crc kubenswrapper[4720]: timeout: failed to connect service ":50051" within 1s Oct 13 17:47:58 crc kubenswrapper[4720]: > Oct 13 17:47:59 crc kubenswrapper[4720]: I1013 17:47:59.196579 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a8ff58c-4e28-4bc3-8a47-237a960a517c" path="/var/lib/kubelet/pods/6a8ff58c-4e28-4bc3-8a47-237a960a517c/volumes" Oct 13 17:48:07 crc kubenswrapper[4720]: I1013 17:48:07.067963 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wc648" Oct 13 17:48:07 crc kubenswrapper[4720]: I1013 17:48:07.135716 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wc648" Oct 13 17:48:07 crc kubenswrapper[4720]: I1013 17:48:07.225505 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wc648"] Oct 13 17:48:07 crc kubenswrapper[4720]: I1013 17:48:07.328887 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-b6t2r"] Oct 13 17:48:07 crc kubenswrapper[4720]: I1013 17:48:07.329135 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-b6t2r" podUID="c05230f9-fc44-4ffe-98bd-fcca7521d582" containerName="registry-server" containerID="cri-o://90988b96cd031c9966c7c9c333bac005b1c4adaa3d07879f2752515f38a1288f" gracePeriod=2 Oct 13 17:48:07 crc kubenswrapper[4720]: I1013 17:48:07.524473 4720 generic.go:334] "Generic (PLEG): container finished" podID="c05230f9-fc44-4ffe-98bd-fcca7521d582" containerID="90988b96cd031c9966c7c9c333bac005b1c4adaa3d07879f2752515f38a1288f" exitCode=0 Oct 13 17:48:07 crc kubenswrapper[4720]: I1013 17:48:07.524545 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b6t2r" event={"ID":"c05230f9-fc44-4ffe-98bd-fcca7521d582","Type":"ContainerDied","Data":"90988b96cd031c9966c7c9c333bac005b1c4adaa3d07879f2752515f38a1288f"} Oct 13 17:48:07 crc kubenswrapper[4720]: I1013 17:48:07.795661 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-b6t2r" Oct 13 17:48:07 crc kubenswrapper[4720]: I1013 17:48:07.897847 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c05230f9-fc44-4ffe-98bd-fcca7521d582-catalog-content\") pod \"c05230f9-fc44-4ffe-98bd-fcca7521d582\" (UID: \"c05230f9-fc44-4ffe-98bd-fcca7521d582\") " Oct 13 17:48:07 crc kubenswrapper[4720]: I1013 17:48:07.898019 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jjrb\" (UniqueName: \"kubernetes.io/projected/c05230f9-fc44-4ffe-98bd-fcca7521d582-kube-api-access-5jjrb\") pod \"c05230f9-fc44-4ffe-98bd-fcca7521d582\" (UID: \"c05230f9-fc44-4ffe-98bd-fcca7521d582\") " Oct 13 17:48:07 crc kubenswrapper[4720]: I1013 17:48:07.898070 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c05230f9-fc44-4ffe-98bd-fcca7521d582-utilities\") pod \"c05230f9-fc44-4ffe-98bd-fcca7521d582\" (UID: \"c05230f9-fc44-4ffe-98bd-fcca7521d582\") " Oct 13 17:48:07 crc kubenswrapper[4720]: I1013 17:48:07.898965 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c05230f9-fc44-4ffe-98bd-fcca7521d582-utilities" (OuterVolumeSpecName: "utilities") pod "c05230f9-fc44-4ffe-98bd-fcca7521d582" (UID: "c05230f9-fc44-4ffe-98bd-fcca7521d582"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 17:48:07 crc kubenswrapper[4720]: I1013 17:48:07.919351 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c05230f9-fc44-4ffe-98bd-fcca7521d582-kube-api-access-5jjrb" (OuterVolumeSpecName: "kube-api-access-5jjrb") pod "c05230f9-fc44-4ffe-98bd-fcca7521d582" (UID: "c05230f9-fc44-4ffe-98bd-fcca7521d582"). InnerVolumeSpecName "kube-api-access-5jjrb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:48:07 crc kubenswrapper[4720]: I1013 17:48:07.956561 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c05230f9-fc44-4ffe-98bd-fcca7521d582-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c05230f9-fc44-4ffe-98bd-fcca7521d582" (UID: "c05230f9-fc44-4ffe-98bd-fcca7521d582"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 17:48:07 crc kubenswrapper[4720]: I1013 17:48:07.999555 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c05230f9-fc44-4ffe-98bd-fcca7521d582-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 13 17:48:07 crc kubenswrapper[4720]: I1013 17:48:07.999589 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jjrb\" (UniqueName: \"kubernetes.io/projected/c05230f9-fc44-4ffe-98bd-fcca7521d582-kube-api-access-5jjrb\") on node \"crc\" DevicePath \"\"" Oct 13 17:48:07 crc kubenswrapper[4720]: I1013 17:48:07.999600 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c05230f9-fc44-4ffe-98bd-fcca7521d582-utilities\") on node \"crc\" DevicePath \"\"" Oct 13 17:48:08 crc kubenswrapper[4720]: I1013 17:48:08.536563 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b6t2r" event={"ID":"c05230f9-fc44-4ffe-98bd-fcca7521d582","Type":"ContainerDied","Data":"4563fa0470400ee9719b1668789d51bd60d501d12f5d8a7ee9bba16ea2c1eff5"} Oct 13 17:48:08 crc kubenswrapper[4720]: I1013 17:48:08.536874 4720 scope.go:117] "RemoveContainer" containerID="90988b96cd031c9966c7c9c333bac005b1c4adaa3d07879f2752515f38a1288f" Oct 13 17:48:08 crc kubenswrapper[4720]: I1013 17:48:08.536653 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b6t2r" Oct 13 17:48:08 crc kubenswrapper[4720]: I1013 17:48:08.561451 4720 scope.go:117] "RemoveContainer" containerID="ce8a264a081c49fffbc9ed7c4af381961639023ac029bc3aadc7183546fb82b2" Oct 13 17:48:08 crc kubenswrapper[4720]: I1013 17:48:08.573895 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-b6t2r"] Oct 13 17:48:08 crc kubenswrapper[4720]: I1013 17:48:08.589427 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-b6t2r"] Oct 13 17:48:08 crc kubenswrapper[4720]: I1013 17:48:08.612851 4720 scope.go:117] "RemoveContainer" containerID="8fb9eb8429009ae01343d9c2d5d08d89637b37a20633dc280105b142db2b0faf" Oct 13 17:48:09 crc kubenswrapper[4720]: I1013 17:48:09.185484 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c05230f9-fc44-4ffe-98bd-fcca7521d582" path="/var/lib/kubelet/pods/c05230f9-fc44-4ffe-98bd-fcca7521d582/volumes" Oct 13 17:48:15 crc kubenswrapper[4720]: I1013 17:48:15.212364 4720 patch_prober.go:28] interesting pod/machine-config-daemon-htwnl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 17:48:15 crc kubenswrapper[4720]: I1013 17:48:15.213252 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 17:48:39 crc kubenswrapper[4720]: I1013 17:48:39.095136 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-h7rfq"] Oct 13 17:48:39 crc kubenswrapper[4720]: E1013 17:48:39.096911 4720 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="c05230f9-fc44-4ffe-98bd-fcca7521d582" containerName="extract-content" Oct 13 17:48:39 crc kubenswrapper[4720]: I1013 17:48:39.096935 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="c05230f9-fc44-4ffe-98bd-fcca7521d582" containerName="extract-content" Oct 13 17:48:39 crc kubenswrapper[4720]: E1013 17:48:39.096963 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a8ff58c-4e28-4bc3-8a47-237a960a517c" containerName="extract-utilities" Oct 13 17:48:39 crc kubenswrapper[4720]: I1013 17:48:39.096974 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a8ff58c-4e28-4bc3-8a47-237a960a517c" containerName="extract-utilities" Oct 13 17:48:39 crc kubenswrapper[4720]: E1013 17:48:39.096992 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a8ff58c-4e28-4bc3-8a47-237a960a517c" containerName="registry-server" Oct 13 17:48:39 crc kubenswrapper[4720]: I1013 17:48:39.097002 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a8ff58c-4e28-4bc3-8a47-237a960a517c" containerName="registry-server" Oct 13 17:48:39 crc kubenswrapper[4720]: E1013 17:48:39.097017 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c05230f9-fc44-4ffe-98bd-fcca7521d582" containerName="registry-server" Oct 13 17:48:39 crc kubenswrapper[4720]: I1013 17:48:39.097027 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="c05230f9-fc44-4ffe-98bd-fcca7521d582" containerName="registry-server" Oct 13 17:48:39 crc kubenswrapper[4720]: E1013 17:48:39.097067 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c05230f9-fc44-4ffe-98bd-fcca7521d582" containerName="extract-utilities" Oct 13 17:48:39 crc kubenswrapper[4720]: I1013 17:48:39.097076 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="c05230f9-fc44-4ffe-98bd-fcca7521d582" containerName="extract-utilities" Oct 13 17:48:39 crc kubenswrapper[4720]: E1013 17:48:39.097103 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a8ff58c-4e28-4bc3-8a47-237a960a517c" containerName="extract-content" Oct 13 17:48:39 crc kubenswrapper[4720]: I1013 17:48:39.097113 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a8ff58c-4e28-4bc3-8a47-237a960a517c" containerName="extract-content" Oct 13 17:48:39 crc kubenswrapper[4720]: I1013 17:48:39.097474 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="c05230f9-fc44-4ffe-98bd-fcca7521d582" containerName="registry-server" Oct 13 17:48:39 crc kubenswrapper[4720]: I1013 17:48:39.097521 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a8ff58c-4e28-4bc3-8a47-237a960a517c" containerName="registry-server" Oct 13 17:48:39 crc kubenswrapper[4720]: I1013 17:48:39.099474 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-h7rfq" Oct 13 17:48:39 crc kubenswrapper[4720]: I1013 17:48:39.145262 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h7rfq"] Oct 13 17:48:39 crc kubenswrapper[4720]: I1013 17:48:39.191477 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5034f8f1-adf9-4b90-bb73-eaa719126486-catalog-content\") pod \"community-operators-h7rfq\" (UID: \"5034f8f1-adf9-4b90-bb73-eaa719126486\") " pod="openshift-marketplace/community-operators-h7rfq" Oct 13 17:48:39 crc kubenswrapper[4720]: I1013 17:48:39.191558 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5034f8f1-adf9-4b90-bb73-eaa719126486-utilities\") pod \"community-operators-h7rfq\" (UID: \"5034f8f1-adf9-4b90-bb73-eaa719126486\") " pod="openshift-marketplace/community-operators-h7rfq" Oct 13 17:48:39 crc kubenswrapper[4720]: I1013 17:48:39.191656 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbfws\" (UniqueName: \"kubernetes.io/projected/5034f8f1-adf9-4b90-bb73-eaa719126486-kube-api-access-hbfws\") pod \"community-operators-h7rfq\" (UID: \"5034f8f1-adf9-4b90-bb73-eaa719126486\") " pod="openshift-marketplace/community-operators-h7rfq" Oct 13 17:48:39 crc kubenswrapper[4720]: I1013 17:48:39.293957 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbfws\" (UniqueName: \"kubernetes.io/projected/5034f8f1-adf9-4b90-bb73-eaa719126486-kube-api-access-hbfws\") pod \"community-operators-h7rfq\" (UID: \"5034f8f1-adf9-4b90-bb73-eaa719126486\") " pod="openshift-marketplace/community-operators-h7rfq" Oct 13 17:48:39 crc kubenswrapper[4720]: I1013 17:48:39.294137 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5034f8f1-adf9-4b90-bb73-eaa719126486-catalog-content\") pod \"community-operators-h7rfq\" (UID: \"5034f8f1-adf9-4b90-bb73-eaa719126486\") " pod="openshift-marketplace/community-operators-h7rfq" Oct 13 17:48:39 crc kubenswrapper[4720]: I1013 17:48:39.294240 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5034f8f1-adf9-4b90-bb73-eaa719126486-utilities\") pod \"community-operators-h7rfq\" (UID: \"5034f8f1-adf9-4b90-bb73-eaa719126486\") " pod="openshift-marketplace/community-operators-h7rfq" Oct 13 17:48:39 crc kubenswrapper[4720]: I1013 17:48:39.294701 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5034f8f1-adf9-4b90-bb73-eaa719126486-utilities\") pod \"community-operators-h7rfq\" (UID: \"5034f8f1-adf9-4b90-bb73-eaa719126486\") " pod="openshift-marketplace/community-operators-h7rfq" Oct 13 17:48:39 crc kubenswrapper[4720]: I1013 17:48:39.294739 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5034f8f1-adf9-4b90-bb73-eaa719126486-catalog-content\") pod \"community-operators-h7rfq\" (UID: \"5034f8f1-adf9-4b90-bb73-eaa719126486\") " pod="openshift-marketplace/community-operators-h7rfq" Oct 13 17:48:39 crc kubenswrapper[4720]: I1013 17:48:39.318689 4720 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-hbfws\" (UniqueName: \"kubernetes.io/projected/5034f8f1-adf9-4b90-bb73-eaa719126486-kube-api-access-hbfws\") pod \"community-operators-h7rfq\" (UID: \"5034f8f1-adf9-4b90-bb73-eaa719126486\") " pod="openshift-marketplace/community-operators-h7rfq" Oct 13 17:48:39 crc kubenswrapper[4720]: I1013 17:48:39.428679 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h7rfq" Oct 13 17:48:39 crc kubenswrapper[4720]: I1013 17:48:39.947086 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h7rfq"] Oct 13 17:48:40 crc kubenswrapper[4720]: I1013 17:48:40.954755 4720 generic.go:334] "Generic (PLEG): container finished" podID="5034f8f1-adf9-4b90-bb73-eaa719126486" containerID="bcba894a9effa1dd95db1c447196363784d84cba76c1516bcb0a92c39005d101" exitCode=0 Oct 13 17:48:40 crc kubenswrapper[4720]: I1013 17:48:40.954810 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h7rfq" event={"ID":"5034f8f1-adf9-4b90-bb73-eaa719126486","Type":"ContainerDied","Data":"bcba894a9effa1dd95db1c447196363784d84cba76c1516bcb0a92c39005d101"} Oct 13 17:48:40 crc kubenswrapper[4720]: I1013 17:48:40.955158 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h7rfq" event={"ID":"5034f8f1-adf9-4b90-bb73-eaa719126486","Type":"ContainerStarted","Data":"62e9a88c8a7382b67ce2f4b607bd38360b4b4924cc62e03c11a4a099e74b7493"} Oct 13 17:48:42 crc kubenswrapper[4720]: I1013 17:48:42.982383 4720 generic.go:334] "Generic (PLEG): container finished" podID="5034f8f1-adf9-4b90-bb73-eaa719126486" containerID="f43497e4b13f6d6e7e32f4a2f131fbc763edbae9324a1ed576cc9339b51e43fb" exitCode=0 Oct 13 17:48:42 crc kubenswrapper[4720]: I1013 17:48:42.982499 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h7rfq" event={"ID":"5034f8f1-adf9-4b90-bb73-eaa719126486","Type":"ContainerDied","Data":"f43497e4b13f6d6e7e32f4a2f131fbc763edbae9324a1ed576cc9339b51e43fb"} Oct 13 17:48:43 crc kubenswrapper[4720]: I1013 17:48:43.995114 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h7rfq" event={"ID":"5034f8f1-adf9-4b90-bb73-eaa719126486","Type":"ContainerStarted","Data":"7002160e37252c1bab92399753f3da7593c0c3c36e1982a67e59627484e1d3dc"} Oct 13 17:48:44 crc kubenswrapper[4720]: I1013 17:48:44.022335 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-h7rfq" podStartSLOduration=2.347464894 podStartE2EDuration="5.022306849s" podCreationTimestamp="2025-10-13 17:48:39 +0000 UTC" firstStartedPulling="2025-10-13 17:48:40.957536166 +0000 UTC m=+1466.414786298" lastFinishedPulling="2025-10-13 17:48:43.632378101 +0000 UTC m=+1469.089628253" observedRunningTime="2025-10-13 17:48:44.01610835 +0000 UTC m=+1469.473358502" watchObservedRunningTime="2025-10-13 17:48:44.022306849 +0000 UTC m=+1469.479556991" Oct 13 17:48:45 crc kubenswrapper[4720]: I1013 17:48:45.212629 4720 patch_prober.go:28] interesting pod/machine-config-daemon-htwnl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 17:48:45 crc kubenswrapper[4720]: I1013 17:48:45.212709 4720 prober.go:107] "Probe 
failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 17:48:45 crc kubenswrapper[4720]: I1013 17:48:45.212762 4720 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" Oct 13 17:48:45 crc kubenswrapper[4720]: I1013 17:48:45.213743 4720 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"eabc7e2122617e64bac6ae4330db40b5b4e9867a18ff888fd3f7afb5cfc64f2f"} pod="openshift-machine-config-operator/machine-config-daemon-htwnl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 13 17:48:45 crc kubenswrapper[4720]: I1013 17:48:45.213821 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" containerName="machine-config-daemon" containerID="cri-o://eabc7e2122617e64bac6ae4330db40b5b4e9867a18ff888fd3f7afb5cfc64f2f" gracePeriod=600 Oct 13 17:48:45 crc kubenswrapper[4720]: E1013 17:48:45.350079 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" Oct 13 17:48:46 crc kubenswrapper[4720]: I1013 17:48:46.017156 4720 generic.go:334] "Generic (PLEG): container finished" podID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" containerID="eabc7e2122617e64bac6ae4330db40b5b4e9867a18ff888fd3f7afb5cfc64f2f" exitCode=0 Oct 13 17:48:46 crc kubenswrapper[4720]: I1013 17:48:46.017208 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" event={"ID":"ce442c80-fcde-4b79-b6f9-f8f25771dfd4","Type":"ContainerDied","Data":"eabc7e2122617e64bac6ae4330db40b5b4e9867a18ff888fd3f7afb5cfc64f2f"} Oct 13 17:48:46 crc kubenswrapper[4720]: I1013 17:48:46.017245 4720 scope.go:117] "RemoveContainer" containerID="ee3a413fb70fae37f659cff124cc855967143ab2544217b22584306b14bb1b9a" Oct 13 17:48:46 crc kubenswrapper[4720]: I1013 17:48:46.017823 4720 scope.go:117] "RemoveContainer" containerID="eabc7e2122617e64bac6ae4330db40b5b4e9867a18ff888fd3f7afb5cfc64f2f" Oct 13 17:48:46 crc kubenswrapper[4720]: E1013 17:48:46.018062 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" Oct 13 17:48:49 crc kubenswrapper[4720]: I1013 17:48:49.428983 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-h7rfq" Oct 13 17:48:49 crc kubenswrapper[4720]: I1013 17:48:49.429416 4720 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-h7rfq" Oct 13 17:48:49 crc kubenswrapper[4720]: I1013 17:48:49.490650 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-h7rfq" Oct 13 17:48:50 crc kubenswrapper[4720]: I1013 17:48:50.135298 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-h7rfq" Oct 13 17:48:50 crc kubenswrapper[4720]: I1013 17:48:50.187611 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-h7rfq"] Oct 13 17:48:52 crc kubenswrapper[4720]: I1013 17:48:52.093617 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-h7rfq" podUID="5034f8f1-adf9-4b90-bb73-eaa719126486" containerName="registry-server" containerID="cri-o://7002160e37252c1bab92399753f3da7593c0c3c36e1982a67e59627484e1d3dc" gracePeriod=2 Oct 13 17:48:53 crc kubenswrapper[4720]: I1013 17:48:53.112396 4720 generic.go:334] "Generic (PLEG): container finished" podID="5034f8f1-adf9-4b90-bb73-eaa719126486" containerID="7002160e37252c1bab92399753f3da7593c0c3c36e1982a67e59627484e1d3dc" exitCode=0 Oct 13 17:48:53 crc kubenswrapper[4720]: I1013 17:48:53.112662 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h7rfq" event={"ID":"5034f8f1-adf9-4b90-bb73-eaa719126486","Type":"ContainerDied","Data":"7002160e37252c1bab92399753f3da7593c0c3c36e1982a67e59627484e1d3dc"} Oct 13 17:48:53 crc kubenswrapper[4720]: I1013 17:48:53.112694 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h7rfq" event={"ID":"5034f8f1-adf9-4b90-bb73-eaa719126486","Type":"ContainerDied","Data":"62e9a88c8a7382b67ce2f4b607bd38360b4b4924cc62e03c11a4a099e74b7493"} Oct 13 17:48:53 crc kubenswrapper[4720]: I1013 17:48:53.112708 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62e9a88c8a7382b67ce2f4b607bd38360b4b4924cc62e03c11a4a099e74b7493" Oct 13 17:48:53 crc kubenswrapper[4720]: I1013 17:48:53.151101 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-h7rfq" Oct 13 17:48:53 crc kubenswrapper[4720]: I1013 17:48:53.274373 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5034f8f1-adf9-4b90-bb73-eaa719126486-utilities\") pod \"5034f8f1-adf9-4b90-bb73-eaa719126486\" (UID: \"5034f8f1-adf9-4b90-bb73-eaa719126486\") " Oct 13 17:48:53 crc kubenswrapper[4720]: I1013 17:48:53.274884 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5034f8f1-adf9-4b90-bb73-eaa719126486-catalog-content\") pod \"5034f8f1-adf9-4b90-bb73-eaa719126486\" (UID: \"5034f8f1-adf9-4b90-bb73-eaa719126486\") " Oct 13 17:48:53 crc kubenswrapper[4720]: I1013 17:48:53.274951 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbfws\" (UniqueName: \"kubernetes.io/projected/5034f8f1-adf9-4b90-bb73-eaa719126486-kube-api-access-hbfws\") pod \"5034f8f1-adf9-4b90-bb73-eaa719126486\" (UID: \"5034f8f1-adf9-4b90-bb73-eaa719126486\") " Oct 13 17:48:53 crc kubenswrapper[4720]: I1013 17:48:53.275331 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5034f8f1-adf9-4b90-bb73-eaa719126486-utilities" (OuterVolumeSpecName: "utilities") pod "5034f8f1-adf9-4b90-bb73-eaa719126486" (UID: "5034f8f1-adf9-4b90-bb73-eaa719126486"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 17:48:53 crc kubenswrapper[4720]: I1013 17:48:53.275548 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5034f8f1-adf9-4b90-bb73-eaa719126486-utilities\") on node \"crc\" DevicePath \"\"" Oct 13 17:48:53 crc kubenswrapper[4720]: I1013 17:48:53.291383 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5034f8f1-adf9-4b90-bb73-eaa719126486-kube-api-access-hbfws" (OuterVolumeSpecName: "kube-api-access-hbfws") pod "5034f8f1-adf9-4b90-bb73-eaa719126486" (UID: "5034f8f1-adf9-4b90-bb73-eaa719126486"). InnerVolumeSpecName "kube-api-access-hbfws". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:48:53 crc kubenswrapper[4720]: I1013 17:48:53.319940 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5034f8f1-adf9-4b90-bb73-eaa719126486-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5034f8f1-adf9-4b90-bb73-eaa719126486" (UID: "5034f8f1-adf9-4b90-bb73-eaa719126486"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 17:48:53 crc kubenswrapper[4720]: I1013 17:48:53.378175 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5034f8f1-adf9-4b90-bb73-eaa719126486-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 13 17:48:53 crc kubenswrapper[4720]: I1013 17:48:53.378300 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbfws\" (UniqueName: \"kubernetes.io/projected/5034f8f1-adf9-4b90-bb73-eaa719126486-kube-api-access-hbfws\") on node \"crc\" DevicePath \"\"" Oct 13 17:48:54 crc kubenswrapper[4720]: I1013 17:48:54.124039 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-h7rfq" Oct 13 17:48:54 crc kubenswrapper[4720]: I1013 17:48:54.180842 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-h7rfq"] Oct 13 17:48:54 crc kubenswrapper[4720]: I1013 17:48:54.197245 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-h7rfq"] Oct 13 17:48:55 crc kubenswrapper[4720]: I1013 17:48:55.185461 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5034f8f1-adf9-4b90-bb73-eaa719126486" path="/var/lib/kubelet/pods/5034f8f1-adf9-4b90-bb73-eaa719126486/volumes" Oct 13 17:48:56 crc kubenswrapper[4720]: I1013 17:48:56.168953 4720 scope.go:117] "RemoveContainer" containerID="eabc7e2122617e64bac6ae4330db40b5b4e9867a18ff888fd3f7afb5cfc64f2f" Oct 13 17:48:56 crc kubenswrapper[4720]: E1013 17:48:56.169203 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" Oct 13 17:49:09 crc kubenswrapper[4720]: I1013 17:49:09.168546 4720 scope.go:117] "RemoveContainer" containerID="eabc7e2122617e64bac6ae4330db40b5b4e9867a18ff888fd3f7afb5cfc64f2f" Oct 13 17:49:09 crc kubenswrapper[4720]: E1013 17:49:09.170550 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" Oct 13 17:49:10 crc kubenswrapper[4720]: I1013 17:49:10.042936 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-6rql4"] Oct 13 17:49:10 crc kubenswrapper[4720]: I1013 17:49:10.066920 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-kd8vn"] Oct 13 17:49:10 crc kubenswrapper[4720]: I1013 17:49:10.076235 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-kd8vn"] Oct 13 17:49:10 crc kubenswrapper[4720]: I1013 17:49:10.084377 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-6rql4"] Oct 13 17:49:11 crc kubenswrapper[4720]: I1013 17:49:11.038651 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-tp2xb"] Oct 13 17:49:11 crc kubenswrapper[4720]: I1013 17:49:11.047057 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-tp2xb"] Oct 13 17:49:11 crc kubenswrapper[4720]: I1013 17:49:11.182222 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="828a8af5-455b-48ff-ab12-433c76df235c" path="/var/lib/kubelet/pods/828a8af5-455b-48ff-ab12-433c76df235c/volumes" Oct 13 17:49:11 crc kubenswrapper[4720]: I1013 17:49:11.182984 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a331f8b0-56f2-48ce-a85c-a1ef1c57d7dc" path="/var/lib/kubelet/pods/a331f8b0-56f2-48ce-a85c-a1ef1c57d7dc/volumes" Oct 13 17:49:11 crc kubenswrapper[4720]: I1013 17:49:11.183548 4720 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e022bab5-923a-4ce0-9027-d3dbffd6aa51" path="/var/lib/kubelet/pods/e022bab5-923a-4ce0-9027-d3dbffd6aa51/volumes" Oct 13 17:49:17 crc kubenswrapper[4720]: I1013 17:49:17.781993 4720 scope.go:117] "RemoveContainer" containerID="f8de1c1c2a52d5e983c06301e7add77830473448c98f55a1fef2c9c3f5f89bcb" Oct 13 17:49:17 crc kubenswrapper[4720]: I1013 17:49:17.821338 4720 scope.go:117] "RemoveContainer" containerID="6af41f4743bec3c706117e16f192c7dddac9166aebc88ef11636a84354d0a274" Oct 13 17:49:17 crc kubenswrapper[4720]: I1013 17:49:17.898661 4720 scope.go:117] "RemoveContainer" containerID="b9a3b9381496578cc7754c68200a759343b2ca01c293742b49ff69901105eb48" Oct 13 17:49:20 crc kubenswrapper[4720]: I1013 17:49:20.028076 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-3f72-account-create-gcnc4"] Oct 13 17:49:20 crc kubenswrapper[4720]: I1013 17:49:20.035938 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-8fd1-account-create-mx8jn"] Oct 13 17:49:20 crc kubenswrapper[4720]: I1013 17:49:20.044531 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-3f72-account-create-gcnc4"] Oct 13 17:49:20 crc kubenswrapper[4720]: I1013 17:49:20.051499 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-8fd1-account-create-mx8jn"] Oct 13 17:49:21 crc kubenswrapper[4720]: I1013 17:49:21.177092 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="360a98a6-bd5c-403b-a307-81dbb7396962" path="/var/lib/kubelet/pods/360a98a6-bd5c-403b-a307-81dbb7396962/volumes" Oct 13 17:49:21 crc kubenswrapper[4720]: I1013 17:49:21.177665 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ea781a5-9df7-4e60-909b-0e437f4c512b" path="/var/lib/kubelet/pods/7ea781a5-9df7-4e60-909b-0e437f4c512b/volumes" Oct 13 17:49:22 crc kubenswrapper[4720]: I1013 17:49:22.045111 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-a6b5-account-create-s5d2r"] Oct 13 17:49:22 crc kubenswrapper[4720]: I1013 17:49:22.057738 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-a6b5-account-create-s5d2r"] Oct 13 17:49:23 crc kubenswrapper[4720]: I1013 17:49:23.169744 4720 scope.go:117] "RemoveContainer" containerID="eabc7e2122617e64bac6ae4330db40b5b4e9867a18ff888fd3f7afb5cfc64f2f" Oct 13 17:49:23 crc kubenswrapper[4720]: E1013 17:49:23.170376 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" Oct 13 17:49:23 crc kubenswrapper[4720]: I1013 17:49:23.181317 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="222effe0-fa9c-41a0-ae72-6ce0248ee52d" path="/var/lib/kubelet/pods/222effe0-fa9c-41a0-ae72-6ce0248ee52d/volumes" Oct 13 17:49:31 crc kubenswrapper[4720]: I1013 17:49:31.529474 4720 generic.go:334] "Generic (PLEG): container finished" podID="2b7d23a3-f722-47e4-85af-fe733bfc5fdc" containerID="cce115976f7829c8377bcd89a60079d7eba24371bfe5df55db7ae5d92dac6cee" exitCode=0 Oct 13 17:49:31 crc kubenswrapper[4720]: I1013 17:49:31.529599 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xqf7d" event={"ID":"2b7d23a3-f722-47e4-85af-fe733bfc5fdc","Type":"ContainerDied","Data":"cce115976f7829c8377bcd89a60079d7eba24371bfe5df55db7ae5d92dac6cee"} Oct 13 17:49:33 crc kubenswrapper[4720]: I1013 17:49:33.059763 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xqf7d" Oct 13 17:49:33 crc kubenswrapper[4720]: I1013 17:49:33.183433 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b7d23a3-f722-47e4-85af-fe733bfc5fdc-inventory\") pod \"2b7d23a3-f722-47e4-85af-fe733bfc5fdc\" (UID: \"2b7d23a3-f722-47e4-85af-fe733bfc5fdc\") " Oct 13 17:49:33 crc kubenswrapper[4720]: I1013 17:49:33.183545 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2b7d23a3-f722-47e4-85af-fe733bfc5fdc-ssh-key\") pod \"2b7d23a3-f722-47e4-85af-fe733bfc5fdc\" (UID: \"2b7d23a3-f722-47e4-85af-fe733bfc5fdc\") " Oct 13 17:49:33 crc kubenswrapper[4720]: I1013 17:49:33.183608 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-krqd5\" (UniqueName: \"kubernetes.io/projected/2b7d23a3-f722-47e4-85af-fe733bfc5fdc-kube-api-access-krqd5\") pod \"2b7d23a3-f722-47e4-85af-fe733bfc5fdc\" (UID: \"2b7d23a3-f722-47e4-85af-fe733bfc5fdc\") " Oct 13 17:49:33 crc kubenswrapper[4720]: I1013 17:49:33.192484 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b7d23a3-f722-47e4-85af-fe733bfc5fdc-kube-api-access-krqd5" (OuterVolumeSpecName: "kube-api-access-krqd5") pod "2b7d23a3-f722-47e4-85af-fe733bfc5fdc" (UID: "2b7d23a3-f722-47e4-85af-fe733bfc5fdc"). InnerVolumeSpecName "kube-api-access-krqd5". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:49:33 crc kubenswrapper[4720]: I1013 17:49:33.224532 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b7d23a3-f722-47e4-85af-fe733bfc5fdc-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "2b7d23a3-f722-47e4-85af-fe733bfc5fdc" (UID: "2b7d23a3-f722-47e4-85af-fe733bfc5fdc"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:49:33 crc kubenswrapper[4720]: I1013 17:49:33.230345 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b7d23a3-f722-47e4-85af-fe733bfc5fdc-inventory" (OuterVolumeSpecName: "inventory") pod "2b7d23a3-f722-47e4-85af-fe733bfc5fdc" (UID: "2b7d23a3-f722-47e4-85af-fe733bfc5fdc"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:49:33 crc kubenswrapper[4720]: I1013 17:49:33.286711 4720 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b7d23a3-f722-47e4-85af-fe733bfc5fdc-inventory\") on node \"crc\" DevicePath \"\"" Oct 13 17:49:33 crc kubenswrapper[4720]: I1013 17:49:33.286742 4720 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2b7d23a3-f722-47e4-85af-fe733bfc5fdc-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 13 17:49:33 crc kubenswrapper[4720]: I1013 17:49:33.286754 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-krqd5\" (UniqueName: \"kubernetes.io/projected/2b7d23a3-f722-47e4-85af-fe733bfc5fdc-kube-api-access-krqd5\") on node \"crc\" DevicePath \"\"" Oct 13 17:49:33 crc kubenswrapper[4720]: I1013 17:49:33.552917 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xqf7d" event={"ID":"2b7d23a3-f722-47e4-85af-fe733bfc5fdc","Type":"ContainerDied","Data":"23b3a2e56629d312a7650122b68abbf6fe46bc11095ab308c2a3dab51b55e95f"} Oct 13 17:49:33 crc kubenswrapper[4720]: I1013 17:49:33.552963 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="23b3a2e56629d312a7650122b68abbf6fe46bc11095ab308c2a3dab51b55e95f" Oct 13 17:49:33 crc kubenswrapper[4720]: I1013 17:49:33.553266 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xqf7d" Oct 13 17:49:33 crc kubenswrapper[4720]: I1013 17:49:33.648376 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vf6s4"] Oct 13 17:49:33 crc kubenswrapper[4720]: E1013 17:49:33.648764 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b7d23a3-f722-47e4-85af-fe733bfc5fdc" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Oct 13 17:49:33 crc kubenswrapper[4720]: I1013 17:49:33.648784 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b7d23a3-f722-47e4-85af-fe733bfc5fdc" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Oct 13 17:49:33 crc kubenswrapper[4720]: E1013 17:49:33.648805 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5034f8f1-adf9-4b90-bb73-eaa719126486" containerName="registry-server" Oct 13 17:49:33 crc kubenswrapper[4720]: I1013 17:49:33.648813 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="5034f8f1-adf9-4b90-bb73-eaa719126486" containerName="registry-server" Oct 13 17:49:33 crc kubenswrapper[4720]: E1013 17:49:33.648831 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5034f8f1-adf9-4b90-bb73-eaa719126486" containerName="extract-utilities" Oct 13 17:49:33 crc kubenswrapper[4720]: I1013 17:49:33.648837 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="5034f8f1-adf9-4b90-bb73-eaa719126486" containerName="extract-utilities" Oct 13 17:49:33 crc kubenswrapper[4720]: E1013 17:49:33.648855 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5034f8f1-adf9-4b90-bb73-eaa719126486" containerName="extract-content" Oct 13 17:49:33 crc kubenswrapper[4720]: I1013 17:49:33.648861 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="5034f8f1-adf9-4b90-bb73-eaa719126486" containerName="extract-content" Oct 13 17:49:33 crc kubenswrapper[4720]: I1013 17:49:33.649042 4720 
memory_manager.go:354] "RemoveStaleState removing state" podUID="2b7d23a3-f722-47e4-85af-fe733bfc5fdc" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Oct 13 17:49:33 crc kubenswrapper[4720]: I1013 17:49:33.649061 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="5034f8f1-adf9-4b90-bb73-eaa719126486" containerName="registry-server" Oct 13 17:49:33 crc kubenswrapper[4720]: I1013 17:49:33.649656 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vf6s4" Oct 13 17:49:33 crc kubenswrapper[4720]: I1013 17:49:33.652318 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 13 17:49:33 crc kubenswrapper[4720]: I1013 17:49:33.652382 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 13 17:49:33 crc kubenswrapper[4720]: I1013 17:49:33.652575 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l2fds" Oct 13 17:49:33 crc kubenswrapper[4720]: I1013 17:49:33.652595 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 13 17:49:33 crc kubenswrapper[4720]: I1013 17:49:33.664051 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vf6s4"] Oct 13 17:49:33 crc kubenswrapper[4720]: I1013 17:49:33.794611 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5gr6\" (UniqueName: \"kubernetes.io/projected/2dedc602-7303-4ca5-8d61-143a7975c01c-kube-api-access-b5gr6\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vf6s4\" (UID: \"2dedc602-7303-4ca5-8d61-143a7975c01c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vf6s4" Oct 13 17:49:33 crc kubenswrapper[4720]: I1013 17:49:33.794736 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2dedc602-7303-4ca5-8d61-143a7975c01c-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vf6s4\" (UID: \"2dedc602-7303-4ca5-8d61-143a7975c01c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vf6s4" Oct 13 17:49:33 crc kubenswrapper[4720]: I1013 17:49:33.794778 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2dedc602-7303-4ca5-8d61-143a7975c01c-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vf6s4\" (UID: \"2dedc602-7303-4ca5-8d61-143a7975c01c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vf6s4" Oct 13 17:49:33 crc kubenswrapper[4720]: I1013 17:49:33.896911 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5gr6\" (UniqueName: \"kubernetes.io/projected/2dedc602-7303-4ca5-8d61-143a7975c01c-kube-api-access-b5gr6\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vf6s4\" (UID: \"2dedc602-7303-4ca5-8d61-143a7975c01c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vf6s4" Oct 13 17:49:33 crc kubenswrapper[4720]: I1013 17:49:33.897235 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/2dedc602-7303-4ca5-8d61-143a7975c01c-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vf6s4\" (UID: \"2dedc602-7303-4ca5-8d61-143a7975c01c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vf6s4" Oct 13 17:49:33 crc kubenswrapper[4720]: I1013 17:49:33.897364 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2dedc602-7303-4ca5-8d61-143a7975c01c-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vf6s4\" (UID: \"2dedc602-7303-4ca5-8d61-143a7975c01c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vf6s4" Oct 13 17:49:33 crc kubenswrapper[4720]: I1013 17:49:33.903851 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2dedc602-7303-4ca5-8d61-143a7975c01c-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vf6s4\" (UID: \"2dedc602-7303-4ca5-8d61-143a7975c01c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vf6s4" Oct 13 17:49:33 crc kubenswrapper[4720]: I1013 17:49:33.903990 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2dedc602-7303-4ca5-8d61-143a7975c01c-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vf6s4\" (UID: \"2dedc602-7303-4ca5-8d61-143a7975c01c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vf6s4" Oct 13 17:49:33 crc kubenswrapper[4720]: I1013 17:49:33.916253 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5gr6\" (UniqueName: \"kubernetes.io/projected/2dedc602-7303-4ca5-8d61-143a7975c01c-kube-api-access-b5gr6\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vf6s4\" (UID: \"2dedc602-7303-4ca5-8d61-143a7975c01c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vf6s4" Oct 13 17:49:33 crc kubenswrapper[4720]: I1013 17:49:33.969900 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vf6s4" Oct 13 17:49:34 crc kubenswrapper[4720]: I1013 17:49:34.579054 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vf6s4"] Oct 13 17:49:34 crc kubenswrapper[4720]: I1013 17:49:34.586663 4720 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 13 17:49:35 crc kubenswrapper[4720]: I1013 17:49:35.577552 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vf6s4" event={"ID":"2dedc602-7303-4ca5-8d61-143a7975c01c","Type":"ContainerStarted","Data":"2eaf3c593ea365d669c3916cb8655af93569998c9161cb704e9fe96494821828"} Oct 13 17:49:35 crc kubenswrapper[4720]: I1013 17:49:35.578152 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vf6s4" event={"ID":"2dedc602-7303-4ca5-8d61-143a7975c01c","Type":"ContainerStarted","Data":"a20b92d9d11e6522399284f137a600d3593ce9c0eb5b806e0c26f416fb037f0f"} Oct 13 17:49:35 crc kubenswrapper[4720]: I1013 17:49:35.608821 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vf6s4" podStartSLOduration=1.969783084 podStartE2EDuration="2.608799034s" podCreationTimestamp="2025-10-13 17:49:33 +0000 UTC" firstStartedPulling="2025-10-13 17:49:34.586446556 +0000 UTC m=+1520.043696688" lastFinishedPulling="2025-10-13 17:49:35.225462496 +0000 UTC m=+1520.682712638" observedRunningTime="2025-10-13 17:49:35.595560463 +0000 UTC m=+1521.052810665" watchObservedRunningTime="2025-10-13 17:49:35.608799034 +0000 UTC m=+1521.066049166" Oct 13 17:49:37 crc kubenswrapper[4720]: I1013 17:49:37.168965 4720 scope.go:117] "RemoveContainer" containerID="eabc7e2122617e64bac6ae4330db40b5b4e9867a18ff888fd3f7afb5cfc64f2f" Oct 13 17:49:37 crc kubenswrapper[4720]: E1013 17:49:37.169425 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" Oct 13 17:49:42 crc kubenswrapper[4720]: I1013 17:49:42.041251 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-5jft7"] Oct 13 17:49:42 crc kubenswrapper[4720]: I1013 17:49:42.054306 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-b4hpj"] Oct 13 17:49:42 crc kubenswrapper[4720]: I1013 17:49:42.064483 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-ptmbs"] Oct 13 17:49:42 crc kubenswrapper[4720]: I1013 17:49:42.076403 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-5jft7"] Oct 13 17:49:42 crc kubenswrapper[4720]: I1013 17:49:42.085833 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-ptmbs"] Oct 13 17:49:42 crc kubenswrapper[4720]: I1013 17:49:42.097266 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-b4hpj"] Oct 13 17:49:43 crc kubenswrapper[4720]: I1013 17:49:43.179032 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="289efdb4-70d6-47fe-929d-c6c659cf5117" path="/var/lib/kubelet/pods/289efdb4-70d6-47fe-929d-c6c659cf5117/volumes" Oct 13 17:49:43 crc kubenswrapper[4720]: I1013 17:49:43.180797 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="392ec994-30b1-4ab2-b27d-fd2c59bd2eef" path="/var/lib/kubelet/pods/392ec994-30b1-4ab2-b27d-fd2c59bd2eef/volumes" Oct 13 17:49:43 crc kubenswrapper[4720]: I1013 17:49:43.181409 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="939e2da6-73bd-4929-8194-fa3d63d023bf" path="/var/lib/kubelet/pods/939e2da6-73bd-4929-8194-fa3d63d023bf/volumes" Oct 13 17:49:48 crc kubenswrapper[4720]: I1013 17:49:48.046358 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-99x2r"] Oct 13 17:49:48 crc kubenswrapper[4720]: I1013 17:49:48.054822 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-4f6d4"] Oct 13 17:49:48 crc kubenswrapper[4720]: I1013 17:49:48.064714 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-99x2r"] Oct 13 17:49:48 crc kubenswrapper[4720]: I1013 17:49:48.074008 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-4f6d4"] Oct 13 17:49:49 crc kubenswrapper[4720]: I1013 17:49:49.188432 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a20d69a3-6d9e-4066-b3d5-62eb5897451c" path="/var/lib/kubelet/pods/a20d69a3-6d9e-4066-b3d5-62eb5897451c/volumes" Oct 13 17:49:49 crc kubenswrapper[4720]: I1013 17:49:49.189732 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b035e5a0-e44b-4897-bcfa-c7112b8eee2d" path="/var/lib/kubelet/pods/b035e5a0-e44b-4897-bcfa-c7112b8eee2d/volumes" Oct 13 17:49:52 crc kubenswrapper[4720]: I1013 17:49:52.168607 4720 scope.go:117] "RemoveContainer" containerID="eabc7e2122617e64bac6ae4330db40b5b4e9867a18ff888fd3f7afb5cfc64f2f" Oct 13 17:49:52 crc kubenswrapper[4720]: E1013 17:49:52.169219 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" Oct 13 17:50:05 crc kubenswrapper[4720]: I1013 17:50:05.183338 4720 scope.go:117] "RemoveContainer" containerID="eabc7e2122617e64bac6ae4330db40b5b4e9867a18ff888fd3f7afb5cfc64f2f" Oct 13 17:50:05 crc kubenswrapper[4720]: E1013 17:50:05.184319 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" Oct 13 17:50:10 crc kubenswrapper[4720]: I1013 17:50:10.066532 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-fd8d-account-create-v5dkv"] Oct 13 17:50:10 crc kubenswrapper[4720]: I1013 17:50:10.084681 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-1089-account-create-rz4lq"] Oct 13 17:50:10 crc kubenswrapper[4720]: I1013 17:50:10.093920 4720 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/cinder-8c7e-account-create-ftbcl"] Oct 13 17:50:10 crc kubenswrapper[4720]: I1013 17:50:10.103483 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-fd8d-account-create-v5dkv"] Oct 13 17:50:10 crc kubenswrapper[4720]: I1013 17:50:10.112471 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-8c7e-account-create-ftbcl"] Oct 13 17:50:10 crc kubenswrapper[4720]: I1013 17:50:10.121168 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-1089-account-create-rz4lq"] Oct 13 17:50:11 crc kubenswrapper[4720]: I1013 17:50:11.183918 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5cab6f53-adba-4474-b8b6-195faff8e193" path="/var/lib/kubelet/pods/5cab6f53-adba-4474-b8b6-195faff8e193/volumes" Oct 13 17:50:11 crc kubenswrapper[4720]: I1013 17:50:11.186171 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77e2a869-ae9a-47cd-973e-bc3597a2365d" path="/var/lib/kubelet/pods/77e2a869-ae9a-47cd-973e-bc3597a2365d/volumes" Oct 13 17:50:11 crc kubenswrapper[4720]: I1013 17:50:11.187117 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6edabb2-1018-4d9a-b43f-2414235bbfdc" path="/var/lib/kubelet/pods/b6edabb2-1018-4d9a-b43f-2414235bbfdc/volumes" Oct 13 17:50:16 crc kubenswrapper[4720]: I1013 17:50:16.038956 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-2z4fc"] Oct 13 17:50:16 crc kubenswrapper[4720]: I1013 17:50:16.048264 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-2z4fc"] Oct 13 17:50:16 crc kubenswrapper[4720]: I1013 17:50:16.168298 4720 scope.go:117] "RemoveContainer" containerID="eabc7e2122617e64bac6ae4330db40b5b4e9867a18ff888fd3f7afb5cfc64f2f" Oct 13 17:50:16 crc kubenswrapper[4720]: E1013 17:50:16.168595 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" Oct 13 17:50:17 crc kubenswrapper[4720]: I1013 17:50:17.180492 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15d273d6-ce41-4aeb-88e1-42a1f9423737" path="/var/lib/kubelet/pods/15d273d6-ce41-4aeb-88e1-42a1f9423737/volumes" Oct 13 17:50:18 crc kubenswrapper[4720]: I1013 17:50:18.047894 4720 scope.go:117] "RemoveContainer" containerID="2b9d1689917fef9a21d52f9ea6f4db786a2a816cd2bcea0149a1c56163de5271" Oct 13 17:50:18 crc kubenswrapper[4720]: I1013 17:50:18.081542 4720 scope.go:117] "RemoveContainer" containerID="4a52864d77f90957914d2b23ad16fd2881818561c80e5d4b153da9f3e7f9ffec" Oct 13 17:50:18 crc kubenswrapper[4720]: I1013 17:50:18.138459 4720 scope.go:117] "RemoveContainer" containerID="75dcef59c7735fa127edc067f5f3fc3a1cc6c701e8cb9453bb54444af59e1a16" Oct 13 17:50:18 crc kubenswrapper[4720]: I1013 17:50:18.198746 4720 scope.go:117] "RemoveContainer" containerID="02b5c272e9cea9d9f00706c7d23a57a83f86359bf8d851d2d6e05a42964abb02" Oct 13 17:50:18 crc kubenswrapper[4720]: I1013 17:50:18.231005 4720 scope.go:117] "RemoveContainer" containerID="8f508b9803991b29ca92bc309a82aa31aac4204d4be5a41f8589fe1b3a36d365" Oct 13 17:50:18 crc kubenswrapper[4720]: I1013 17:50:18.265529 4720 scope.go:117] 
"RemoveContainer" containerID="55a9e6eaf4f40a5b8fb0a0a4bf7a94d966d9ee51c3be7c21032deb3985be3924" Oct 13 17:50:18 crc kubenswrapper[4720]: I1013 17:50:18.305868 4720 scope.go:117] "RemoveContainer" containerID="7ddce86b334fb8ab710d0db40f4fdd4b0f5396c166c8c2e50cfdc6fe7e47ab0b" Oct 13 17:50:18 crc kubenswrapper[4720]: I1013 17:50:18.353401 4720 scope.go:117] "RemoveContainer" containerID="562d51b684d6468108fdc600b7c30def921adc24bd379bd56c074de3d205d066" Oct 13 17:50:18 crc kubenswrapper[4720]: I1013 17:50:18.385623 4720 scope.go:117] "RemoveContainer" containerID="047b6fba1bf1e9677b7128d90bc39bff50e1b1504d25db5928f22242d61e3932" Oct 13 17:50:18 crc kubenswrapper[4720]: I1013 17:50:18.406330 4720 scope.go:117] "RemoveContainer" containerID="52f08e812059eb073cacc07c4dc78f199d4c5ab92775a2c50e87ad26c4841368" Oct 13 17:50:18 crc kubenswrapper[4720]: I1013 17:50:18.430219 4720 scope.go:117] "RemoveContainer" containerID="6a868f9ad6a67818916bb65e34ef887f4a262fd33b32889d9ba95be655d048e3" Oct 13 17:50:18 crc kubenswrapper[4720]: I1013 17:50:18.453554 4720 scope.go:117] "RemoveContainer" containerID="12392550598dfae26f34032162894283cd0ec232dcc69efa27258e452f1eaec7" Oct 13 17:50:30 crc kubenswrapper[4720]: I1013 17:50:30.169459 4720 scope.go:117] "RemoveContainer" containerID="eabc7e2122617e64bac6ae4330db40b5b4e9867a18ff888fd3f7afb5cfc64f2f" Oct 13 17:50:30 crc kubenswrapper[4720]: E1013 17:50:30.172115 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" Oct 13 17:50:37 crc kubenswrapper[4720]: I1013 17:50:37.042973 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-gmms8"] Oct 13 17:50:37 crc kubenswrapper[4720]: I1013 17:50:37.055264 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-gmms8"] Oct 13 17:50:37 crc kubenswrapper[4720]: I1013 17:50:37.184403 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2d841c9-2675-429d-9d02-09381a0d6f09" path="/var/lib/kubelet/pods/a2d841c9-2675-429d-9d02-09381a0d6f09/volumes" Oct 13 17:50:42 crc kubenswrapper[4720]: I1013 17:50:42.054005 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-fj7h4"] Oct 13 17:50:42 crc kubenswrapper[4720]: I1013 17:50:42.065640 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-fj7h4"] Oct 13 17:50:43 crc kubenswrapper[4720]: I1013 17:50:43.169139 4720 scope.go:117] "RemoveContainer" containerID="eabc7e2122617e64bac6ae4330db40b5b4e9867a18ff888fd3f7afb5cfc64f2f" Oct 13 17:50:43 crc kubenswrapper[4720]: E1013 17:50:43.169762 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" Oct 13 17:50:43 crc kubenswrapper[4720]: I1013 17:50:43.179505 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="69895abb-fedb-4ef1-bd37-698c2384d0b0" path="/var/lib/kubelet/pods/69895abb-fedb-4ef1-bd37-698c2384d0b0/volumes" Oct 13 17:50:49 crc kubenswrapper[4720]: I1013 17:50:49.443077 4720 generic.go:334] "Generic (PLEG): container finished" podID="2dedc602-7303-4ca5-8d61-143a7975c01c" containerID="2eaf3c593ea365d669c3916cb8655af93569998c9161cb704e9fe96494821828" exitCode=0 Oct 13 17:50:49 crc kubenswrapper[4720]: I1013 17:50:49.443255 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vf6s4" event={"ID":"2dedc602-7303-4ca5-8d61-143a7975c01c","Type":"ContainerDied","Data":"2eaf3c593ea365d669c3916cb8655af93569998c9161cb704e9fe96494821828"} Oct 13 17:50:50 crc kubenswrapper[4720]: I1013 17:50:50.904707 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vf6s4" Oct 13 17:50:51 crc kubenswrapper[4720]: I1013 17:50:51.036663 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2dedc602-7303-4ca5-8d61-143a7975c01c-ssh-key\") pod \"2dedc602-7303-4ca5-8d61-143a7975c01c\" (UID: \"2dedc602-7303-4ca5-8d61-143a7975c01c\") " Oct 13 17:50:51 crc kubenswrapper[4720]: I1013 17:50:51.036750 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5gr6\" (UniqueName: \"kubernetes.io/projected/2dedc602-7303-4ca5-8d61-143a7975c01c-kube-api-access-b5gr6\") pod \"2dedc602-7303-4ca5-8d61-143a7975c01c\" (UID: \"2dedc602-7303-4ca5-8d61-143a7975c01c\") " Oct 13 17:50:51 crc kubenswrapper[4720]: I1013 17:50:51.036895 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2dedc602-7303-4ca5-8d61-143a7975c01c-inventory\") pod \"2dedc602-7303-4ca5-8d61-143a7975c01c\" (UID: \"2dedc602-7303-4ca5-8d61-143a7975c01c\") " Oct 13 17:50:51 crc kubenswrapper[4720]: I1013 17:50:51.044945 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2dedc602-7303-4ca5-8d61-143a7975c01c-kube-api-access-b5gr6" (OuterVolumeSpecName: "kube-api-access-b5gr6") pod "2dedc602-7303-4ca5-8d61-143a7975c01c" (UID: "2dedc602-7303-4ca5-8d61-143a7975c01c"). InnerVolumeSpecName "kube-api-access-b5gr6". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:50:51 crc kubenswrapper[4720]: I1013 17:50:51.076401 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2dedc602-7303-4ca5-8d61-143a7975c01c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "2dedc602-7303-4ca5-8d61-143a7975c01c" (UID: "2dedc602-7303-4ca5-8d61-143a7975c01c"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:50:51 crc kubenswrapper[4720]: I1013 17:50:51.078921 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2dedc602-7303-4ca5-8d61-143a7975c01c-inventory" (OuterVolumeSpecName: "inventory") pod "2dedc602-7303-4ca5-8d61-143a7975c01c" (UID: "2dedc602-7303-4ca5-8d61-143a7975c01c"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:50:51 crc kubenswrapper[4720]: I1013 17:50:51.141536 4720 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2dedc602-7303-4ca5-8d61-143a7975c01c-inventory\") on node \"crc\" DevicePath \"\"" Oct 13 17:50:51 crc kubenswrapper[4720]: I1013 17:50:51.141568 4720 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2dedc602-7303-4ca5-8d61-143a7975c01c-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 13 17:50:51 crc kubenswrapper[4720]: I1013 17:50:51.141578 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b5gr6\" (UniqueName: \"kubernetes.io/projected/2dedc602-7303-4ca5-8d61-143a7975c01c-kube-api-access-b5gr6\") on node \"crc\" DevicePath \"\"" Oct 13 17:50:51 crc kubenswrapper[4720]: I1013 17:50:51.475225 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vf6s4" event={"ID":"2dedc602-7303-4ca5-8d61-143a7975c01c","Type":"ContainerDied","Data":"a20b92d9d11e6522399284f137a600d3593ce9c0eb5b806e0c26f416fb037f0f"} Oct 13 17:50:51 crc kubenswrapper[4720]: I1013 17:50:51.475276 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a20b92d9d11e6522399284f137a600d3593ce9c0eb5b806e0c26f416fb037f0f" Oct 13 17:50:51 crc kubenswrapper[4720]: I1013 17:50:51.475335 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vf6s4" Oct 13 17:50:51 crc kubenswrapper[4720]: I1013 17:50:51.583115 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-c7zjx"] Oct 13 17:50:51 crc kubenswrapper[4720]: E1013 17:50:51.583579 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dedc602-7303-4ca5-8d61-143a7975c01c" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Oct 13 17:50:51 crc kubenswrapper[4720]: I1013 17:50:51.583599 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dedc602-7303-4ca5-8d61-143a7975c01c" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Oct 13 17:50:51 crc kubenswrapper[4720]: I1013 17:50:51.583768 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="2dedc602-7303-4ca5-8d61-143a7975c01c" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Oct 13 17:50:51 crc kubenswrapper[4720]: I1013 17:50:51.584389 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-c7zjx" Oct 13 17:50:51 crc kubenswrapper[4720]: I1013 17:50:51.586391 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l2fds" Oct 13 17:50:51 crc kubenswrapper[4720]: I1013 17:50:51.586695 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 13 17:50:51 crc kubenswrapper[4720]: I1013 17:50:51.586715 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 13 17:50:51 crc kubenswrapper[4720]: I1013 17:50:51.587169 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 13 17:50:51 crc kubenswrapper[4720]: I1013 17:50:51.611408 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-c7zjx"] Oct 13 17:50:51 crc kubenswrapper[4720]: I1013 17:50:51.751919 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpw7z\" (UniqueName: \"kubernetes.io/projected/3d96fecb-1b5a-4d39-8f6f-82755c63a757-kube-api-access-gpw7z\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-c7zjx\" (UID: \"3d96fecb-1b5a-4d39-8f6f-82755c63a757\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-c7zjx" Oct 13 17:50:51 crc kubenswrapper[4720]: I1013 17:50:51.752163 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3d96fecb-1b5a-4d39-8f6f-82755c63a757-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-c7zjx\" (UID: \"3d96fecb-1b5a-4d39-8f6f-82755c63a757\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-c7zjx" Oct 13 17:50:51 crc kubenswrapper[4720]: I1013 17:50:51.752385 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3d96fecb-1b5a-4d39-8f6f-82755c63a757-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-c7zjx\" (UID: \"3d96fecb-1b5a-4d39-8f6f-82755c63a757\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-c7zjx" Oct 13 17:50:51 crc kubenswrapper[4720]: I1013 17:50:51.854249 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3d96fecb-1b5a-4d39-8f6f-82755c63a757-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-c7zjx\" (UID: \"3d96fecb-1b5a-4d39-8f6f-82755c63a757\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-c7zjx" Oct 13 17:50:51 crc kubenswrapper[4720]: I1013 17:50:51.854381 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpw7z\" (UniqueName: \"kubernetes.io/projected/3d96fecb-1b5a-4d39-8f6f-82755c63a757-kube-api-access-gpw7z\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-c7zjx\" (UID: \"3d96fecb-1b5a-4d39-8f6f-82755c63a757\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-c7zjx" Oct 13 17:50:51 crc kubenswrapper[4720]: I1013 17:50:51.854564 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3d96fecb-1b5a-4d39-8f6f-82755c63a757-inventory\") pod 
\"validate-network-edpm-deployment-openstack-edpm-ipam-c7zjx\" (UID: \"3d96fecb-1b5a-4d39-8f6f-82755c63a757\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-c7zjx" Oct 13 17:50:51 crc kubenswrapper[4720]: I1013 17:50:51.861364 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3d96fecb-1b5a-4d39-8f6f-82755c63a757-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-c7zjx\" (UID: \"3d96fecb-1b5a-4d39-8f6f-82755c63a757\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-c7zjx" Oct 13 17:50:51 crc kubenswrapper[4720]: I1013 17:50:51.863065 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3d96fecb-1b5a-4d39-8f6f-82755c63a757-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-c7zjx\" (UID: \"3d96fecb-1b5a-4d39-8f6f-82755c63a757\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-c7zjx" Oct 13 17:50:51 crc kubenswrapper[4720]: I1013 17:50:51.889678 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpw7z\" (UniqueName: \"kubernetes.io/projected/3d96fecb-1b5a-4d39-8f6f-82755c63a757-kube-api-access-gpw7z\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-c7zjx\" (UID: \"3d96fecb-1b5a-4d39-8f6f-82755c63a757\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-c7zjx" Oct 13 17:50:51 crc kubenswrapper[4720]: I1013 17:50:51.910537 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-c7zjx" Oct 13 17:50:52 crc kubenswrapper[4720]: I1013 17:50:52.246079 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-c7zjx"] Oct 13 17:50:52 crc kubenswrapper[4720]: W1013 17:50:52.247413 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3d96fecb_1b5a_4d39_8f6f_82755c63a757.slice/crio-c4ce119e9546a2095054024aaafea062ebfc280152a2bd4cad299a5039c35305 WatchSource:0}: Error finding container c4ce119e9546a2095054024aaafea062ebfc280152a2bd4cad299a5039c35305: Status 404 returned error can't find the container with id c4ce119e9546a2095054024aaafea062ebfc280152a2bd4cad299a5039c35305 Oct 13 17:50:52 crc kubenswrapper[4720]: I1013 17:50:52.489446 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-c7zjx" event={"ID":"3d96fecb-1b5a-4d39-8f6f-82755c63a757","Type":"ContainerStarted","Data":"c4ce119e9546a2095054024aaafea062ebfc280152a2bd4cad299a5039c35305"} Oct 13 17:50:53 crc kubenswrapper[4720]: I1013 17:50:53.499611 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-c7zjx" event={"ID":"3d96fecb-1b5a-4d39-8f6f-82755c63a757","Type":"ContainerStarted","Data":"6d5191e264941108d21cb0b1ebf2011f536f531193c66e15c42dc4ba8a35e0db"} Oct 13 17:50:53 crc kubenswrapper[4720]: I1013 17:50:53.528279 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-c7zjx" podStartSLOduration=2.007488658 podStartE2EDuration="2.528260723s" podCreationTimestamp="2025-10-13 17:50:51 +0000 UTC" firstStartedPulling="2025-10-13 17:50:52.25078211 +0000 UTC m=+1597.708032242" 
lastFinishedPulling="2025-10-13 17:50:52.771554175 +0000 UTC m=+1598.228804307" observedRunningTime="2025-10-13 17:50:53.524715982 +0000 UTC m=+1598.981966114" watchObservedRunningTime="2025-10-13 17:50:53.528260723 +0000 UTC m=+1598.985510855" Oct 13 17:50:55 crc kubenswrapper[4720]: I1013 17:50:55.175292 4720 scope.go:117] "RemoveContainer" containerID="eabc7e2122617e64bac6ae4330db40b5b4e9867a18ff888fd3f7afb5cfc64f2f" Oct 13 17:50:55 crc kubenswrapper[4720]: E1013 17:50:55.175904 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" Oct 13 17:50:58 crc kubenswrapper[4720]: I1013 17:50:58.562801 4720 generic.go:334] "Generic (PLEG): container finished" podID="3d96fecb-1b5a-4d39-8f6f-82755c63a757" containerID="6d5191e264941108d21cb0b1ebf2011f536f531193c66e15c42dc4ba8a35e0db" exitCode=0 Oct 13 17:50:58 crc kubenswrapper[4720]: I1013 17:50:58.562950 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-c7zjx" event={"ID":"3d96fecb-1b5a-4d39-8f6f-82755c63a757","Type":"ContainerDied","Data":"6d5191e264941108d21cb0b1ebf2011f536f531193c66e15c42dc4ba8a35e0db"} Oct 13 17:51:00 crc kubenswrapper[4720]: I1013 17:51:00.070805 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-c7zjx" Oct 13 17:51:00 crc kubenswrapper[4720]: I1013 17:51:00.223479 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gpw7z\" (UniqueName: \"kubernetes.io/projected/3d96fecb-1b5a-4d39-8f6f-82755c63a757-kube-api-access-gpw7z\") pod \"3d96fecb-1b5a-4d39-8f6f-82755c63a757\" (UID: \"3d96fecb-1b5a-4d39-8f6f-82755c63a757\") " Oct 13 17:51:00 crc kubenswrapper[4720]: I1013 17:51:00.223567 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3d96fecb-1b5a-4d39-8f6f-82755c63a757-inventory\") pod \"3d96fecb-1b5a-4d39-8f6f-82755c63a757\" (UID: \"3d96fecb-1b5a-4d39-8f6f-82755c63a757\") " Oct 13 17:51:00 crc kubenswrapper[4720]: I1013 17:51:00.223774 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3d96fecb-1b5a-4d39-8f6f-82755c63a757-ssh-key\") pod \"3d96fecb-1b5a-4d39-8f6f-82755c63a757\" (UID: \"3d96fecb-1b5a-4d39-8f6f-82755c63a757\") " Oct 13 17:51:00 crc kubenswrapper[4720]: I1013 17:51:00.229029 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d96fecb-1b5a-4d39-8f6f-82755c63a757-kube-api-access-gpw7z" (OuterVolumeSpecName: "kube-api-access-gpw7z") pod "3d96fecb-1b5a-4d39-8f6f-82755c63a757" (UID: "3d96fecb-1b5a-4d39-8f6f-82755c63a757"). InnerVolumeSpecName "kube-api-access-gpw7z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:51:00 crc kubenswrapper[4720]: I1013 17:51:00.269717 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d96fecb-1b5a-4d39-8f6f-82755c63a757-inventory" (OuterVolumeSpecName: "inventory") pod "3d96fecb-1b5a-4d39-8f6f-82755c63a757" (UID: "3d96fecb-1b5a-4d39-8f6f-82755c63a757"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:51:00 crc kubenswrapper[4720]: I1013 17:51:00.272799 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d96fecb-1b5a-4d39-8f6f-82755c63a757-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "3d96fecb-1b5a-4d39-8f6f-82755c63a757" (UID: "3d96fecb-1b5a-4d39-8f6f-82755c63a757"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:51:00 crc kubenswrapper[4720]: I1013 17:51:00.326139 4720 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3d96fecb-1b5a-4d39-8f6f-82755c63a757-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 13 17:51:00 crc kubenswrapper[4720]: I1013 17:51:00.326206 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gpw7z\" (UniqueName: \"kubernetes.io/projected/3d96fecb-1b5a-4d39-8f6f-82755c63a757-kube-api-access-gpw7z\") on node \"crc\" DevicePath \"\"" Oct 13 17:51:00 crc kubenswrapper[4720]: I1013 17:51:00.326226 4720 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3d96fecb-1b5a-4d39-8f6f-82755c63a757-inventory\") on node \"crc\" DevicePath \"\"" Oct 13 17:51:00 crc kubenswrapper[4720]: I1013 17:51:00.582780 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-c7zjx" event={"ID":"3d96fecb-1b5a-4d39-8f6f-82755c63a757","Type":"ContainerDied","Data":"c4ce119e9546a2095054024aaafea062ebfc280152a2bd4cad299a5039c35305"} Oct 13 17:51:00 crc kubenswrapper[4720]: I1013 17:51:00.582818 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4ce119e9546a2095054024aaafea062ebfc280152a2bd4cad299a5039c35305" Oct 13 17:51:00 crc kubenswrapper[4720]: I1013 17:51:00.582902 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-c7zjx" Oct 13 17:51:00 crc kubenswrapper[4720]: I1013 17:51:00.683996 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-7nx42"] Oct 13 17:51:00 crc kubenswrapper[4720]: E1013 17:51:00.684433 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d96fecb-1b5a-4d39-8f6f-82755c63a757" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Oct 13 17:51:00 crc kubenswrapper[4720]: I1013 17:51:00.684450 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d96fecb-1b5a-4d39-8f6f-82755c63a757" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Oct 13 17:51:00 crc kubenswrapper[4720]: I1013 17:51:00.684643 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d96fecb-1b5a-4d39-8f6f-82755c63a757" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Oct 13 17:51:00 crc kubenswrapper[4720]: I1013 17:51:00.685441 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7nx42" Oct 13 17:51:00 crc kubenswrapper[4720]: I1013 17:51:00.688382 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 13 17:51:00 crc kubenswrapper[4720]: I1013 17:51:00.688444 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l2fds" Oct 13 17:51:00 crc kubenswrapper[4720]: I1013 17:51:00.689806 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 13 17:51:00 crc kubenswrapper[4720]: I1013 17:51:00.689982 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 13 17:51:00 crc kubenswrapper[4720]: I1013 17:51:00.697332 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-7nx42"] Oct 13 17:51:00 crc kubenswrapper[4720]: I1013 17:51:00.835953 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8294bbb9-8b70-411f-af1f-cca84d7c5dbb-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7nx42\" (UID: \"8294bbb9-8b70-411f-af1f-cca84d7c5dbb\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7nx42" Oct 13 17:51:00 crc kubenswrapper[4720]: I1013 17:51:00.835985 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8294bbb9-8b70-411f-af1f-cca84d7c5dbb-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7nx42\" (UID: \"8294bbb9-8b70-411f-af1f-cca84d7c5dbb\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7nx42" Oct 13 17:51:00 crc kubenswrapper[4720]: I1013 17:51:00.836046 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8mxh\" (UniqueName: \"kubernetes.io/projected/8294bbb9-8b70-411f-af1f-cca84d7c5dbb-kube-api-access-s8mxh\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7nx42\" (UID: \"8294bbb9-8b70-411f-af1f-cca84d7c5dbb\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7nx42" Oct 13 17:51:00 crc kubenswrapper[4720]: I1013 17:51:00.938653 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8mxh\" (UniqueName: \"kubernetes.io/projected/8294bbb9-8b70-411f-af1f-cca84d7c5dbb-kube-api-access-s8mxh\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7nx42\" (UID: \"8294bbb9-8b70-411f-af1f-cca84d7c5dbb\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7nx42" Oct 13 17:51:00 crc kubenswrapper[4720]: I1013 17:51:00.939137 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8294bbb9-8b70-411f-af1f-cca84d7c5dbb-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7nx42\" (UID: \"8294bbb9-8b70-411f-af1f-cca84d7c5dbb\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7nx42" Oct 13 17:51:00 crc kubenswrapper[4720]: I1013 17:51:00.939163 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8294bbb9-8b70-411f-af1f-cca84d7c5dbb-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7nx42\" (UID: 
\"8294bbb9-8b70-411f-af1f-cca84d7c5dbb\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7nx42" Oct 13 17:51:00 crc kubenswrapper[4720]: I1013 17:51:00.944488 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8294bbb9-8b70-411f-af1f-cca84d7c5dbb-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7nx42\" (UID: \"8294bbb9-8b70-411f-af1f-cca84d7c5dbb\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7nx42" Oct 13 17:51:00 crc kubenswrapper[4720]: I1013 17:51:00.944488 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8294bbb9-8b70-411f-af1f-cca84d7c5dbb-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7nx42\" (UID: \"8294bbb9-8b70-411f-af1f-cca84d7c5dbb\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7nx42" Oct 13 17:51:00 crc kubenswrapper[4720]: I1013 17:51:00.966941 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8mxh\" (UniqueName: \"kubernetes.io/projected/8294bbb9-8b70-411f-af1f-cca84d7c5dbb-kube-api-access-s8mxh\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7nx42\" (UID: \"8294bbb9-8b70-411f-af1f-cca84d7c5dbb\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7nx42" Oct 13 17:51:01 crc kubenswrapper[4720]: I1013 17:51:01.040614 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7nx42" Oct 13 17:51:01 crc kubenswrapper[4720]: I1013 17:51:01.606211 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-7nx42"] Oct 13 17:51:02 crc kubenswrapper[4720]: I1013 17:51:02.034964 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-7qdd5"] Oct 13 17:51:02 crc kubenswrapper[4720]: I1013 17:51:02.052640 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-7qdd5"] Oct 13 17:51:02 crc kubenswrapper[4720]: I1013 17:51:02.607260 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7nx42" event={"ID":"8294bbb9-8b70-411f-af1f-cca84d7c5dbb","Type":"ContainerStarted","Data":"a78accfdc35c3f321e17a6a450a95669c38adc469c295a2b72cf6d14e0e45d16"} Oct 13 17:51:02 crc kubenswrapper[4720]: I1013 17:51:02.607602 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7nx42" event={"ID":"8294bbb9-8b70-411f-af1f-cca84d7c5dbb","Type":"ContainerStarted","Data":"289fbcf5200d81475e2e87f90a1c0a1aa15eedfc7604c78a751198e402d38db5"} Oct 13 17:51:02 crc kubenswrapper[4720]: I1013 17:51:02.632572 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7nx42" podStartSLOduration=2.170161378 podStartE2EDuration="2.63254164s" podCreationTimestamp="2025-10-13 17:51:00 +0000 UTC" firstStartedPulling="2025-10-13 17:51:01.616370313 +0000 UTC m=+1607.073620435" lastFinishedPulling="2025-10-13 17:51:02.078750555 +0000 UTC m=+1607.536000697" observedRunningTime="2025-10-13 17:51:02.625215552 +0000 UTC m=+1608.082465694" watchObservedRunningTime="2025-10-13 17:51:02.63254164 +0000 UTC m=+1608.089791812" Oct 13 17:51:03 crc kubenswrapper[4720]: I1013 17:51:03.042462 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-cell0-db-create-9rbjx"] Oct 13 17:51:03 crc kubenswrapper[4720]: I1013 17:51:03.062238 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-9xpdq"] Oct 13 17:51:03 crc kubenswrapper[4720]: I1013 17:51:03.074661 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-9dcbx"] Oct 13 17:51:03 crc kubenswrapper[4720]: I1013 17:51:03.084720 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-9xpdq"] Oct 13 17:51:03 crc kubenswrapper[4720]: I1013 17:51:03.094455 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-9dcbx"] Oct 13 17:51:03 crc kubenswrapper[4720]: I1013 17:51:03.105415 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-9rbjx"] Oct 13 17:51:03 crc kubenswrapper[4720]: I1013 17:51:03.184331 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4164e6d6-57c4-45f9-b835-5f8cdfbb5044" path="/var/lib/kubelet/pods/4164e6d6-57c4-45f9-b835-5f8cdfbb5044/volumes" Oct 13 17:51:03 crc kubenswrapper[4720]: I1013 17:51:03.184901 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="887aa549-67e8-4d03-acba-dede202496db" path="/var/lib/kubelet/pods/887aa549-67e8-4d03-acba-dede202496db/volumes" Oct 13 17:51:03 crc kubenswrapper[4720]: I1013 17:51:03.185567 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9259aed9-a7a7-4b45-baff-419fef6c83a3" path="/var/lib/kubelet/pods/9259aed9-a7a7-4b45-baff-419fef6c83a3/volumes" Oct 13 17:51:03 crc kubenswrapper[4720]: I1013 17:51:03.186113 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2353f5d-08a1-4365-b77b-321e29ade356" path="/var/lib/kubelet/pods/d2353f5d-08a1-4365-b77b-321e29ade356/volumes" Oct 13 17:51:06 crc kubenswrapper[4720]: I1013 17:51:06.035537 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-2htwh"] Oct 13 17:51:06 crc kubenswrapper[4720]: I1013 17:51:06.046857 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-2htwh"] Oct 13 17:51:07 crc kubenswrapper[4720]: I1013 17:51:07.196579 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="444e35b8-1d2a-4d83-be6c-2184ae0e3110" path="/var/lib/kubelet/pods/444e35b8-1d2a-4d83-be6c-2184ae0e3110/volumes" Oct 13 17:51:08 crc kubenswrapper[4720]: I1013 17:51:08.168391 4720 scope.go:117] "RemoveContainer" containerID="eabc7e2122617e64bac6ae4330db40b5b4e9867a18ff888fd3f7afb5cfc64f2f" Oct 13 17:51:08 crc kubenswrapper[4720]: E1013 17:51:08.169221 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" Oct 13 17:51:18 crc kubenswrapper[4720]: I1013 17:51:18.724784 4720 scope.go:117] "RemoveContainer" containerID="78769fb9f2065139b88842f2335a23af9000cff6a3514aad858132138aa5cbf2" Oct 13 17:51:18 crc kubenswrapper[4720]: I1013 17:51:18.785777 4720 scope.go:117] "RemoveContainer" containerID="938f13af43552e262b225bb2e1a67c6189c6054eca9ba9bcae1659d5faa0085e" Oct 13 17:51:18 crc kubenswrapper[4720]: I1013 17:51:18.829281 4720 scope.go:117] "RemoveContainer" 
containerID="ba6eb3aa77b3fdc8ec12a4d7e473fe166dc435b469d0dd0061cd6b618f7f5dfa" Oct 13 17:51:18 crc kubenswrapper[4720]: I1013 17:51:18.883722 4720 scope.go:117] "RemoveContainer" containerID="d55f1b63f3bfa6e9b56a3a991d6ddfd7117843ddcf1a45ec16a86f8b0efbc569" Oct 13 17:51:18 crc kubenswrapper[4720]: I1013 17:51:18.939956 4720 scope.go:117] "RemoveContainer" containerID="3defcbd6df54ceee0fd0568c4a93d350e5180a10ac4fbe28d11fde627ac1da9f" Oct 13 17:51:18 crc kubenswrapper[4720]: I1013 17:51:18.964716 4720 scope.go:117] "RemoveContainer" containerID="b12eda37cb379a954f446cc232021f40074563481f329d306d32f89488d64fdb" Oct 13 17:51:19 crc kubenswrapper[4720]: I1013 17:51:19.011947 4720 scope.go:117] "RemoveContainer" containerID="32ab64b85d3c54428b46ecbbbba11664fa8982c4d4cf2a5c971eee79b23cf8ac" Oct 13 17:51:19 crc kubenswrapper[4720]: I1013 17:51:19.041225 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-3646-account-create-t2hq4"] Oct 13 17:51:19 crc kubenswrapper[4720]: I1013 17:51:19.054214 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-3646-account-create-t2hq4"] Oct 13 17:51:19 crc kubenswrapper[4720]: I1013 17:51:19.181537 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb3ca24e-8779-41d4-b8cc-8a6bc524e81d" path="/var/lib/kubelet/pods/bb3ca24e-8779-41d4-b8cc-8a6bc524e81d/volumes" Oct 13 17:51:20 crc kubenswrapper[4720]: I1013 17:51:20.036844 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-557a-account-create-m5cvn"] Oct 13 17:51:20 crc kubenswrapper[4720]: I1013 17:51:20.054989 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-1187-account-create-kf7dn"] Oct 13 17:51:20 crc kubenswrapper[4720]: I1013 17:51:20.069318 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-1187-account-create-kf7dn"] Oct 13 17:51:20 crc kubenswrapper[4720]: I1013 17:51:20.077861 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-557a-account-create-m5cvn"] Oct 13 17:51:21 crc kubenswrapper[4720]: I1013 17:51:21.195101 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bdf4571-07ae-4672-937b-f445bf5578ad" path="/var/lib/kubelet/pods/0bdf4571-07ae-4672-937b-f445bf5578ad/volumes" Oct 13 17:51:21 crc kubenswrapper[4720]: I1013 17:51:21.196834 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe437be8-b701-42f7-9aae-ba75ccc6be20" path="/var/lib/kubelet/pods/fe437be8-b701-42f7-9aae-ba75ccc6be20/volumes" Oct 13 17:51:22 crc kubenswrapper[4720]: I1013 17:51:22.168998 4720 scope.go:117] "RemoveContainer" containerID="eabc7e2122617e64bac6ae4330db40b5b4e9867a18ff888fd3f7afb5cfc64f2f" Oct 13 17:51:22 crc kubenswrapper[4720]: E1013 17:51:22.169479 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" Oct 13 17:51:35 crc kubenswrapper[4720]: I1013 17:51:35.197483 4720 scope.go:117] "RemoveContainer" containerID="eabc7e2122617e64bac6ae4330db40b5b4e9867a18ff888fd3f7afb5cfc64f2f" Oct 13 17:51:35 crc kubenswrapper[4720]: E1013 17:51:35.198352 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" Oct 13 17:51:44 crc kubenswrapper[4720]: I1013 17:51:44.055306 4720 generic.go:334] "Generic (PLEG): container finished" podID="8294bbb9-8b70-411f-af1f-cca84d7c5dbb" containerID="a78accfdc35c3f321e17a6a450a95669c38adc469c295a2b72cf6d14e0e45d16" exitCode=0 Oct 13 17:51:44 crc kubenswrapper[4720]: I1013 17:51:44.055399 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7nx42" event={"ID":"8294bbb9-8b70-411f-af1f-cca84d7c5dbb","Type":"ContainerDied","Data":"a78accfdc35c3f321e17a6a450a95669c38adc469c295a2b72cf6d14e0e45d16"} Oct 13 17:51:45 crc kubenswrapper[4720]: I1013 17:51:45.543465 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7nx42" Oct 13 17:51:45 crc kubenswrapper[4720]: I1013 17:51:45.717739 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8294bbb9-8b70-411f-af1f-cca84d7c5dbb-inventory\") pod \"8294bbb9-8b70-411f-af1f-cca84d7c5dbb\" (UID: \"8294bbb9-8b70-411f-af1f-cca84d7c5dbb\") " Oct 13 17:51:45 crc kubenswrapper[4720]: I1013 17:51:45.718335 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s8mxh\" (UniqueName: \"kubernetes.io/projected/8294bbb9-8b70-411f-af1f-cca84d7c5dbb-kube-api-access-s8mxh\") pod \"8294bbb9-8b70-411f-af1f-cca84d7c5dbb\" (UID: \"8294bbb9-8b70-411f-af1f-cca84d7c5dbb\") " Oct 13 17:51:45 crc kubenswrapper[4720]: I1013 17:51:45.719066 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8294bbb9-8b70-411f-af1f-cca84d7c5dbb-ssh-key\") pod \"8294bbb9-8b70-411f-af1f-cca84d7c5dbb\" (UID: \"8294bbb9-8b70-411f-af1f-cca84d7c5dbb\") " Oct 13 17:51:45 crc kubenswrapper[4720]: I1013 17:51:45.725689 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8294bbb9-8b70-411f-af1f-cca84d7c5dbb-kube-api-access-s8mxh" (OuterVolumeSpecName: "kube-api-access-s8mxh") pod "8294bbb9-8b70-411f-af1f-cca84d7c5dbb" (UID: "8294bbb9-8b70-411f-af1f-cca84d7c5dbb"). InnerVolumeSpecName "kube-api-access-s8mxh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:51:45 crc kubenswrapper[4720]: I1013 17:51:45.746914 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8294bbb9-8b70-411f-af1f-cca84d7c5dbb-inventory" (OuterVolumeSpecName: "inventory") pod "8294bbb9-8b70-411f-af1f-cca84d7c5dbb" (UID: "8294bbb9-8b70-411f-af1f-cca84d7c5dbb"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:51:45 crc kubenswrapper[4720]: I1013 17:51:45.753327 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8294bbb9-8b70-411f-af1f-cca84d7c5dbb-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "8294bbb9-8b70-411f-af1f-cca84d7c5dbb" (UID: "8294bbb9-8b70-411f-af1f-cca84d7c5dbb"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:51:45 crc kubenswrapper[4720]: I1013 17:51:45.822662 4720 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8294bbb9-8b70-411f-af1f-cca84d7c5dbb-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 13 17:51:45 crc kubenswrapper[4720]: I1013 17:51:45.822724 4720 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8294bbb9-8b70-411f-af1f-cca84d7c5dbb-inventory\") on node \"crc\" DevicePath \"\"" Oct 13 17:51:45 crc kubenswrapper[4720]: I1013 17:51:45.822824 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s8mxh\" (UniqueName: \"kubernetes.io/projected/8294bbb9-8b70-411f-af1f-cca84d7c5dbb-kube-api-access-s8mxh\") on node \"crc\" DevicePath \"\"" Oct 13 17:51:46 crc kubenswrapper[4720]: I1013 17:51:46.080709 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7nx42" event={"ID":"8294bbb9-8b70-411f-af1f-cca84d7c5dbb","Type":"ContainerDied","Data":"289fbcf5200d81475e2e87f90a1c0a1aa15eedfc7604c78a751198e402d38db5"} Oct 13 17:51:46 crc kubenswrapper[4720]: I1013 17:51:46.081103 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="289fbcf5200d81475e2e87f90a1c0a1aa15eedfc7604c78a751198e402d38db5" Oct 13 17:51:46 crc kubenswrapper[4720]: I1013 17:51:46.080763 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7nx42" Oct 13 17:51:46 crc kubenswrapper[4720]: I1013 17:51:46.185290 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-dtplp"] Oct 13 17:51:46 crc kubenswrapper[4720]: E1013 17:51:46.187691 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8294bbb9-8b70-411f-af1f-cca84d7c5dbb" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Oct 13 17:51:46 crc kubenswrapper[4720]: I1013 17:51:46.187722 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="8294bbb9-8b70-411f-af1f-cca84d7c5dbb" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Oct 13 17:51:46 crc kubenswrapper[4720]: I1013 17:51:46.188019 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="8294bbb9-8b70-411f-af1f-cca84d7c5dbb" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Oct 13 17:51:46 crc kubenswrapper[4720]: I1013 17:51:46.190108 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-dtplp" Oct 13 17:51:46 crc kubenswrapper[4720]: I1013 17:51:46.191963 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 13 17:51:46 crc kubenswrapper[4720]: I1013 17:51:46.191989 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 13 17:51:46 crc kubenswrapper[4720]: I1013 17:51:46.193452 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l2fds" Oct 13 17:51:46 crc kubenswrapper[4720]: I1013 17:51:46.193650 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 13 17:51:46 crc kubenswrapper[4720]: I1013 17:51:46.206758 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-dtplp"] Oct 13 17:51:46 crc kubenswrapper[4720]: I1013 17:51:46.230998 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4cjx\" (UniqueName: \"kubernetes.io/projected/41f1773b-8761-4e7e-bcfe-853ca5977b3b-kube-api-access-p4cjx\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-dtplp\" (UID: \"41f1773b-8761-4e7e-bcfe-853ca5977b3b\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-dtplp" Oct 13 17:51:46 crc kubenswrapper[4720]: I1013 17:51:46.231154 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/41f1773b-8761-4e7e-bcfe-853ca5977b3b-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-dtplp\" (UID: \"41f1773b-8761-4e7e-bcfe-853ca5977b3b\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-dtplp" Oct 13 17:51:46 crc kubenswrapper[4720]: I1013 17:51:46.231223 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/41f1773b-8761-4e7e-bcfe-853ca5977b3b-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-dtplp\" (UID: \"41f1773b-8761-4e7e-bcfe-853ca5977b3b\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-dtplp" Oct 13 17:51:46 crc kubenswrapper[4720]: I1013 17:51:46.333392 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4cjx\" (UniqueName: \"kubernetes.io/projected/41f1773b-8761-4e7e-bcfe-853ca5977b3b-kube-api-access-p4cjx\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-dtplp\" (UID: \"41f1773b-8761-4e7e-bcfe-853ca5977b3b\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-dtplp" Oct 13 17:51:46 crc kubenswrapper[4720]: I1013 17:51:46.333568 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/41f1773b-8761-4e7e-bcfe-853ca5977b3b-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-dtplp\" (UID: \"41f1773b-8761-4e7e-bcfe-853ca5977b3b\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-dtplp" Oct 13 17:51:46 crc kubenswrapper[4720]: I1013 17:51:46.333607 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/41f1773b-8761-4e7e-bcfe-853ca5977b3b-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-dtplp\" 
(UID: \"41f1773b-8761-4e7e-bcfe-853ca5977b3b\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-dtplp" Oct 13 17:51:46 crc kubenswrapper[4720]: I1013 17:51:46.353601 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/41f1773b-8761-4e7e-bcfe-853ca5977b3b-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-dtplp\" (UID: \"41f1773b-8761-4e7e-bcfe-853ca5977b3b\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-dtplp" Oct 13 17:51:46 crc kubenswrapper[4720]: I1013 17:51:46.353709 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/41f1773b-8761-4e7e-bcfe-853ca5977b3b-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-dtplp\" (UID: \"41f1773b-8761-4e7e-bcfe-853ca5977b3b\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-dtplp" Oct 13 17:51:46 crc kubenswrapper[4720]: I1013 17:51:46.359020 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4cjx\" (UniqueName: \"kubernetes.io/projected/41f1773b-8761-4e7e-bcfe-853ca5977b3b-kube-api-access-p4cjx\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-dtplp\" (UID: \"41f1773b-8761-4e7e-bcfe-853ca5977b3b\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-dtplp" Oct 13 17:51:46 crc kubenswrapper[4720]: I1013 17:51:46.512520 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-dtplp" Oct 13 17:51:47 crc kubenswrapper[4720]: I1013 17:51:47.107369 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-dtplp"] Oct 13 17:51:48 crc kubenswrapper[4720]: I1013 17:51:48.101021 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-dtplp" event={"ID":"41f1773b-8761-4e7e-bcfe-853ca5977b3b","Type":"ContainerStarted","Data":"58a4cb4d42a495f70cbd791c46cf7afedbbb8acd174654995b79437d9cc896e6"} Oct 13 17:51:48 crc kubenswrapper[4720]: I1013 17:51:48.102602 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-dtplp" event={"ID":"41f1773b-8761-4e7e-bcfe-853ca5977b3b","Type":"ContainerStarted","Data":"669bcf6715088cdfad243f5411537041188bf73b10f8ad9764f73a541c3ba13a"} Oct 13 17:51:48 crc kubenswrapper[4720]: I1013 17:51:48.130464 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-dtplp" podStartSLOduration=1.498512375 podStartE2EDuration="2.130440672s" podCreationTimestamp="2025-10-13 17:51:46 +0000 UTC" firstStartedPulling="2025-10-13 17:51:47.113506225 +0000 UTC m=+1652.570756357" lastFinishedPulling="2025-10-13 17:51:47.745434522 +0000 UTC m=+1653.202684654" observedRunningTime="2025-10-13 17:51:48.119422588 +0000 UTC m=+1653.576672740" watchObservedRunningTime="2025-10-13 17:51:48.130440672 +0000 UTC m=+1653.587690814" Oct 13 17:51:49 crc kubenswrapper[4720]: I1013 17:51:49.168272 4720 scope.go:117] "RemoveContainer" containerID="eabc7e2122617e64bac6ae4330db40b5b4e9867a18ff888fd3f7afb5cfc64f2f" Oct 13 17:51:49 crc kubenswrapper[4720]: E1013 17:51:49.168546 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" Oct 13 17:51:50 crc kubenswrapper[4720]: I1013 17:51:50.045406 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-5zp5c"] Oct 13 17:51:50 crc kubenswrapper[4720]: I1013 17:51:50.052890 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-5zp5c"] Oct 13 17:51:51 crc kubenswrapper[4720]: I1013 17:51:51.189714 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6dabb89f-c7f4-46e9-a7e8-0722a6f73bed" path="/var/lib/kubelet/pods/6dabb89f-c7f4-46e9-a7e8-0722a6f73bed/volumes" Oct 13 17:52:04 crc kubenswrapper[4720]: I1013 17:52:04.168887 4720 scope.go:117] "RemoveContainer" containerID="eabc7e2122617e64bac6ae4330db40b5b4e9867a18ff888fd3f7afb5cfc64f2f" Oct 13 17:52:04 crc kubenswrapper[4720]: E1013 17:52:04.170052 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" Oct 13 17:52:11 crc kubenswrapper[4720]: I1013 17:52:11.062341 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-5hfwq"] Oct 13 17:52:11 crc kubenswrapper[4720]: I1013 17:52:11.070433 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-5hfwq"] Oct 13 17:52:11 crc kubenswrapper[4720]: I1013 17:52:11.186374 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05ec558f-1d81-4986-a415-06281c3cff62" path="/var/lib/kubelet/pods/05ec558f-1d81-4986-a415-06281c3cff62/volumes" Oct 13 17:52:12 crc kubenswrapper[4720]: I1013 17:52:12.030698 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-4lv7c"] Oct 13 17:52:12 crc kubenswrapper[4720]: I1013 17:52:12.049401 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-4lv7c"] Oct 13 17:52:13 crc kubenswrapper[4720]: I1013 17:52:13.179503 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b0268da-5ad5-40c5-8e86-4e99da5245e8" path="/var/lib/kubelet/pods/2b0268da-5ad5-40c5-8e86-4e99da5245e8/volumes" Oct 13 17:52:19 crc kubenswrapper[4720]: I1013 17:52:19.151854 4720 scope.go:117] "RemoveContainer" containerID="1d7425afd8731fb7f75336af7275836c0fec787e6fa79a5dab2675615665268d" Oct 13 17:52:19 crc kubenswrapper[4720]: I1013 17:52:19.168702 4720 scope.go:117] "RemoveContainer" containerID="eabc7e2122617e64bac6ae4330db40b5b4e9867a18ff888fd3f7afb5cfc64f2f" Oct 13 17:52:19 crc kubenswrapper[4720]: E1013 17:52:19.169265 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" Oct 13 
17:52:19 crc kubenswrapper[4720]: I1013 17:52:19.226739 4720 scope.go:117] "RemoveContainer" containerID="75f0c0b1d96c7216334aca5291634900346ca7d7c6f02397a4e800227cdaba71" Oct 13 17:52:19 crc kubenswrapper[4720]: I1013 17:52:19.270305 4720 scope.go:117] "RemoveContainer" containerID="c0ea5f1d207aed801988ac349addd0169004a654dd070fa02ba0427a5cd4e026" Oct 13 17:52:19 crc kubenswrapper[4720]: I1013 17:52:19.309920 4720 scope.go:117] "RemoveContainer" containerID="bc4b5f0abe9e521c901aaaee076f03cd7514c8c7f4dc44f1a017d8ca5f892c3c" Oct 13 17:52:19 crc kubenswrapper[4720]: I1013 17:52:19.368695 4720 scope.go:117] "RemoveContainer" containerID="8902f6642a3e92b7dfef5171683559267ebcae0d4c7ce10ff2e3a493ad439e5d" Oct 13 17:52:19 crc kubenswrapper[4720]: I1013 17:52:19.443048 4720 scope.go:117] "RemoveContainer" containerID="15827525cf8ca4db758bff78f963fccc5c80a428a61c37b4476c82959dbb9633" Oct 13 17:52:26 crc kubenswrapper[4720]: I1013 17:52:26.139372 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7q6jx"] Oct 13 17:52:26 crc kubenswrapper[4720]: I1013 17:52:26.141872 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7q6jx" Oct 13 17:52:26 crc kubenswrapper[4720]: I1013 17:52:26.165673 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7q6jx"] Oct 13 17:52:26 crc kubenswrapper[4720]: I1013 17:52:26.191833 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xh44m\" (UniqueName: \"kubernetes.io/projected/2d561b02-89e7-4fd5-bc93-759db09f73aa-kube-api-access-xh44m\") pod \"redhat-operators-7q6jx\" (UID: \"2d561b02-89e7-4fd5-bc93-759db09f73aa\") " pod="openshift-marketplace/redhat-operators-7q6jx" Oct 13 17:52:26 crc kubenswrapper[4720]: I1013 17:52:26.191990 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d561b02-89e7-4fd5-bc93-759db09f73aa-utilities\") pod \"redhat-operators-7q6jx\" (UID: \"2d561b02-89e7-4fd5-bc93-759db09f73aa\") " pod="openshift-marketplace/redhat-operators-7q6jx" Oct 13 17:52:26 crc kubenswrapper[4720]: I1013 17:52:26.192034 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d561b02-89e7-4fd5-bc93-759db09f73aa-catalog-content\") pod \"redhat-operators-7q6jx\" (UID: \"2d561b02-89e7-4fd5-bc93-759db09f73aa\") " pod="openshift-marketplace/redhat-operators-7q6jx" Oct 13 17:52:26 crc kubenswrapper[4720]: I1013 17:52:26.293238 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xh44m\" (UniqueName: \"kubernetes.io/projected/2d561b02-89e7-4fd5-bc93-759db09f73aa-kube-api-access-xh44m\") pod \"redhat-operators-7q6jx\" (UID: \"2d561b02-89e7-4fd5-bc93-759db09f73aa\") " pod="openshift-marketplace/redhat-operators-7q6jx" Oct 13 17:52:26 crc kubenswrapper[4720]: I1013 17:52:26.293879 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d561b02-89e7-4fd5-bc93-759db09f73aa-utilities\") pod \"redhat-operators-7q6jx\" (UID: \"2d561b02-89e7-4fd5-bc93-759db09f73aa\") " pod="openshift-marketplace/redhat-operators-7q6jx" Oct 13 17:52:26 crc kubenswrapper[4720]: I1013 17:52:26.293968 4720 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d561b02-89e7-4fd5-bc93-759db09f73aa-catalog-content\") pod \"redhat-operators-7q6jx\" (UID: \"2d561b02-89e7-4fd5-bc93-759db09f73aa\") " pod="openshift-marketplace/redhat-operators-7q6jx" Oct 13 17:52:26 crc kubenswrapper[4720]: I1013 17:52:26.294401 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d561b02-89e7-4fd5-bc93-759db09f73aa-utilities\") pod \"redhat-operators-7q6jx\" (UID: \"2d561b02-89e7-4fd5-bc93-759db09f73aa\") " pod="openshift-marketplace/redhat-operators-7q6jx" Oct 13 17:52:26 crc kubenswrapper[4720]: I1013 17:52:26.294577 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d561b02-89e7-4fd5-bc93-759db09f73aa-catalog-content\") pod \"redhat-operators-7q6jx\" (UID: \"2d561b02-89e7-4fd5-bc93-759db09f73aa\") " pod="openshift-marketplace/redhat-operators-7q6jx" Oct 13 17:52:26 crc kubenswrapper[4720]: I1013 17:52:26.314626 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xh44m\" (UniqueName: \"kubernetes.io/projected/2d561b02-89e7-4fd5-bc93-759db09f73aa-kube-api-access-xh44m\") pod \"redhat-operators-7q6jx\" (UID: \"2d561b02-89e7-4fd5-bc93-759db09f73aa\") " pod="openshift-marketplace/redhat-operators-7q6jx" Oct 13 17:52:26 crc kubenswrapper[4720]: I1013 17:52:26.476549 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7q6jx" Oct 13 17:52:27 crc kubenswrapper[4720]: I1013 17:52:27.000848 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7q6jx"] Oct 13 17:52:27 crc kubenswrapper[4720]: I1013 17:52:27.529120 4720 generic.go:334] "Generic (PLEG): container finished" podID="2d561b02-89e7-4fd5-bc93-759db09f73aa" containerID="e1f0f918ff38d84dd4d55a91eb25cd45ff54722ce5bbc673b7e6d65cf766faaa" exitCode=0 Oct 13 17:52:27 crc kubenswrapper[4720]: I1013 17:52:27.529159 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7q6jx" event={"ID":"2d561b02-89e7-4fd5-bc93-759db09f73aa","Type":"ContainerDied","Data":"e1f0f918ff38d84dd4d55a91eb25cd45ff54722ce5bbc673b7e6d65cf766faaa"} Oct 13 17:52:27 crc kubenswrapper[4720]: I1013 17:52:27.529394 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7q6jx" event={"ID":"2d561b02-89e7-4fd5-bc93-759db09f73aa","Type":"ContainerStarted","Data":"e8a489df8c4718653197f68b85ebb21b06512157b0947d8263c4e0724f34a590"} Oct 13 17:52:29 crc kubenswrapper[4720]: I1013 17:52:29.578639 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7q6jx" event={"ID":"2d561b02-89e7-4fd5-bc93-759db09f73aa","Type":"ContainerStarted","Data":"3ef052faaa429aa2d46b25d0912d8f29040091f7df9f0cc4c88d133724ace101"} Oct 13 17:52:30 crc kubenswrapper[4720]: I1013 17:52:30.170400 4720 scope.go:117] "RemoveContainer" containerID="eabc7e2122617e64bac6ae4330db40b5b4e9867a18ff888fd3f7afb5cfc64f2f" Oct 13 17:52:30 crc kubenswrapper[4720]: E1013 17:52:30.170960 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" Oct 13 17:52:30 crc kubenswrapper[4720]: I1013 17:52:30.589653 4720 generic.go:334] "Generic (PLEG): container finished" podID="2d561b02-89e7-4fd5-bc93-759db09f73aa" containerID="3ef052faaa429aa2d46b25d0912d8f29040091f7df9f0cc4c88d133724ace101" exitCode=0 Oct 13 17:52:30 crc kubenswrapper[4720]: I1013 17:52:30.589699 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7q6jx" event={"ID":"2d561b02-89e7-4fd5-bc93-759db09f73aa","Type":"ContainerDied","Data":"3ef052faaa429aa2d46b25d0912d8f29040091f7df9f0cc4c88d133724ace101"} Oct 13 17:52:31 crc kubenswrapper[4720]: I1013 17:52:31.604452 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7q6jx" event={"ID":"2d561b02-89e7-4fd5-bc93-759db09f73aa","Type":"ContainerStarted","Data":"f1de6da12dc85ef5c46b796397d2d8052aa26275fac04c4eb315186a836a62b7"} Oct 13 17:52:31 crc kubenswrapper[4720]: I1013 17:52:31.628025 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7q6jx" podStartSLOduration=1.905579707 podStartE2EDuration="5.62800552s" podCreationTimestamp="2025-10-13 17:52:26 +0000 UTC" firstStartedPulling="2025-10-13 17:52:27.531563223 +0000 UTC m=+1692.988813355" lastFinishedPulling="2025-10-13 17:52:31.253989016 +0000 UTC m=+1696.711239168" observedRunningTime="2025-10-13 17:52:31.624549821 +0000 UTC m=+1697.081800033" watchObservedRunningTime="2025-10-13 17:52:31.62800552 +0000 UTC m=+1697.085255652" Oct 13 17:52:36 crc kubenswrapper[4720]: I1013 17:52:36.477618 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7q6jx" Oct 13 17:52:36 crc kubenswrapper[4720]: I1013 17:52:36.478271 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7q6jx" Oct 13 17:52:37 crc kubenswrapper[4720]: I1013 17:52:37.523582 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7q6jx" podUID="2d561b02-89e7-4fd5-bc93-759db09f73aa" containerName="registry-server" probeResult="failure" output=< Oct 13 17:52:37 crc kubenswrapper[4720]: timeout: failed to connect service ":50051" within 1s Oct 13 17:52:37 crc kubenswrapper[4720]: > Oct 13 17:52:42 crc kubenswrapper[4720]: I1013 17:52:42.169319 4720 scope.go:117] "RemoveContainer" containerID="eabc7e2122617e64bac6ae4330db40b5b4e9867a18ff888fd3f7afb5cfc64f2f" Oct 13 17:52:42 crc kubenswrapper[4720]: E1013 17:52:42.170017 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" Oct 13 17:52:44 crc kubenswrapper[4720]: I1013 17:52:44.727058 4720 generic.go:334] "Generic (PLEG): container finished" podID="41f1773b-8761-4e7e-bcfe-853ca5977b3b" containerID="58a4cb4d42a495f70cbd791c46cf7afedbbb8acd174654995b79437d9cc896e6" exitCode=2 Oct 13 17:52:44 crc kubenswrapper[4720]: I1013 17:52:44.727168 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-dtplp" 
event={"ID":"41f1773b-8761-4e7e-bcfe-853ca5977b3b","Type":"ContainerDied","Data":"58a4cb4d42a495f70cbd791c46cf7afedbbb8acd174654995b79437d9cc896e6"} Oct 13 17:52:46 crc kubenswrapper[4720]: I1013 17:52:46.231078 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-dtplp" Oct 13 17:52:46 crc kubenswrapper[4720]: I1013 17:52:46.336406 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/41f1773b-8761-4e7e-bcfe-853ca5977b3b-ssh-key\") pod \"41f1773b-8761-4e7e-bcfe-853ca5977b3b\" (UID: \"41f1773b-8761-4e7e-bcfe-853ca5977b3b\") " Oct 13 17:52:46 crc kubenswrapper[4720]: I1013 17:52:46.336576 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4cjx\" (UniqueName: \"kubernetes.io/projected/41f1773b-8761-4e7e-bcfe-853ca5977b3b-kube-api-access-p4cjx\") pod \"41f1773b-8761-4e7e-bcfe-853ca5977b3b\" (UID: \"41f1773b-8761-4e7e-bcfe-853ca5977b3b\") " Oct 13 17:52:46 crc kubenswrapper[4720]: I1013 17:52:46.336895 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/41f1773b-8761-4e7e-bcfe-853ca5977b3b-inventory\") pod \"41f1773b-8761-4e7e-bcfe-853ca5977b3b\" (UID: \"41f1773b-8761-4e7e-bcfe-853ca5977b3b\") " Oct 13 17:52:46 crc kubenswrapper[4720]: I1013 17:52:46.344830 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41f1773b-8761-4e7e-bcfe-853ca5977b3b-kube-api-access-p4cjx" (OuterVolumeSpecName: "kube-api-access-p4cjx") pod "41f1773b-8761-4e7e-bcfe-853ca5977b3b" (UID: "41f1773b-8761-4e7e-bcfe-853ca5977b3b"). InnerVolumeSpecName "kube-api-access-p4cjx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:52:46 crc kubenswrapper[4720]: I1013 17:52:46.378325 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41f1773b-8761-4e7e-bcfe-853ca5977b3b-inventory" (OuterVolumeSpecName: "inventory") pod "41f1773b-8761-4e7e-bcfe-853ca5977b3b" (UID: "41f1773b-8761-4e7e-bcfe-853ca5977b3b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:52:46 crc kubenswrapper[4720]: I1013 17:52:46.380566 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41f1773b-8761-4e7e-bcfe-853ca5977b3b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "41f1773b-8761-4e7e-bcfe-853ca5977b3b" (UID: "41f1773b-8761-4e7e-bcfe-853ca5977b3b"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:52:46 crc kubenswrapper[4720]: I1013 17:52:46.439421 4720 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/41f1773b-8761-4e7e-bcfe-853ca5977b3b-inventory\") on node \"crc\" DevicePath \"\"" Oct 13 17:52:46 crc kubenswrapper[4720]: I1013 17:52:46.439450 4720 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/41f1773b-8761-4e7e-bcfe-853ca5977b3b-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 13 17:52:46 crc kubenswrapper[4720]: I1013 17:52:46.439460 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p4cjx\" (UniqueName: \"kubernetes.io/projected/41f1773b-8761-4e7e-bcfe-853ca5977b3b-kube-api-access-p4cjx\") on node \"crc\" DevicePath \"\"" Oct 13 17:52:46 crc kubenswrapper[4720]: I1013 17:52:46.542998 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7q6jx" Oct 13 17:52:46 crc kubenswrapper[4720]: I1013 17:52:46.599426 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7q6jx" Oct 13 17:52:46 crc kubenswrapper[4720]: I1013 17:52:46.750757 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-dtplp" event={"ID":"41f1773b-8761-4e7e-bcfe-853ca5977b3b","Type":"ContainerDied","Data":"669bcf6715088cdfad243f5411537041188bf73b10f8ad9764f73a541c3ba13a"} Oct 13 17:52:46 crc kubenswrapper[4720]: I1013 17:52:46.750796 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-dtplp" Oct 13 17:52:46 crc kubenswrapper[4720]: I1013 17:52:46.750825 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="669bcf6715088cdfad243f5411537041188bf73b10f8ad9764f73a541c3ba13a" Oct 13 17:52:46 crc kubenswrapper[4720]: I1013 17:52:46.782382 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7q6jx"] Oct 13 17:52:47 crc kubenswrapper[4720]: I1013 17:52:47.765641 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7q6jx" podUID="2d561b02-89e7-4fd5-bc93-759db09f73aa" containerName="registry-server" containerID="cri-o://f1de6da12dc85ef5c46b796397d2d8052aa26275fac04c4eb315186a836a62b7" gracePeriod=2 Oct 13 17:52:48 crc kubenswrapper[4720]: I1013 17:52:48.282330 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7q6jx" Oct 13 17:52:48 crc kubenswrapper[4720]: I1013 17:52:48.381670 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d561b02-89e7-4fd5-bc93-759db09f73aa-catalog-content\") pod \"2d561b02-89e7-4fd5-bc93-759db09f73aa\" (UID: \"2d561b02-89e7-4fd5-bc93-759db09f73aa\") " Oct 13 17:52:48 crc kubenswrapper[4720]: I1013 17:52:48.382158 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xh44m\" (UniqueName: \"kubernetes.io/projected/2d561b02-89e7-4fd5-bc93-759db09f73aa-kube-api-access-xh44m\") pod \"2d561b02-89e7-4fd5-bc93-759db09f73aa\" (UID: \"2d561b02-89e7-4fd5-bc93-759db09f73aa\") " Oct 13 17:52:48 crc kubenswrapper[4720]: I1013 17:52:48.382303 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d561b02-89e7-4fd5-bc93-759db09f73aa-utilities\") pod \"2d561b02-89e7-4fd5-bc93-759db09f73aa\" (UID: \"2d561b02-89e7-4fd5-bc93-759db09f73aa\") " Oct 13 17:52:48 crc kubenswrapper[4720]: I1013 17:52:48.383778 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d561b02-89e7-4fd5-bc93-759db09f73aa-utilities" (OuterVolumeSpecName: "utilities") pod "2d561b02-89e7-4fd5-bc93-759db09f73aa" (UID: "2d561b02-89e7-4fd5-bc93-759db09f73aa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 17:52:48 crc kubenswrapper[4720]: I1013 17:52:48.389643 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d561b02-89e7-4fd5-bc93-759db09f73aa-kube-api-access-xh44m" (OuterVolumeSpecName: "kube-api-access-xh44m") pod "2d561b02-89e7-4fd5-bc93-759db09f73aa" (UID: "2d561b02-89e7-4fd5-bc93-759db09f73aa"). InnerVolumeSpecName "kube-api-access-xh44m". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:52:48 crc kubenswrapper[4720]: I1013 17:52:48.484845 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d561b02-89e7-4fd5-bc93-759db09f73aa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2d561b02-89e7-4fd5-bc93-759db09f73aa" (UID: "2d561b02-89e7-4fd5-bc93-759db09f73aa"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 17:52:48 crc kubenswrapper[4720]: I1013 17:52:48.485999 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d561b02-89e7-4fd5-bc93-759db09f73aa-catalog-content\") pod \"2d561b02-89e7-4fd5-bc93-759db09f73aa\" (UID: \"2d561b02-89e7-4fd5-bc93-759db09f73aa\") " Oct 13 17:52:48 crc kubenswrapper[4720]: W1013 17:52:48.486243 4720 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/2d561b02-89e7-4fd5-bc93-759db09f73aa/volumes/kubernetes.io~empty-dir/catalog-content Oct 13 17:52:48 crc kubenswrapper[4720]: I1013 17:52:48.486286 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d561b02-89e7-4fd5-bc93-759db09f73aa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2d561b02-89e7-4fd5-bc93-759db09f73aa" (UID: "2d561b02-89e7-4fd5-bc93-759db09f73aa"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 17:52:48 crc kubenswrapper[4720]: I1013 17:52:48.486898 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xh44m\" (UniqueName: \"kubernetes.io/projected/2d561b02-89e7-4fd5-bc93-759db09f73aa-kube-api-access-xh44m\") on node \"crc\" DevicePath \"\"" Oct 13 17:52:48 crc kubenswrapper[4720]: I1013 17:52:48.486925 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d561b02-89e7-4fd5-bc93-759db09f73aa-utilities\") on node \"crc\" DevicePath \"\"" Oct 13 17:52:48 crc kubenswrapper[4720]: I1013 17:52:48.486945 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d561b02-89e7-4fd5-bc93-759db09f73aa-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 13 17:52:48 crc kubenswrapper[4720]: I1013 17:52:48.781701 4720 generic.go:334] "Generic (PLEG): container finished" podID="2d561b02-89e7-4fd5-bc93-759db09f73aa" containerID="f1de6da12dc85ef5c46b796397d2d8052aa26275fac04c4eb315186a836a62b7" exitCode=0 Oct 13 17:52:48 crc kubenswrapper[4720]: I1013 17:52:48.781765 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7q6jx" event={"ID":"2d561b02-89e7-4fd5-bc93-759db09f73aa","Type":"ContainerDied","Data":"f1de6da12dc85ef5c46b796397d2d8052aa26275fac04c4eb315186a836a62b7"} Oct 13 17:52:48 crc kubenswrapper[4720]: I1013 17:52:48.781808 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7q6jx" event={"ID":"2d561b02-89e7-4fd5-bc93-759db09f73aa","Type":"ContainerDied","Data":"e8a489df8c4718653197f68b85ebb21b06512157b0947d8263c4e0724f34a590"} Oct 13 17:52:48 crc kubenswrapper[4720]: I1013 17:52:48.781839 4720 scope.go:117] "RemoveContainer" containerID="f1de6da12dc85ef5c46b796397d2d8052aa26275fac04c4eb315186a836a62b7" Oct 13 17:52:48 crc kubenswrapper[4720]: I1013 17:52:48.782034 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7q6jx" Oct 13 17:52:48 crc kubenswrapper[4720]: I1013 17:52:48.827540 4720 scope.go:117] "RemoveContainer" containerID="3ef052faaa429aa2d46b25d0912d8f29040091f7df9f0cc4c88d133724ace101" Oct 13 17:52:48 crc kubenswrapper[4720]: I1013 17:52:48.835208 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7q6jx"] Oct 13 17:52:48 crc kubenswrapper[4720]: I1013 17:52:48.855471 4720 scope.go:117] "RemoveContainer" containerID="e1f0f918ff38d84dd4d55a91eb25cd45ff54722ce5bbc673b7e6d65cf766faaa" Oct 13 17:52:48 crc kubenswrapper[4720]: I1013 17:52:48.860411 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7q6jx"] Oct 13 17:52:48 crc kubenswrapper[4720]: I1013 17:52:48.927382 4720 scope.go:117] "RemoveContainer" containerID="f1de6da12dc85ef5c46b796397d2d8052aa26275fac04c4eb315186a836a62b7" Oct 13 17:52:48 crc kubenswrapper[4720]: E1013 17:52:48.928138 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1de6da12dc85ef5c46b796397d2d8052aa26275fac04c4eb315186a836a62b7\": container with ID starting with f1de6da12dc85ef5c46b796397d2d8052aa26275fac04c4eb315186a836a62b7 not found: ID does not exist" containerID="f1de6da12dc85ef5c46b796397d2d8052aa26275fac04c4eb315186a836a62b7" Oct 13 17:52:48 crc kubenswrapper[4720]: I1013 17:52:48.928275 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1de6da12dc85ef5c46b796397d2d8052aa26275fac04c4eb315186a836a62b7"} err="failed to get container status \"f1de6da12dc85ef5c46b796397d2d8052aa26275fac04c4eb315186a836a62b7\": rpc error: code = NotFound desc = could not find container \"f1de6da12dc85ef5c46b796397d2d8052aa26275fac04c4eb315186a836a62b7\": container with ID starting with f1de6da12dc85ef5c46b796397d2d8052aa26275fac04c4eb315186a836a62b7 not found: ID does not exist" Oct 13 17:52:48 crc kubenswrapper[4720]: I1013 17:52:48.928311 4720 scope.go:117] "RemoveContainer" containerID="3ef052faaa429aa2d46b25d0912d8f29040091f7df9f0cc4c88d133724ace101" Oct 13 17:52:48 crc kubenswrapper[4720]: E1013 17:52:48.928853 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ef052faaa429aa2d46b25d0912d8f29040091f7df9f0cc4c88d133724ace101\": container with ID starting with 3ef052faaa429aa2d46b25d0912d8f29040091f7df9f0cc4c88d133724ace101 not found: ID does not exist" containerID="3ef052faaa429aa2d46b25d0912d8f29040091f7df9f0cc4c88d133724ace101" Oct 13 17:52:48 crc kubenswrapper[4720]: I1013 17:52:48.928916 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ef052faaa429aa2d46b25d0912d8f29040091f7df9f0cc4c88d133724ace101"} err="failed to get container status \"3ef052faaa429aa2d46b25d0912d8f29040091f7df9f0cc4c88d133724ace101\": rpc error: code = NotFound desc = could not find container \"3ef052faaa429aa2d46b25d0912d8f29040091f7df9f0cc4c88d133724ace101\": container with ID starting with 3ef052faaa429aa2d46b25d0912d8f29040091f7df9f0cc4c88d133724ace101 not found: ID does not exist" Oct 13 17:52:48 crc kubenswrapper[4720]: I1013 17:52:48.928956 4720 scope.go:117] "RemoveContainer" containerID="e1f0f918ff38d84dd4d55a91eb25cd45ff54722ce5bbc673b7e6d65cf766faaa" Oct 13 17:52:48 crc kubenswrapper[4720]: E1013 17:52:48.929450 4720 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"e1f0f918ff38d84dd4d55a91eb25cd45ff54722ce5bbc673b7e6d65cf766faaa\": container with ID starting with e1f0f918ff38d84dd4d55a91eb25cd45ff54722ce5bbc673b7e6d65cf766faaa not found: ID does not exist" containerID="e1f0f918ff38d84dd4d55a91eb25cd45ff54722ce5bbc673b7e6d65cf766faaa" Oct 13 17:52:48 crc kubenswrapper[4720]: I1013 17:52:48.929496 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1f0f918ff38d84dd4d55a91eb25cd45ff54722ce5bbc673b7e6d65cf766faaa"} err="failed to get container status \"e1f0f918ff38d84dd4d55a91eb25cd45ff54722ce5bbc673b7e6d65cf766faaa\": rpc error: code = NotFound desc = could not find container \"e1f0f918ff38d84dd4d55a91eb25cd45ff54722ce5bbc673b7e6d65cf766faaa\": container with ID starting with e1f0f918ff38d84dd4d55a91eb25cd45ff54722ce5bbc673b7e6d65cf766faaa not found: ID does not exist" Oct 13 17:52:49 crc kubenswrapper[4720]: I1013 17:52:49.184348 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d561b02-89e7-4fd5-bc93-759db09f73aa" path="/var/lib/kubelet/pods/2d561b02-89e7-4fd5-bc93-759db09f73aa/volumes" Oct 13 17:52:53 crc kubenswrapper[4720]: I1013 17:52:53.168887 4720 scope.go:117] "RemoveContainer" containerID="eabc7e2122617e64bac6ae4330db40b5b4e9867a18ff888fd3f7afb5cfc64f2f" Oct 13 17:52:53 crc kubenswrapper[4720]: E1013 17:52:53.170108 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" Oct 13 17:52:54 crc kubenswrapper[4720]: I1013 17:52:54.034954 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w24k6"] Oct 13 17:52:54 crc kubenswrapper[4720]: E1013 17:52:54.037552 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41f1773b-8761-4e7e-bcfe-853ca5977b3b" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Oct 13 17:52:54 crc kubenswrapper[4720]: I1013 17:52:54.038182 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="41f1773b-8761-4e7e-bcfe-853ca5977b3b" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Oct 13 17:52:54 crc kubenswrapper[4720]: E1013 17:52:54.038642 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d561b02-89e7-4fd5-bc93-759db09f73aa" containerName="extract-content" Oct 13 17:52:54 crc kubenswrapper[4720]: I1013 17:52:54.038976 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d561b02-89e7-4fd5-bc93-759db09f73aa" containerName="extract-content" Oct 13 17:52:54 crc kubenswrapper[4720]: E1013 17:52:54.039362 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d561b02-89e7-4fd5-bc93-759db09f73aa" containerName="registry-server" Oct 13 17:52:54 crc kubenswrapper[4720]: I1013 17:52:54.039625 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d561b02-89e7-4fd5-bc93-759db09f73aa" containerName="registry-server" Oct 13 17:52:54 crc kubenswrapper[4720]: E1013 17:52:54.039845 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d561b02-89e7-4fd5-bc93-759db09f73aa" containerName="extract-utilities" Oct 13 17:52:54 crc kubenswrapper[4720]: I1013 
17:52:54.040087 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d561b02-89e7-4fd5-bc93-759db09f73aa" containerName="extract-utilities" Oct 13 17:52:54 crc kubenswrapper[4720]: I1013 17:52:54.041117 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="41f1773b-8761-4e7e-bcfe-853ca5977b3b" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Oct 13 17:52:54 crc kubenswrapper[4720]: I1013 17:52:54.041451 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d561b02-89e7-4fd5-bc93-759db09f73aa" containerName="registry-server" Oct 13 17:52:54 crc kubenswrapper[4720]: I1013 17:52:54.044004 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w24k6"] Oct 13 17:52:54 crc kubenswrapper[4720]: I1013 17:52:54.044169 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w24k6" Oct 13 17:52:54 crc kubenswrapper[4720]: I1013 17:52:54.047071 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 13 17:52:54 crc kubenswrapper[4720]: I1013 17:52:54.048593 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 13 17:52:54 crc kubenswrapper[4720]: I1013 17:52:54.048634 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l2fds" Oct 13 17:52:54 crc kubenswrapper[4720]: I1013 17:52:54.056230 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 13 17:52:54 crc kubenswrapper[4720]: I1013 17:52:54.201001 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9e9b7c8c-f6f4-448d-af87-164d1f0d008f-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-w24k6\" (UID: \"9e9b7c8c-f6f4-448d-af87-164d1f0d008f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w24k6" Oct 13 17:52:54 crc kubenswrapper[4720]: I1013 17:52:54.201309 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v49d6\" (UniqueName: \"kubernetes.io/projected/9e9b7c8c-f6f4-448d-af87-164d1f0d008f-kube-api-access-v49d6\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-w24k6\" (UID: \"9e9b7c8c-f6f4-448d-af87-164d1f0d008f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w24k6" Oct 13 17:52:54 crc kubenswrapper[4720]: I1013 17:52:54.201449 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9e9b7c8c-f6f4-448d-af87-164d1f0d008f-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-w24k6\" (UID: \"9e9b7c8c-f6f4-448d-af87-164d1f0d008f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w24k6" Oct 13 17:52:54 crc kubenswrapper[4720]: I1013 17:52:54.303880 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9e9b7c8c-f6f4-448d-af87-164d1f0d008f-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-w24k6\" (UID: \"9e9b7c8c-f6f4-448d-af87-164d1f0d008f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w24k6" Oct 13 17:52:54 crc kubenswrapper[4720]: I1013 
17:52:54.304184 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9e9b7c8c-f6f4-448d-af87-164d1f0d008f-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-w24k6\" (UID: \"9e9b7c8c-f6f4-448d-af87-164d1f0d008f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w24k6" Oct 13 17:52:54 crc kubenswrapper[4720]: I1013 17:52:54.304458 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v49d6\" (UniqueName: \"kubernetes.io/projected/9e9b7c8c-f6f4-448d-af87-164d1f0d008f-kube-api-access-v49d6\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-w24k6\" (UID: \"9e9b7c8c-f6f4-448d-af87-164d1f0d008f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w24k6" Oct 13 17:52:54 crc kubenswrapper[4720]: I1013 17:52:54.311680 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9e9b7c8c-f6f4-448d-af87-164d1f0d008f-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-w24k6\" (UID: \"9e9b7c8c-f6f4-448d-af87-164d1f0d008f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w24k6" Oct 13 17:52:54 crc kubenswrapper[4720]: I1013 17:52:54.312329 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9e9b7c8c-f6f4-448d-af87-164d1f0d008f-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-w24k6\" (UID: \"9e9b7c8c-f6f4-448d-af87-164d1f0d008f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w24k6" Oct 13 17:52:54 crc kubenswrapper[4720]: I1013 17:52:54.335531 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v49d6\" (UniqueName: \"kubernetes.io/projected/9e9b7c8c-f6f4-448d-af87-164d1f0d008f-kube-api-access-v49d6\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-w24k6\" (UID: \"9e9b7c8c-f6f4-448d-af87-164d1f0d008f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w24k6" Oct 13 17:52:54 crc kubenswrapper[4720]: I1013 17:52:54.383920 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w24k6" Oct 13 17:52:54 crc kubenswrapper[4720]: I1013 17:52:54.991802 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w24k6"] Oct 13 17:52:55 crc kubenswrapper[4720]: I1013 17:52:55.869009 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w24k6" event={"ID":"9e9b7c8c-f6f4-448d-af87-164d1f0d008f","Type":"ContainerStarted","Data":"8c5e99e607ad186adb0edd2ac725abe347991eb4e5dd9a6a377bc0f5fa56f6cd"} Oct 13 17:52:56 crc kubenswrapper[4720]: I1013 17:52:56.052316 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-j4lmp"] Oct 13 17:52:56 crc kubenswrapper[4720]: I1013 17:52:56.067981 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-j4lmp"] Oct 13 17:52:56 crc kubenswrapper[4720]: I1013 17:52:56.906126 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w24k6" event={"ID":"9e9b7c8c-f6f4-448d-af87-164d1f0d008f","Type":"ContainerStarted","Data":"f08b743f6b33eda26b4433566cf049e08ca2f7ee3042664da254f1fd75ed4c80"} Oct 13 17:52:56 crc kubenswrapper[4720]: I1013 17:52:56.935221 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w24k6" podStartSLOduration=2.245825943 podStartE2EDuration="2.935201597s" podCreationTimestamp="2025-10-13 17:52:54 +0000 UTC" firstStartedPulling="2025-10-13 17:52:54.998333953 +0000 UTC m=+1720.455584115" lastFinishedPulling="2025-10-13 17:52:55.687709607 +0000 UTC m=+1721.144959769" observedRunningTime="2025-10-13 17:52:56.929555631 +0000 UTC m=+1722.386805763" watchObservedRunningTime="2025-10-13 17:52:56.935201597 +0000 UTC m=+1722.392451729" Oct 13 17:52:57 crc kubenswrapper[4720]: I1013 17:52:57.178883 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f2ee4f4-e8f0-41c3-9ec4-0832701c48a8" path="/var/lib/kubelet/pods/3f2ee4f4-e8f0-41c3-9ec4-0832701c48a8/volumes" Oct 13 17:53:06 crc kubenswrapper[4720]: I1013 17:53:06.168660 4720 scope.go:117] "RemoveContainer" containerID="eabc7e2122617e64bac6ae4330db40b5b4e9867a18ff888fd3f7afb5cfc64f2f" Oct 13 17:53:06 crc kubenswrapper[4720]: E1013 17:53:06.169796 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" Oct 13 17:53:19 crc kubenswrapper[4720]: I1013 17:53:19.584738 4720 scope.go:117] "RemoveContainer" containerID="360130da2fdf61ce37b6f94421c1a85ed37eb48dd143de2265564c462b24a013" Oct 13 17:53:21 crc kubenswrapper[4720]: I1013 17:53:21.168102 4720 scope.go:117] "RemoveContainer" containerID="eabc7e2122617e64bac6ae4330db40b5b4e9867a18ff888fd3f7afb5cfc64f2f" Oct 13 17:53:21 crc kubenswrapper[4720]: E1013 17:53:21.168647 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" Oct 13 17:53:36 crc kubenswrapper[4720]: I1013 17:53:36.167891 4720 scope.go:117] "RemoveContainer" containerID="eabc7e2122617e64bac6ae4330db40b5b4e9867a18ff888fd3f7afb5cfc64f2f" Oct 13 17:53:36 crc kubenswrapper[4720]: E1013 17:53:36.168711 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" Oct 13 17:53:46 crc kubenswrapper[4720]: I1013 17:53:46.429123 4720 generic.go:334] "Generic (PLEG): container finished" podID="9e9b7c8c-f6f4-448d-af87-164d1f0d008f" containerID="f08b743f6b33eda26b4433566cf049e08ca2f7ee3042664da254f1fd75ed4c80" exitCode=0 Oct 13 17:53:46 crc kubenswrapper[4720]: I1013 17:53:46.429309 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w24k6" event={"ID":"9e9b7c8c-f6f4-448d-af87-164d1f0d008f","Type":"ContainerDied","Data":"f08b743f6b33eda26b4433566cf049e08ca2f7ee3042664da254f1fd75ed4c80"} Oct 13 17:53:47 crc kubenswrapper[4720]: I1013 17:53:47.973693 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w24k6" Oct 13 17:53:48 crc kubenswrapper[4720]: I1013 17:53:48.028419 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9e9b7c8c-f6f4-448d-af87-164d1f0d008f-ssh-key\") pod \"9e9b7c8c-f6f4-448d-af87-164d1f0d008f\" (UID: \"9e9b7c8c-f6f4-448d-af87-164d1f0d008f\") " Oct 13 17:53:48 crc kubenswrapper[4720]: I1013 17:53:48.028536 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v49d6\" (UniqueName: \"kubernetes.io/projected/9e9b7c8c-f6f4-448d-af87-164d1f0d008f-kube-api-access-v49d6\") pod \"9e9b7c8c-f6f4-448d-af87-164d1f0d008f\" (UID: \"9e9b7c8c-f6f4-448d-af87-164d1f0d008f\") " Oct 13 17:53:48 crc kubenswrapper[4720]: I1013 17:53:48.029892 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9e9b7c8c-f6f4-448d-af87-164d1f0d008f-inventory\") pod \"9e9b7c8c-f6f4-448d-af87-164d1f0d008f\" (UID: \"9e9b7c8c-f6f4-448d-af87-164d1f0d008f\") " Oct 13 17:53:48 crc kubenswrapper[4720]: I1013 17:53:48.035368 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b7c8c-f6f4-448d-af87-164d1f0d008f-kube-api-access-v49d6" (OuterVolumeSpecName: "kube-api-access-v49d6") pod "9e9b7c8c-f6f4-448d-af87-164d1f0d008f" (UID: "9e9b7c8c-f6f4-448d-af87-164d1f0d008f"). InnerVolumeSpecName "kube-api-access-v49d6". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:53:48 crc kubenswrapper[4720]: I1013 17:53:48.055999 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e9b7c8c-f6f4-448d-af87-164d1f0d008f-inventory" (OuterVolumeSpecName: "inventory") pod "9e9b7c8c-f6f4-448d-af87-164d1f0d008f" (UID: "9e9b7c8c-f6f4-448d-af87-164d1f0d008f"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:53:48 crc kubenswrapper[4720]: I1013 17:53:48.078947 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e9b7c8c-f6f4-448d-af87-164d1f0d008f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "9e9b7c8c-f6f4-448d-af87-164d1f0d008f" (UID: "9e9b7c8c-f6f4-448d-af87-164d1f0d008f"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:53:48 crc kubenswrapper[4720]: I1013 17:53:48.132702 4720 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9e9b7c8c-f6f4-448d-af87-164d1f0d008f-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 13 17:53:48 crc kubenswrapper[4720]: I1013 17:53:48.132739 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v49d6\" (UniqueName: \"kubernetes.io/projected/9e9b7c8c-f6f4-448d-af87-164d1f0d008f-kube-api-access-v49d6\") on node \"crc\" DevicePath \"\"" Oct 13 17:53:48 crc kubenswrapper[4720]: I1013 17:53:48.132754 4720 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9e9b7c8c-f6f4-448d-af87-164d1f0d008f-inventory\") on node \"crc\" DevicePath \"\"" Oct 13 17:53:48 crc kubenswrapper[4720]: I1013 17:53:48.467853 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w24k6" event={"ID":"9e9b7c8c-f6f4-448d-af87-164d1f0d008f","Type":"ContainerDied","Data":"8c5e99e607ad186adb0edd2ac725abe347991eb4e5dd9a6a377bc0f5fa56f6cd"} Oct 13 17:53:48 crc kubenswrapper[4720]: I1013 17:53:48.467904 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w24k6" Oct 13 17:53:48 crc kubenswrapper[4720]: I1013 17:53:48.467911 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c5e99e607ad186adb0edd2ac725abe347991eb4e5dd9a6a377bc0f5fa56f6cd" Oct 13 17:53:48 crc kubenswrapper[4720]: I1013 17:53:48.556479 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-8vhmv"] Oct 13 17:53:48 crc kubenswrapper[4720]: E1013 17:53:48.556852 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e9b7c8c-f6f4-448d-af87-164d1f0d008f" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Oct 13 17:53:48 crc kubenswrapper[4720]: I1013 17:53:48.556867 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e9b7c8c-f6f4-448d-af87-164d1f0d008f" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Oct 13 17:53:48 crc kubenswrapper[4720]: I1013 17:53:48.557101 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e9b7c8c-f6f4-448d-af87-164d1f0d008f" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Oct 13 17:53:48 crc kubenswrapper[4720]: I1013 17:53:48.557749 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-8vhmv" Oct 13 17:53:48 crc kubenswrapper[4720]: I1013 17:53:48.561907 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 13 17:53:48 crc kubenswrapper[4720]: I1013 17:53:48.562213 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l2fds" Oct 13 17:53:48 crc kubenswrapper[4720]: I1013 17:53:48.562897 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 13 17:53:48 crc kubenswrapper[4720]: I1013 17:53:48.563043 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 13 17:53:48 crc kubenswrapper[4720]: I1013 17:53:48.570648 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-8vhmv"] Oct 13 17:53:48 crc kubenswrapper[4720]: I1013 17:53:48.649409 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/06bc5000-9f94-4cff-ade7-ac063d97ef79-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-8vhmv\" (UID: \"06bc5000-9f94-4cff-ade7-ac063d97ef79\") " pod="openstack/ssh-known-hosts-edpm-deployment-8vhmv" Oct 13 17:53:48 crc kubenswrapper[4720]: I1013 17:53:48.650088 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5gcc\" (UniqueName: \"kubernetes.io/projected/06bc5000-9f94-4cff-ade7-ac063d97ef79-kube-api-access-z5gcc\") pod \"ssh-known-hosts-edpm-deployment-8vhmv\" (UID: \"06bc5000-9f94-4cff-ade7-ac063d97ef79\") " pod="openstack/ssh-known-hosts-edpm-deployment-8vhmv" Oct 13 17:53:48 crc kubenswrapper[4720]: I1013 17:53:48.650425 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/06bc5000-9f94-4cff-ade7-ac063d97ef79-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-8vhmv\" (UID: \"06bc5000-9f94-4cff-ade7-ac063d97ef79\") " pod="openstack/ssh-known-hosts-edpm-deployment-8vhmv" Oct 13 17:53:48 crc kubenswrapper[4720]: I1013 17:53:48.752564 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/06bc5000-9f94-4cff-ade7-ac063d97ef79-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-8vhmv\" (UID: \"06bc5000-9f94-4cff-ade7-ac063d97ef79\") " pod="openstack/ssh-known-hosts-edpm-deployment-8vhmv" Oct 13 17:53:48 crc kubenswrapper[4720]: I1013 17:53:48.752937 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5gcc\" (UniqueName: \"kubernetes.io/projected/06bc5000-9f94-4cff-ade7-ac063d97ef79-kube-api-access-z5gcc\") pod \"ssh-known-hosts-edpm-deployment-8vhmv\" (UID: \"06bc5000-9f94-4cff-ade7-ac063d97ef79\") " pod="openstack/ssh-known-hosts-edpm-deployment-8vhmv" Oct 13 17:53:48 crc kubenswrapper[4720]: I1013 17:53:48.753004 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/06bc5000-9f94-4cff-ade7-ac063d97ef79-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-8vhmv\" (UID: \"06bc5000-9f94-4cff-ade7-ac063d97ef79\") " pod="openstack/ssh-known-hosts-edpm-deployment-8vhmv" Oct 13 17:53:48 crc 
kubenswrapper[4720]: I1013 17:53:48.759004 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/06bc5000-9f94-4cff-ade7-ac063d97ef79-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-8vhmv\" (UID: \"06bc5000-9f94-4cff-ade7-ac063d97ef79\") " pod="openstack/ssh-known-hosts-edpm-deployment-8vhmv" Oct 13 17:53:48 crc kubenswrapper[4720]: I1013 17:53:48.760807 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/06bc5000-9f94-4cff-ade7-ac063d97ef79-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-8vhmv\" (UID: \"06bc5000-9f94-4cff-ade7-ac063d97ef79\") " pod="openstack/ssh-known-hosts-edpm-deployment-8vhmv" Oct 13 17:53:48 crc kubenswrapper[4720]: I1013 17:53:48.773854 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5gcc\" (UniqueName: \"kubernetes.io/projected/06bc5000-9f94-4cff-ade7-ac063d97ef79-kube-api-access-z5gcc\") pod \"ssh-known-hosts-edpm-deployment-8vhmv\" (UID: \"06bc5000-9f94-4cff-ade7-ac063d97ef79\") " pod="openstack/ssh-known-hosts-edpm-deployment-8vhmv" Oct 13 17:53:48 crc kubenswrapper[4720]: I1013 17:53:48.892484 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-8vhmv" Oct 13 17:53:49 crc kubenswrapper[4720]: I1013 17:53:49.168914 4720 scope.go:117] "RemoveContainer" containerID="eabc7e2122617e64bac6ae4330db40b5b4e9867a18ff888fd3f7afb5cfc64f2f" Oct 13 17:53:49 crc kubenswrapper[4720]: I1013 17:53:49.462944 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-8vhmv"] Oct 13 17:53:49 crc kubenswrapper[4720]: W1013 17:53:49.473002 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod06bc5000_9f94_4cff_ade7_ac063d97ef79.slice/crio-dcd016fb352103546733153d99af2c0802025939f2671e62a4aa6137e7303ba5 WatchSource:0}: Error finding container dcd016fb352103546733153d99af2c0802025939f2671e62a4aa6137e7303ba5: Status 404 returned error can't find the container with id dcd016fb352103546733153d99af2c0802025939f2671e62a4aa6137e7303ba5 Oct 13 17:53:49 crc kubenswrapper[4720]: I1013 17:53:49.488084 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" event={"ID":"ce442c80-fcde-4b79-b6f9-f8f25771dfd4","Type":"ContainerStarted","Data":"a04e41707b3ca4c901c7d1fed0a4b9bbfe355d3cbe36208efddf6c91d80563e3"} Oct 13 17:53:50 crc kubenswrapper[4720]: I1013 17:53:50.501212 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-8vhmv" event={"ID":"06bc5000-9f94-4cff-ade7-ac063d97ef79","Type":"ContainerStarted","Data":"a4c8f85fb339fdc2ad3615e6f449ecf2cc31d90770f33a0982a5a55d52e60ac3"} Oct 13 17:53:50 crc kubenswrapper[4720]: I1013 17:53:50.502154 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-8vhmv" event={"ID":"06bc5000-9f94-4cff-ade7-ac063d97ef79","Type":"ContainerStarted","Data":"dcd016fb352103546733153d99af2c0802025939f2671e62a4aa6137e7303ba5"} Oct 13 17:53:50 crc kubenswrapper[4720]: I1013 17:53:50.527941 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-8vhmv" podStartSLOduration=2.095201171 podStartE2EDuration="2.527920239s" 
podCreationTimestamp="2025-10-13 17:53:48 +0000 UTC" firstStartedPulling="2025-10-13 17:53:49.476248024 +0000 UTC m=+1774.933498156" lastFinishedPulling="2025-10-13 17:53:49.908967082 +0000 UTC m=+1775.366217224" observedRunningTime="2025-10-13 17:53:50.520819466 +0000 UTC m=+1775.978069618" watchObservedRunningTime="2025-10-13 17:53:50.527920239 +0000 UTC m=+1775.985170371" Oct 13 17:53:57 crc kubenswrapper[4720]: I1013 17:53:57.584712 4720 generic.go:334] "Generic (PLEG): container finished" podID="06bc5000-9f94-4cff-ade7-ac063d97ef79" containerID="a4c8f85fb339fdc2ad3615e6f449ecf2cc31d90770f33a0982a5a55d52e60ac3" exitCode=0 Oct 13 17:53:57 crc kubenswrapper[4720]: I1013 17:53:57.585211 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-8vhmv" event={"ID":"06bc5000-9f94-4cff-ade7-ac063d97ef79","Type":"ContainerDied","Data":"a4c8f85fb339fdc2ad3615e6f449ecf2cc31d90770f33a0982a5a55d52e60ac3"} Oct 13 17:53:59 crc kubenswrapper[4720]: I1013 17:53:59.137922 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-8vhmv" Oct 13 17:53:59 crc kubenswrapper[4720]: I1013 17:53:59.199409 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/06bc5000-9f94-4cff-ade7-ac063d97ef79-inventory-0\") pod \"06bc5000-9f94-4cff-ade7-ac063d97ef79\" (UID: \"06bc5000-9f94-4cff-ade7-ac063d97ef79\") " Oct 13 17:53:59 crc kubenswrapper[4720]: I1013 17:53:59.199525 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/06bc5000-9f94-4cff-ade7-ac063d97ef79-ssh-key-openstack-edpm-ipam\") pod \"06bc5000-9f94-4cff-ade7-ac063d97ef79\" (UID: \"06bc5000-9f94-4cff-ade7-ac063d97ef79\") " Oct 13 17:53:59 crc kubenswrapper[4720]: I1013 17:53:59.199661 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5gcc\" (UniqueName: \"kubernetes.io/projected/06bc5000-9f94-4cff-ade7-ac063d97ef79-kube-api-access-z5gcc\") pod \"06bc5000-9f94-4cff-ade7-ac063d97ef79\" (UID: \"06bc5000-9f94-4cff-ade7-ac063d97ef79\") " Oct 13 17:53:59 crc kubenswrapper[4720]: I1013 17:53:59.206983 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06bc5000-9f94-4cff-ade7-ac063d97ef79-kube-api-access-z5gcc" (OuterVolumeSpecName: "kube-api-access-z5gcc") pod "06bc5000-9f94-4cff-ade7-ac063d97ef79" (UID: "06bc5000-9f94-4cff-ade7-ac063d97ef79"). InnerVolumeSpecName "kube-api-access-z5gcc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:53:59 crc kubenswrapper[4720]: I1013 17:53:59.250164 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06bc5000-9f94-4cff-ade7-ac063d97ef79-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "06bc5000-9f94-4cff-ade7-ac063d97ef79" (UID: "06bc5000-9f94-4cff-ade7-ac063d97ef79"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:53:59 crc kubenswrapper[4720]: I1013 17:53:59.250426 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06bc5000-9f94-4cff-ade7-ac063d97ef79-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "06bc5000-9f94-4cff-ade7-ac063d97ef79" (UID: "06bc5000-9f94-4cff-ade7-ac063d97ef79"). 
InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:53:59 crc kubenswrapper[4720]: I1013 17:53:59.304615 4720 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/06bc5000-9f94-4cff-ade7-ac063d97ef79-inventory-0\") on node \"crc\" DevicePath \"\"" Oct 13 17:53:59 crc kubenswrapper[4720]: I1013 17:53:59.304648 4720 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/06bc5000-9f94-4cff-ade7-ac063d97ef79-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Oct 13 17:53:59 crc kubenswrapper[4720]: I1013 17:53:59.304663 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5gcc\" (UniqueName: \"kubernetes.io/projected/06bc5000-9f94-4cff-ade7-ac063d97ef79-kube-api-access-z5gcc\") on node \"crc\" DevicePath \"\"" Oct 13 17:53:59 crc kubenswrapper[4720]: I1013 17:53:59.606906 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-8vhmv" event={"ID":"06bc5000-9f94-4cff-ade7-ac063d97ef79","Type":"ContainerDied","Data":"dcd016fb352103546733153d99af2c0802025939f2671e62a4aa6137e7303ba5"} Oct 13 17:53:59 crc kubenswrapper[4720]: I1013 17:53:59.607279 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dcd016fb352103546733153d99af2c0802025939f2671e62a4aa6137e7303ba5" Oct 13 17:53:59 crc kubenswrapper[4720]: I1013 17:53:59.607209 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-8vhmv" Oct 13 17:53:59 crc kubenswrapper[4720]: I1013 17:53:59.694665 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-94xkd"] Oct 13 17:53:59 crc kubenswrapper[4720]: E1013 17:53:59.695082 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06bc5000-9f94-4cff-ade7-ac063d97ef79" containerName="ssh-known-hosts-edpm-deployment" Oct 13 17:53:59 crc kubenswrapper[4720]: I1013 17:53:59.695101 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="06bc5000-9f94-4cff-ade7-ac063d97ef79" containerName="ssh-known-hosts-edpm-deployment" Oct 13 17:53:59 crc kubenswrapper[4720]: I1013 17:53:59.695372 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="06bc5000-9f94-4cff-ade7-ac063d97ef79" containerName="ssh-known-hosts-edpm-deployment" Oct 13 17:53:59 crc kubenswrapper[4720]: I1013 17:53:59.696283 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-94xkd" Oct 13 17:53:59 crc kubenswrapper[4720]: I1013 17:53:59.708254 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l2fds" Oct 13 17:53:59 crc kubenswrapper[4720]: I1013 17:53:59.708905 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 13 17:53:59 crc kubenswrapper[4720]: I1013 17:53:59.710958 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 13 17:53:59 crc kubenswrapper[4720]: I1013 17:53:59.711027 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 13 17:53:59 crc kubenswrapper[4720]: I1013 17:53:59.721033 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-94xkd"] Oct 13 17:53:59 crc kubenswrapper[4720]: I1013 17:53:59.814217 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwz87\" (UniqueName: \"kubernetes.io/projected/d6ae1ade-ceec-4b00-b028-1272c83dea9a-kube-api-access-rwz87\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-94xkd\" (UID: \"d6ae1ade-ceec-4b00-b028-1272c83dea9a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-94xkd" Oct 13 17:53:59 crc kubenswrapper[4720]: I1013 17:53:59.814500 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d6ae1ade-ceec-4b00-b028-1272c83dea9a-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-94xkd\" (UID: \"d6ae1ade-ceec-4b00-b028-1272c83dea9a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-94xkd" Oct 13 17:53:59 crc kubenswrapper[4720]: I1013 17:53:59.815175 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d6ae1ade-ceec-4b00-b028-1272c83dea9a-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-94xkd\" (UID: \"d6ae1ade-ceec-4b00-b028-1272c83dea9a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-94xkd" Oct 13 17:53:59 crc kubenswrapper[4720]: I1013 17:53:59.917005 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d6ae1ade-ceec-4b00-b028-1272c83dea9a-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-94xkd\" (UID: \"d6ae1ade-ceec-4b00-b028-1272c83dea9a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-94xkd" Oct 13 17:53:59 crc kubenswrapper[4720]: I1013 17:53:59.917176 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d6ae1ade-ceec-4b00-b028-1272c83dea9a-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-94xkd\" (UID: \"d6ae1ade-ceec-4b00-b028-1272c83dea9a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-94xkd" Oct 13 17:53:59 crc kubenswrapper[4720]: I1013 17:53:59.917490 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwz87\" (UniqueName: \"kubernetes.io/projected/d6ae1ade-ceec-4b00-b028-1272c83dea9a-kube-api-access-rwz87\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-94xkd\" (UID: \"d6ae1ade-ceec-4b00-b028-1272c83dea9a\") " 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-94xkd" Oct 13 17:53:59 crc kubenswrapper[4720]: I1013 17:53:59.925069 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d6ae1ade-ceec-4b00-b028-1272c83dea9a-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-94xkd\" (UID: \"d6ae1ade-ceec-4b00-b028-1272c83dea9a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-94xkd" Oct 13 17:53:59 crc kubenswrapper[4720]: I1013 17:53:59.925481 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d6ae1ade-ceec-4b00-b028-1272c83dea9a-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-94xkd\" (UID: \"d6ae1ade-ceec-4b00-b028-1272c83dea9a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-94xkd" Oct 13 17:53:59 crc kubenswrapper[4720]: I1013 17:53:59.947834 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwz87\" (UniqueName: \"kubernetes.io/projected/d6ae1ade-ceec-4b00-b028-1272c83dea9a-kube-api-access-rwz87\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-94xkd\" (UID: \"d6ae1ade-ceec-4b00-b028-1272c83dea9a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-94xkd" Oct 13 17:54:00 crc kubenswrapper[4720]: I1013 17:54:00.024657 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-94xkd" Oct 13 17:54:00 crc kubenswrapper[4720]: I1013 17:54:00.642952 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-94xkd"] Oct 13 17:54:01 crc kubenswrapper[4720]: I1013 17:54:01.634074 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-94xkd" event={"ID":"d6ae1ade-ceec-4b00-b028-1272c83dea9a","Type":"ContainerStarted","Data":"47a1197871c1d87701a3f5ac31da634616ee5fb295a22c71e43ddd9dddd51114"} Oct 13 17:54:01 crc kubenswrapper[4720]: I1013 17:54:01.635019 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-94xkd" event={"ID":"d6ae1ade-ceec-4b00-b028-1272c83dea9a","Type":"ContainerStarted","Data":"d0bdf8330b78a6aef53ca5804607c8b36859cdc2b0b32d3f545caeeb86de3041"} Oct 13 17:54:01 crc kubenswrapper[4720]: I1013 17:54:01.663936 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-94xkd" podStartSLOduration=2.171529761 podStartE2EDuration="2.663913761s" podCreationTimestamp="2025-10-13 17:53:59 +0000 UTC" firstStartedPulling="2025-10-13 17:54:00.646853878 +0000 UTC m=+1786.104104020" lastFinishedPulling="2025-10-13 17:54:01.139237888 +0000 UTC m=+1786.596488020" observedRunningTime="2025-10-13 17:54:01.654695703 +0000 UTC m=+1787.111945885" watchObservedRunningTime="2025-10-13 17:54:01.663913761 +0000 UTC m=+1787.121163923" Oct 13 17:54:10 crc kubenswrapper[4720]: I1013 17:54:10.740094 4720 generic.go:334] "Generic (PLEG): container finished" podID="d6ae1ade-ceec-4b00-b028-1272c83dea9a" containerID="47a1197871c1d87701a3f5ac31da634616ee5fb295a22c71e43ddd9dddd51114" exitCode=0 Oct 13 17:54:10 crc kubenswrapper[4720]: I1013 17:54:10.740168 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-94xkd" 
event={"ID":"d6ae1ade-ceec-4b00-b028-1272c83dea9a","Type":"ContainerDied","Data":"47a1197871c1d87701a3f5ac31da634616ee5fb295a22c71e43ddd9dddd51114"} Oct 13 17:54:12 crc kubenswrapper[4720]: I1013 17:54:12.311534 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-94xkd" Oct 13 17:54:12 crc kubenswrapper[4720]: I1013 17:54:12.383621 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwz87\" (UniqueName: \"kubernetes.io/projected/d6ae1ade-ceec-4b00-b028-1272c83dea9a-kube-api-access-rwz87\") pod \"d6ae1ade-ceec-4b00-b028-1272c83dea9a\" (UID: \"d6ae1ade-ceec-4b00-b028-1272c83dea9a\") " Oct 13 17:54:12 crc kubenswrapper[4720]: I1013 17:54:12.383906 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d6ae1ade-ceec-4b00-b028-1272c83dea9a-inventory\") pod \"d6ae1ade-ceec-4b00-b028-1272c83dea9a\" (UID: \"d6ae1ade-ceec-4b00-b028-1272c83dea9a\") " Oct 13 17:54:12 crc kubenswrapper[4720]: I1013 17:54:12.383930 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d6ae1ade-ceec-4b00-b028-1272c83dea9a-ssh-key\") pod \"d6ae1ade-ceec-4b00-b028-1272c83dea9a\" (UID: \"d6ae1ade-ceec-4b00-b028-1272c83dea9a\") " Oct 13 17:54:12 crc kubenswrapper[4720]: I1013 17:54:12.389885 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6ae1ade-ceec-4b00-b028-1272c83dea9a-kube-api-access-rwz87" (OuterVolumeSpecName: "kube-api-access-rwz87") pod "d6ae1ade-ceec-4b00-b028-1272c83dea9a" (UID: "d6ae1ade-ceec-4b00-b028-1272c83dea9a"). InnerVolumeSpecName "kube-api-access-rwz87". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:54:12 crc kubenswrapper[4720]: I1013 17:54:12.412607 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6ae1ade-ceec-4b00-b028-1272c83dea9a-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "d6ae1ade-ceec-4b00-b028-1272c83dea9a" (UID: "d6ae1ade-ceec-4b00-b028-1272c83dea9a"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:54:12 crc kubenswrapper[4720]: I1013 17:54:12.415068 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6ae1ade-ceec-4b00-b028-1272c83dea9a-inventory" (OuterVolumeSpecName: "inventory") pod "d6ae1ade-ceec-4b00-b028-1272c83dea9a" (UID: "d6ae1ade-ceec-4b00-b028-1272c83dea9a"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:54:12 crc kubenswrapper[4720]: I1013 17:54:12.486735 4720 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d6ae1ade-ceec-4b00-b028-1272c83dea9a-inventory\") on node \"crc\" DevicePath \"\"" Oct 13 17:54:12 crc kubenswrapper[4720]: I1013 17:54:12.486779 4720 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d6ae1ade-ceec-4b00-b028-1272c83dea9a-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 13 17:54:12 crc kubenswrapper[4720]: I1013 17:54:12.486792 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwz87\" (UniqueName: \"kubernetes.io/projected/d6ae1ade-ceec-4b00-b028-1272c83dea9a-kube-api-access-rwz87\") on node \"crc\" DevicePath \"\"" Oct 13 17:54:12 crc kubenswrapper[4720]: I1013 17:54:12.772359 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-94xkd" event={"ID":"d6ae1ade-ceec-4b00-b028-1272c83dea9a","Type":"ContainerDied","Data":"d0bdf8330b78a6aef53ca5804607c8b36859cdc2b0b32d3f545caeeb86de3041"} Oct 13 17:54:12 crc kubenswrapper[4720]: I1013 17:54:12.772423 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0bdf8330b78a6aef53ca5804607c8b36859cdc2b0b32d3f545caeeb86de3041" Oct 13 17:54:12 crc kubenswrapper[4720]: I1013 17:54:12.772519 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-94xkd" Oct 13 17:54:12 crc kubenswrapper[4720]: I1013 17:54:12.878126 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r964t"] Oct 13 17:54:12 crc kubenswrapper[4720]: E1013 17:54:12.879399 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6ae1ade-ceec-4b00-b028-1272c83dea9a" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Oct 13 17:54:12 crc kubenswrapper[4720]: I1013 17:54:12.879439 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6ae1ade-ceec-4b00-b028-1272c83dea9a" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Oct 13 17:54:12 crc kubenswrapper[4720]: I1013 17:54:12.879814 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6ae1ade-ceec-4b00-b028-1272c83dea9a" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Oct 13 17:54:12 crc kubenswrapper[4720]: I1013 17:54:12.881868 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r964t" Oct 13 17:54:12 crc kubenswrapper[4720]: I1013 17:54:12.892121 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 13 17:54:12 crc kubenswrapper[4720]: I1013 17:54:12.892551 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 13 17:54:12 crc kubenswrapper[4720]: I1013 17:54:12.892781 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 13 17:54:12 crc kubenswrapper[4720]: I1013 17:54:12.893601 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l2fds" Oct 13 17:54:12 crc kubenswrapper[4720]: I1013 17:54:12.901410 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r964t"] Oct 13 17:54:13 crc kubenswrapper[4720]: I1013 17:54:13.000742 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/28cc87d3-31ea-48dd-8169-3ac47061e244-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-r964t\" (UID: \"28cc87d3-31ea-48dd-8169-3ac47061e244\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r964t" Oct 13 17:54:13 crc kubenswrapper[4720]: I1013 17:54:13.000855 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/28cc87d3-31ea-48dd-8169-3ac47061e244-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-r964t\" (UID: \"28cc87d3-31ea-48dd-8169-3ac47061e244\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r964t" Oct 13 17:54:13 crc kubenswrapper[4720]: I1013 17:54:13.001010 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7p2wv\" (UniqueName: \"kubernetes.io/projected/28cc87d3-31ea-48dd-8169-3ac47061e244-kube-api-access-7p2wv\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-r964t\" (UID: \"28cc87d3-31ea-48dd-8169-3ac47061e244\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r964t" Oct 13 17:54:13 crc kubenswrapper[4720]: I1013 17:54:13.104601 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/28cc87d3-31ea-48dd-8169-3ac47061e244-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-r964t\" (UID: \"28cc87d3-31ea-48dd-8169-3ac47061e244\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r964t" Oct 13 17:54:13 crc kubenswrapper[4720]: I1013 17:54:13.106040 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/28cc87d3-31ea-48dd-8169-3ac47061e244-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-r964t\" (UID: \"28cc87d3-31ea-48dd-8169-3ac47061e244\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r964t" Oct 13 17:54:13 crc kubenswrapper[4720]: I1013 17:54:13.106224 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7p2wv\" (UniqueName: \"kubernetes.io/projected/28cc87d3-31ea-48dd-8169-3ac47061e244-kube-api-access-7p2wv\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-r964t\" (UID: 
\"28cc87d3-31ea-48dd-8169-3ac47061e244\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r964t" Oct 13 17:54:13 crc kubenswrapper[4720]: I1013 17:54:13.121055 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/28cc87d3-31ea-48dd-8169-3ac47061e244-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-r964t\" (UID: \"28cc87d3-31ea-48dd-8169-3ac47061e244\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r964t" Oct 13 17:54:13 crc kubenswrapper[4720]: I1013 17:54:13.130579 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7p2wv\" (UniqueName: \"kubernetes.io/projected/28cc87d3-31ea-48dd-8169-3ac47061e244-kube-api-access-7p2wv\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-r964t\" (UID: \"28cc87d3-31ea-48dd-8169-3ac47061e244\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r964t" Oct 13 17:54:13 crc kubenswrapper[4720]: I1013 17:54:13.132277 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/28cc87d3-31ea-48dd-8169-3ac47061e244-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-r964t\" (UID: \"28cc87d3-31ea-48dd-8169-3ac47061e244\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r964t" Oct 13 17:54:13 crc kubenswrapper[4720]: I1013 17:54:13.206302 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r964t" Oct 13 17:54:13 crc kubenswrapper[4720]: W1013 17:54:13.776364 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod28cc87d3_31ea_48dd_8169_3ac47061e244.slice/crio-45a79cbce442387406026b25eb6daf6ef8e7cc4b1df50747e22c59e5d8e7d00a WatchSource:0}: Error finding container 45a79cbce442387406026b25eb6daf6ef8e7cc4b1df50747e22c59e5d8e7d00a: Status 404 returned error can't find the container with id 45a79cbce442387406026b25eb6daf6ef8e7cc4b1df50747e22c59e5d8e7d00a Oct 13 17:54:13 crc kubenswrapper[4720]: I1013 17:54:13.777524 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r964t"] Oct 13 17:54:14 crc kubenswrapper[4720]: I1013 17:54:14.809439 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r964t" event={"ID":"28cc87d3-31ea-48dd-8169-3ac47061e244","Type":"ContainerStarted","Data":"969db59e1878aca648eb039cb43087c40a880826308ec782ea65b9e1ff22b984"} Oct 13 17:54:14 crc kubenswrapper[4720]: I1013 17:54:14.811162 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r964t" event={"ID":"28cc87d3-31ea-48dd-8169-3ac47061e244","Type":"ContainerStarted","Data":"45a79cbce442387406026b25eb6daf6ef8e7cc4b1df50747e22c59e5d8e7d00a"} Oct 13 17:54:14 crc kubenswrapper[4720]: I1013 17:54:14.832906 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r964t" podStartSLOduration=2.402867899 podStartE2EDuration="2.832885039s" podCreationTimestamp="2025-10-13 17:54:12 +0000 UTC" firstStartedPulling="2025-10-13 17:54:13.778683757 +0000 UTC m=+1799.235933919" lastFinishedPulling="2025-10-13 17:54:14.208700917 +0000 UTC m=+1799.665951059" observedRunningTime="2025-10-13 17:54:14.82983845 +0000 UTC m=+1800.287088612" 
watchObservedRunningTime="2025-10-13 17:54:14.832885039 +0000 UTC m=+1800.290135171" Oct 13 17:54:24 crc kubenswrapper[4720]: I1013 17:54:24.922857 4720 generic.go:334] "Generic (PLEG): container finished" podID="28cc87d3-31ea-48dd-8169-3ac47061e244" containerID="969db59e1878aca648eb039cb43087c40a880826308ec782ea65b9e1ff22b984" exitCode=0 Oct 13 17:54:24 crc kubenswrapper[4720]: I1013 17:54:24.923049 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r964t" event={"ID":"28cc87d3-31ea-48dd-8169-3ac47061e244","Type":"ContainerDied","Data":"969db59e1878aca648eb039cb43087c40a880826308ec782ea65b9e1ff22b984"} Oct 13 17:54:26 crc kubenswrapper[4720]: I1013 17:54:26.397491 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r964t" Oct 13 17:54:26 crc kubenswrapper[4720]: I1013 17:54:26.517576 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7p2wv\" (UniqueName: \"kubernetes.io/projected/28cc87d3-31ea-48dd-8169-3ac47061e244-kube-api-access-7p2wv\") pod \"28cc87d3-31ea-48dd-8169-3ac47061e244\" (UID: \"28cc87d3-31ea-48dd-8169-3ac47061e244\") " Oct 13 17:54:26 crc kubenswrapper[4720]: I1013 17:54:26.517765 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/28cc87d3-31ea-48dd-8169-3ac47061e244-ssh-key\") pod \"28cc87d3-31ea-48dd-8169-3ac47061e244\" (UID: \"28cc87d3-31ea-48dd-8169-3ac47061e244\") " Oct 13 17:54:26 crc kubenswrapper[4720]: I1013 17:54:26.518721 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/28cc87d3-31ea-48dd-8169-3ac47061e244-inventory\") pod \"28cc87d3-31ea-48dd-8169-3ac47061e244\" (UID: \"28cc87d3-31ea-48dd-8169-3ac47061e244\") " Oct 13 17:54:26 crc kubenswrapper[4720]: I1013 17:54:26.523117 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28cc87d3-31ea-48dd-8169-3ac47061e244-kube-api-access-7p2wv" (OuterVolumeSpecName: "kube-api-access-7p2wv") pod "28cc87d3-31ea-48dd-8169-3ac47061e244" (UID: "28cc87d3-31ea-48dd-8169-3ac47061e244"). InnerVolumeSpecName "kube-api-access-7p2wv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:54:26 crc kubenswrapper[4720]: I1013 17:54:26.546616 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28cc87d3-31ea-48dd-8169-3ac47061e244-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "28cc87d3-31ea-48dd-8169-3ac47061e244" (UID: "28cc87d3-31ea-48dd-8169-3ac47061e244"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:54:26 crc kubenswrapper[4720]: I1013 17:54:26.550035 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28cc87d3-31ea-48dd-8169-3ac47061e244-inventory" (OuterVolumeSpecName: "inventory") pod "28cc87d3-31ea-48dd-8169-3ac47061e244" (UID: "28cc87d3-31ea-48dd-8169-3ac47061e244"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:54:26 crc kubenswrapper[4720]: I1013 17:54:26.621600 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7p2wv\" (UniqueName: \"kubernetes.io/projected/28cc87d3-31ea-48dd-8169-3ac47061e244-kube-api-access-7p2wv\") on node \"crc\" DevicePath \"\"" Oct 13 17:54:26 crc kubenswrapper[4720]: I1013 17:54:26.621646 4720 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/28cc87d3-31ea-48dd-8169-3ac47061e244-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 13 17:54:26 crc kubenswrapper[4720]: I1013 17:54:26.621714 4720 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/28cc87d3-31ea-48dd-8169-3ac47061e244-inventory\") on node \"crc\" DevicePath \"\"" Oct 13 17:54:26 crc kubenswrapper[4720]: I1013 17:54:26.943995 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r964t" event={"ID":"28cc87d3-31ea-48dd-8169-3ac47061e244","Type":"ContainerDied","Data":"45a79cbce442387406026b25eb6daf6ef8e7cc4b1df50747e22c59e5d8e7d00a"} Oct 13 17:54:26 crc kubenswrapper[4720]: I1013 17:54:26.944276 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45a79cbce442387406026b25eb6daf6ef8e7cc4b1df50747e22c59e5d8e7d00a" Oct 13 17:54:26 crc kubenswrapper[4720]: I1013 17:54:26.944085 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-r964t" Oct 13 17:54:27 crc kubenswrapper[4720]: I1013 17:54:27.071416 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rcnp2"] Oct 13 17:54:27 crc kubenswrapper[4720]: E1013 17:54:27.072046 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28cc87d3-31ea-48dd-8169-3ac47061e244" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Oct 13 17:54:27 crc kubenswrapper[4720]: I1013 17:54:27.072133 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="28cc87d3-31ea-48dd-8169-3ac47061e244" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Oct 13 17:54:27 crc kubenswrapper[4720]: I1013 17:54:27.072381 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="28cc87d3-31ea-48dd-8169-3ac47061e244" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Oct 13 17:54:27 crc kubenswrapper[4720]: I1013 17:54:27.073083 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rcnp2" Oct 13 17:54:27 crc kubenswrapper[4720]: I1013 17:54:27.078733 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 13 17:54:27 crc kubenswrapper[4720]: I1013 17:54:27.079143 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Oct 13 17:54:27 crc kubenswrapper[4720]: I1013 17:54:27.079771 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rcnp2"] Oct 13 17:54:27 crc kubenswrapper[4720]: I1013 17:54:27.079284 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Oct 13 17:54:27 crc kubenswrapper[4720]: I1013 17:54:27.079407 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 13 17:54:27 crc kubenswrapper[4720]: I1013 17:54:27.079457 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l2fds" Oct 13 17:54:27 crc kubenswrapper[4720]: I1013 17:54:27.079735 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 13 17:54:27 crc kubenswrapper[4720]: I1013 17:54:27.079796 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Oct 13 17:54:27 crc kubenswrapper[4720]: I1013 17:54:27.081082 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Oct 13 17:54:27 crc kubenswrapper[4720]: I1013 17:54:27.129101 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b6dc194d-cfc2-4303-ad72-ead87650ea96-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-rcnp2\" (UID: \"b6dc194d-cfc2-4303-ad72-ead87650ea96\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rcnp2" Oct 13 17:54:27 crc kubenswrapper[4720]: I1013 17:54:27.129149 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6dc194d-cfc2-4303-ad72-ead87650ea96-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-rcnp2\" (UID: \"b6dc194d-cfc2-4303-ad72-ead87650ea96\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rcnp2" Oct 13 17:54:27 crc kubenswrapper[4720]: I1013 17:54:27.129205 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6dc194d-cfc2-4303-ad72-ead87650ea96-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-rcnp2\" (UID: \"b6dc194d-cfc2-4303-ad72-ead87650ea96\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rcnp2" Oct 13 17:54:27 crc kubenswrapper[4720]: I1013 17:54:27.129255 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b6dc194d-cfc2-4303-ad72-ead87650ea96-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-rcnp2\" (UID: 
\"b6dc194d-cfc2-4303-ad72-ead87650ea96\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rcnp2" Oct 13 17:54:27 crc kubenswrapper[4720]: I1013 17:54:27.129293 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b6dc194d-cfc2-4303-ad72-ead87650ea96-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-rcnp2\" (UID: \"b6dc194d-cfc2-4303-ad72-ead87650ea96\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rcnp2" Oct 13 17:54:27 crc kubenswrapper[4720]: I1013 17:54:27.129329 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6dc194d-cfc2-4303-ad72-ead87650ea96-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-rcnp2\" (UID: \"b6dc194d-cfc2-4303-ad72-ead87650ea96\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rcnp2" Oct 13 17:54:27 crc kubenswrapper[4720]: I1013 17:54:27.129381 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b6dc194d-cfc2-4303-ad72-ead87650ea96-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-rcnp2\" (UID: \"b6dc194d-cfc2-4303-ad72-ead87650ea96\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rcnp2" Oct 13 17:54:27 crc kubenswrapper[4720]: I1013 17:54:27.129409 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6dc194d-cfc2-4303-ad72-ead87650ea96-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-rcnp2\" (UID: \"b6dc194d-cfc2-4303-ad72-ead87650ea96\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rcnp2" Oct 13 17:54:27 crc kubenswrapper[4720]: I1013 17:54:27.129460 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6dc194d-cfc2-4303-ad72-ead87650ea96-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-rcnp2\" (UID: \"b6dc194d-cfc2-4303-ad72-ead87650ea96\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rcnp2" Oct 13 17:54:27 crc kubenswrapper[4720]: I1013 17:54:27.129498 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6dc194d-cfc2-4303-ad72-ead87650ea96-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-rcnp2\" (UID: \"b6dc194d-cfc2-4303-ad72-ead87650ea96\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rcnp2" Oct 13 17:54:27 crc kubenswrapper[4720]: I1013 17:54:27.129524 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6dds\" (UniqueName: \"kubernetes.io/projected/b6dc194d-cfc2-4303-ad72-ead87650ea96-kube-api-access-g6dds\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-rcnp2\" (UID: \"b6dc194d-cfc2-4303-ad72-ead87650ea96\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rcnp2" Oct 13 17:54:27 crc kubenswrapper[4720]: I1013 17:54:27.129553 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b6dc194d-cfc2-4303-ad72-ead87650ea96-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-rcnp2\" (UID: \"b6dc194d-cfc2-4303-ad72-ead87650ea96\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rcnp2" Oct 13 17:54:27 crc kubenswrapper[4720]: I1013 17:54:27.129609 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b6dc194d-cfc2-4303-ad72-ead87650ea96-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-rcnp2\" (UID: \"b6dc194d-cfc2-4303-ad72-ead87650ea96\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rcnp2" Oct 13 17:54:27 crc kubenswrapper[4720]: I1013 17:54:27.129635 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6dc194d-cfc2-4303-ad72-ead87650ea96-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-rcnp2\" (UID: \"b6dc194d-cfc2-4303-ad72-ead87650ea96\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rcnp2" Oct 13 17:54:27 crc kubenswrapper[4720]: I1013 17:54:27.230936 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b6dc194d-cfc2-4303-ad72-ead87650ea96-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-rcnp2\" (UID: \"b6dc194d-cfc2-4303-ad72-ead87650ea96\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rcnp2" Oct 13 17:54:27 crc kubenswrapper[4720]: I1013 17:54:27.230981 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6dc194d-cfc2-4303-ad72-ead87650ea96-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-rcnp2\" (UID: \"b6dc194d-cfc2-4303-ad72-ead87650ea96\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rcnp2" Oct 13 17:54:27 crc kubenswrapper[4720]: I1013 17:54:27.231030 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6dc194d-cfc2-4303-ad72-ead87650ea96-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-rcnp2\" (UID: \"b6dc194d-cfc2-4303-ad72-ead87650ea96\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rcnp2" Oct 13 17:54:27 crc kubenswrapper[4720]: I1013 17:54:27.231059 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6dc194d-cfc2-4303-ad72-ead87650ea96-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-rcnp2\" (UID: \"b6dc194d-cfc2-4303-ad72-ead87650ea96\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rcnp2" Oct 13 17:54:27 crc kubenswrapper[4720]: I1013 17:54:27.231077 4720 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6dds\" (UniqueName: \"kubernetes.io/projected/b6dc194d-cfc2-4303-ad72-ead87650ea96-kube-api-access-g6dds\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-rcnp2\" (UID: \"b6dc194d-cfc2-4303-ad72-ead87650ea96\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rcnp2" Oct 13 17:54:27 crc kubenswrapper[4720]: I1013 17:54:27.231097 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b6dc194d-cfc2-4303-ad72-ead87650ea96-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-rcnp2\" (UID: \"b6dc194d-cfc2-4303-ad72-ead87650ea96\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rcnp2" Oct 13 17:54:27 crc kubenswrapper[4720]: I1013 17:54:27.231150 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b6dc194d-cfc2-4303-ad72-ead87650ea96-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-rcnp2\" (UID: \"b6dc194d-cfc2-4303-ad72-ead87650ea96\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rcnp2" Oct 13 17:54:27 crc kubenswrapper[4720]: I1013 17:54:27.231175 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6dc194d-cfc2-4303-ad72-ead87650ea96-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-rcnp2\" (UID: \"b6dc194d-cfc2-4303-ad72-ead87650ea96\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rcnp2" Oct 13 17:54:27 crc kubenswrapper[4720]: I1013 17:54:27.231238 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b6dc194d-cfc2-4303-ad72-ead87650ea96-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-rcnp2\" (UID: \"b6dc194d-cfc2-4303-ad72-ead87650ea96\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rcnp2" Oct 13 17:54:27 crc kubenswrapper[4720]: I1013 17:54:27.231253 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6dc194d-cfc2-4303-ad72-ead87650ea96-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-rcnp2\" (UID: \"b6dc194d-cfc2-4303-ad72-ead87650ea96\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rcnp2" Oct 13 17:54:27 crc kubenswrapper[4720]: I1013 17:54:27.231272 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6dc194d-cfc2-4303-ad72-ead87650ea96-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-rcnp2\" (UID: \"b6dc194d-cfc2-4303-ad72-ead87650ea96\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rcnp2" Oct 13 17:54:27 crc kubenswrapper[4720]: I1013 17:54:27.231290 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b6dc194d-cfc2-4303-ad72-ead87650ea96-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-rcnp2\" (UID: 
\"b6dc194d-cfc2-4303-ad72-ead87650ea96\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rcnp2" Oct 13 17:54:27 crc kubenswrapper[4720]: I1013 17:54:27.231314 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b6dc194d-cfc2-4303-ad72-ead87650ea96-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-rcnp2\" (UID: \"b6dc194d-cfc2-4303-ad72-ead87650ea96\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rcnp2" Oct 13 17:54:27 crc kubenswrapper[4720]: I1013 17:54:27.231335 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6dc194d-cfc2-4303-ad72-ead87650ea96-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-rcnp2\" (UID: \"b6dc194d-cfc2-4303-ad72-ead87650ea96\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rcnp2" Oct 13 17:54:27 crc kubenswrapper[4720]: I1013 17:54:27.240683 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b6dc194d-cfc2-4303-ad72-ead87650ea96-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-rcnp2\" (UID: \"b6dc194d-cfc2-4303-ad72-ead87650ea96\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rcnp2" Oct 13 17:54:27 crc kubenswrapper[4720]: I1013 17:54:27.241435 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6dc194d-cfc2-4303-ad72-ead87650ea96-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-rcnp2\" (UID: \"b6dc194d-cfc2-4303-ad72-ead87650ea96\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rcnp2" Oct 13 17:54:27 crc kubenswrapper[4720]: I1013 17:54:27.241493 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6dc194d-cfc2-4303-ad72-ead87650ea96-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-rcnp2\" (UID: \"b6dc194d-cfc2-4303-ad72-ead87650ea96\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rcnp2" Oct 13 17:54:27 crc kubenswrapper[4720]: I1013 17:54:27.241570 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b6dc194d-cfc2-4303-ad72-ead87650ea96-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-rcnp2\" (UID: \"b6dc194d-cfc2-4303-ad72-ead87650ea96\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rcnp2" Oct 13 17:54:27 crc kubenswrapper[4720]: I1013 17:54:27.241765 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b6dc194d-cfc2-4303-ad72-ead87650ea96-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-rcnp2\" (UID: \"b6dc194d-cfc2-4303-ad72-ead87650ea96\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rcnp2" Oct 13 17:54:27 crc kubenswrapper[4720]: I1013 17:54:27.246721 4720 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6dc194d-cfc2-4303-ad72-ead87650ea96-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-rcnp2\" (UID: \"b6dc194d-cfc2-4303-ad72-ead87650ea96\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rcnp2" Oct 13 17:54:27 crc kubenswrapper[4720]: I1013 17:54:27.247285 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b6dc194d-cfc2-4303-ad72-ead87650ea96-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-rcnp2\" (UID: \"b6dc194d-cfc2-4303-ad72-ead87650ea96\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rcnp2" Oct 13 17:54:27 crc kubenswrapper[4720]: I1013 17:54:27.247303 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6dc194d-cfc2-4303-ad72-ead87650ea96-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-rcnp2\" (UID: \"b6dc194d-cfc2-4303-ad72-ead87650ea96\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rcnp2" Oct 13 17:54:27 crc kubenswrapper[4720]: I1013 17:54:27.247319 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b6dc194d-cfc2-4303-ad72-ead87650ea96-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-rcnp2\" (UID: \"b6dc194d-cfc2-4303-ad72-ead87650ea96\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rcnp2" Oct 13 17:54:27 crc kubenswrapper[4720]: I1013 17:54:27.247938 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6dc194d-cfc2-4303-ad72-ead87650ea96-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-rcnp2\" (UID: \"b6dc194d-cfc2-4303-ad72-ead87650ea96\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rcnp2" Oct 13 17:54:27 crc kubenswrapper[4720]: I1013 17:54:27.248926 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b6dc194d-cfc2-4303-ad72-ead87650ea96-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-rcnp2\" (UID: \"b6dc194d-cfc2-4303-ad72-ead87650ea96\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rcnp2" Oct 13 17:54:27 crc kubenswrapper[4720]: I1013 17:54:27.249565 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6dc194d-cfc2-4303-ad72-ead87650ea96-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-rcnp2\" (UID: \"b6dc194d-cfc2-4303-ad72-ead87650ea96\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rcnp2" Oct 13 17:54:27 crc kubenswrapper[4720]: I1013 17:54:27.249932 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6dc194d-cfc2-4303-ad72-ead87650ea96-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-rcnp2\" (UID: \"b6dc194d-cfc2-4303-ad72-ead87650ea96\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rcnp2" Oct 13 17:54:27 crc kubenswrapper[4720]: I1013 17:54:27.252171 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6dds\" (UniqueName: \"kubernetes.io/projected/b6dc194d-cfc2-4303-ad72-ead87650ea96-kube-api-access-g6dds\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-rcnp2\" (UID: \"b6dc194d-cfc2-4303-ad72-ead87650ea96\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rcnp2" Oct 13 17:54:27 crc kubenswrapper[4720]: I1013 17:54:27.396635 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rcnp2" Oct 13 17:54:28 crc kubenswrapper[4720]: I1013 17:54:28.028379 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rcnp2"] Oct 13 17:54:28 crc kubenswrapper[4720]: I1013 17:54:28.966689 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rcnp2" event={"ID":"b6dc194d-cfc2-4303-ad72-ead87650ea96","Type":"ContainerStarted","Data":"91a7291a7ba3aae24518c590b48424295f14d6ce4fc29134017824847d18eafa"} Oct 13 17:54:29 crc kubenswrapper[4720]: I1013 17:54:29.986758 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rcnp2" event={"ID":"b6dc194d-cfc2-4303-ad72-ead87650ea96","Type":"ContainerStarted","Data":"56d582f976aa3f6162e8a720ed53c5876c61affb217487ee22fbee343c3e47c7"} Oct 13 17:54:30 crc kubenswrapper[4720]: I1013 17:54:30.016389 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rcnp2" podStartSLOduration=2.312596768 podStartE2EDuration="3.016373494s" podCreationTimestamp="2025-10-13 17:54:27 +0000 UTC" firstStartedPulling="2025-10-13 17:54:28.015093907 +0000 UTC m=+1813.472344049" lastFinishedPulling="2025-10-13 17:54:28.718870603 +0000 UTC m=+1814.176120775" observedRunningTime="2025-10-13 17:54:30.014274889 +0000 UTC m=+1815.471525041" watchObservedRunningTime="2025-10-13 17:54:30.016373494 +0000 UTC m=+1815.473623616" Oct 13 17:55:13 crc kubenswrapper[4720]: I1013 17:55:13.444877 4720 generic.go:334] "Generic (PLEG): container finished" podID="b6dc194d-cfc2-4303-ad72-ead87650ea96" containerID="56d582f976aa3f6162e8a720ed53c5876c61affb217487ee22fbee343c3e47c7" exitCode=0 Oct 13 17:55:13 crc kubenswrapper[4720]: I1013 17:55:13.445138 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rcnp2" event={"ID":"b6dc194d-cfc2-4303-ad72-ead87650ea96","Type":"ContainerDied","Data":"56d582f976aa3f6162e8a720ed53c5876c61affb217487ee22fbee343c3e47c7"} Oct 13 17:55:14 crc kubenswrapper[4720]: I1013 17:55:14.959413 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rcnp2" Oct 13 17:55:15 crc kubenswrapper[4720]: I1013 17:55:15.072102 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b6dc194d-cfc2-4303-ad72-ead87650ea96-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"b6dc194d-cfc2-4303-ad72-ead87650ea96\" (UID: \"b6dc194d-cfc2-4303-ad72-ead87650ea96\") " Oct 13 17:55:15 crc kubenswrapper[4720]: I1013 17:55:15.072170 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b6dc194d-cfc2-4303-ad72-ead87650ea96-openstack-edpm-ipam-ovn-default-certs-0\") pod \"b6dc194d-cfc2-4303-ad72-ead87650ea96\" (UID: \"b6dc194d-cfc2-4303-ad72-ead87650ea96\") " Oct 13 17:55:15 crc kubenswrapper[4720]: I1013 17:55:15.072239 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6dc194d-cfc2-4303-ad72-ead87650ea96-telemetry-combined-ca-bundle\") pod \"b6dc194d-cfc2-4303-ad72-ead87650ea96\" (UID: \"b6dc194d-cfc2-4303-ad72-ead87650ea96\") " Oct 13 17:55:15 crc kubenswrapper[4720]: I1013 17:55:15.072306 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6dc194d-cfc2-4303-ad72-ead87650ea96-neutron-metadata-combined-ca-bundle\") pod \"b6dc194d-cfc2-4303-ad72-ead87650ea96\" (UID: \"b6dc194d-cfc2-4303-ad72-ead87650ea96\") " Oct 13 17:55:15 crc kubenswrapper[4720]: I1013 17:55:15.072335 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6dc194d-cfc2-4303-ad72-ead87650ea96-libvirt-combined-ca-bundle\") pod \"b6dc194d-cfc2-4303-ad72-ead87650ea96\" (UID: \"b6dc194d-cfc2-4303-ad72-ead87650ea96\") " Oct 13 17:55:15 crc kubenswrapper[4720]: I1013 17:55:15.072355 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b6dc194d-cfc2-4303-ad72-ead87650ea96-ssh-key\") pod \"b6dc194d-cfc2-4303-ad72-ead87650ea96\" (UID: \"b6dc194d-cfc2-4303-ad72-ead87650ea96\") " Oct 13 17:55:15 crc kubenswrapper[4720]: I1013 17:55:15.072386 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6dds\" (UniqueName: \"kubernetes.io/projected/b6dc194d-cfc2-4303-ad72-ead87650ea96-kube-api-access-g6dds\") pod \"b6dc194d-cfc2-4303-ad72-ead87650ea96\" (UID: \"b6dc194d-cfc2-4303-ad72-ead87650ea96\") " Oct 13 17:55:15 crc kubenswrapper[4720]: I1013 17:55:15.072409 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6dc194d-cfc2-4303-ad72-ead87650ea96-ovn-combined-ca-bundle\") pod \"b6dc194d-cfc2-4303-ad72-ead87650ea96\" (UID: \"b6dc194d-cfc2-4303-ad72-ead87650ea96\") " Oct 13 17:55:15 crc kubenswrapper[4720]: I1013 17:55:15.072451 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6dc194d-cfc2-4303-ad72-ead87650ea96-repo-setup-combined-ca-bundle\") pod \"b6dc194d-cfc2-4303-ad72-ead87650ea96\" (UID: \"b6dc194d-cfc2-4303-ad72-ead87650ea96\") " Oct 13 17:55:15 crc 
kubenswrapper[4720]: I1013 17:55:15.072481 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6dc194d-cfc2-4303-ad72-ead87650ea96-nova-combined-ca-bundle\") pod \"b6dc194d-cfc2-4303-ad72-ead87650ea96\" (UID: \"b6dc194d-cfc2-4303-ad72-ead87650ea96\") " Oct 13 17:55:15 crc kubenswrapper[4720]: I1013 17:55:15.072512 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b6dc194d-cfc2-4303-ad72-ead87650ea96-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"b6dc194d-cfc2-4303-ad72-ead87650ea96\" (UID: \"b6dc194d-cfc2-4303-ad72-ead87650ea96\") " Oct 13 17:55:15 crc kubenswrapper[4720]: I1013 17:55:15.072534 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b6dc194d-cfc2-4303-ad72-ead87650ea96-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"b6dc194d-cfc2-4303-ad72-ead87650ea96\" (UID: \"b6dc194d-cfc2-4303-ad72-ead87650ea96\") " Oct 13 17:55:15 crc kubenswrapper[4720]: I1013 17:55:15.072592 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b6dc194d-cfc2-4303-ad72-ead87650ea96-inventory\") pod \"b6dc194d-cfc2-4303-ad72-ead87650ea96\" (UID: \"b6dc194d-cfc2-4303-ad72-ead87650ea96\") " Oct 13 17:55:15 crc kubenswrapper[4720]: I1013 17:55:15.072610 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6dc194d-cfc2-4303-ad72-ead87650ea96-bootstrap-combined-ca-bundle\") pod \"b6dc194d-cfc2-4303-ad72-ead87650ea96\" (UID: \"b6dc194d-cfc2-4303-ad72-ead87650ea96\") " Oct 13 17:55:15 crc kubenswrapper[4720]: I1013 17:55:15.079280 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6dc194d-cfc2-4303-ad72-ead87650ea96-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "b6dc194d-cfc2-4303-ad72-ead87650ea96" (UID: "b6dc194d-cfc2-4303-ad72-ead87650ea96"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:55:15 crc kubenswrapper[4720]: I1013 17:55:15.079351 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6dc194d-cfc2-4303-ad72-ead87650ea96-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "b6dc194d-cfc2-4303-ad72-ead87650ea96" (UID: "b6dc194d-cfc2-4303-ad72-ead87650ea96"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:55:15 crc kubenswrapper[4720]: I1013 17:55:15.079926 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6dc194d-cfc2-4303-ad72-ead87650ea96-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "b6dc194d-cfc2-4303-ad72-ead87650ea96" (UID: "b6dc194d-cfc2-4303-ad72-ead87650ea96"). InnerVolumeSpecName "telemetry-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:55:15 crc kubenswrapper[4720]: I1013 17:55:15.079941 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6dc194d-cfc2-4303-ad72-ead87650ea96-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "b6dc194d-cfc2-4303-ad72-ead87650ea96" (UID: "b6dc194d-cfc2-4303-ad72-ead87650ea96"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:55:15 crc kubenswrapper[4720]: I1013 17:55:15.080174 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6dc194d-cfc2-4303-ad72-ead87650ea96-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "b6dc194d-cfc2-4303-ad72-ead87650ea96" (UID: "b6dc194d-cfc2-4303-ad72-ead87650ea96"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:55:15 crc kubenswrapper[4720]: I1013 17:55:15.081006 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6dc194d-cfc2-4303-ad72-ead87650ea96-kube-api-access-g6dds" (OuterVolumeSpecName: "kube-api-access-g6dds") pod "b6dc194d-cfc2-4303-ad72-ead87650ea96" (UID: "b6dc194d-cfc2-4303-ad72-ead87650ea96"). InnerVolumeSpecName "kube-api-access-g6dds". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:55:15 crc kubenswrapper[4720]: I1013 17:55:15.083644 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6dc194d-cfc2-4303-ad72-ead87650ea96-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "b6dc194d-cfc2-4303-ad72-ead87650ea96" (UID: "b6dc194d-cfc2-4303-ad72-ead87650ea96"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:55:15 crc kubenswrapper[4720]: I1013 17:55:15.083668 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6dc194d-cfc2-4303-ad72-ead87650ea96-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "b6dc194d-cfc2-4303-ad72-ead87650ea96" (UID: "b6dc194d-cfc2-4303-ad72-ead87650ea96"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:55:15 crc kubenswrapper[4720]: I1013 17:55:15.084358 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6dc194d-cfc2-4303-ad72-ead87650ea96-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "b6dc194d-cfc2-4303-ad72-ead87650ea96" (UID: "b6dc194d-cfc2-4303-ad72-ead87650ea96"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:55:15 crc kubenswrapper[4720]: I1013 17:55:15.086618 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6dc194d-cfc2-4303-ad72-ead87650ea96-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "b6dc194d-cfc2-4303-ad72-ead87650ea96" (UID: "b6dc194d-cfc2-4303-ad72-ead87650ea96"). InnerVolumeSpecName "nova-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:55:15 crc kubenswrapper[4720]: I1013 17:55:15.087487 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6dc194d-cfc2-4303-ad72-ead87650ea96-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "b6dc194d-cfc2-4303-ad72-ead87650ea96" (UID: "b6dc194d-cfc2-4303-ad72-ead87650ea96"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:55:15 crc kubenswrapper[4720]: I1013 17:55:15.090404 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6dc194d-cfc2-4303-ad72-ead87650ea96-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "b6dc194d-cfc2-4303-ad72-ead87650ea96" (UID: "b6dc194d-cfc2-4303-ad72-ead87650ea96"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:55:15 crc kubenswrapper[4720]: I1013 17:55:15.108530 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6dc194d-cfc2-4303-ad72-ead87650ea96-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b6dc194d-cfc2-4303-ad72-ead87650ea96" (UID: "b6dc194d-cfc2-4303-ad72-ead87650ea96"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:55:15 crc kubenswrapper[4720]: I1013 17:55:15.111403 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6dc194d-cfc2-4303-ad72-ead87650ea96-inventory" (OuterVolumeSpecName: "inventory") pod "b6dc194d-cfc2-4303-ad72-ead87650ea96" (UID: "b6dc194d-cfc2-4303-ad72-ead87650ea96"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:55:15 crc kubenswrapper[4720]: I1013 17:55:15.175137 4720 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6dc194d-cfc2-4303-ad72-ead87650ea96-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 17:55:15 crc kubenswrapper[4720]: I1013 17:55:15.175213 4720 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6dc194d-cfc2-4303-ad72-ead87650ea96-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 17:55:15 crc kubenswrapper[4720]: I1013 17:55:15.175237 4720 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6dc194d-cfc2-4303-ad72-ead87650ea96-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 17:55:15 crc kubenswrapper[4720]: I1013 17:55:15.175255 4720 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b6dc194d-cfc2-4303-ad72-ead87650ea96-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 13 17:55:15 crc kubenswrapper[4720]: I1013 17:55:15.175273 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g6dds\" (UniqueName: \"kubernetes.io/projected/b6dc194d-cfc2-4303-ad72-ead87650ea96-kube-api-access-g6dds\") on node \"crc\" DevicePath \"\"" Oct 13 17:55:15 crc kubenswrapper[4720]: I1013 17:55:15.175290 4720 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6dc194d-cfc2-4303-ad72-ead87650ea96-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 17:55:15 crc kubenswrapper[4720]: I1013 17:55:15.175307 4720 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6dc194d-cfc2-4303-ad72-ead87650ea96-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 17:55:15 crc kubenswrapper[4720]: I1013 17:55:15.175325 4720 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6dc194d-cfc2-4303-ad72-ead87650ea96-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 17:55:15 crc kubenswrapper[4720]: I1013 17:55:15.175344 4720 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b6dc194d-cfc2-4303-ad72-ead87650ea96-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Oct 13 17:55:15 crc kubenswrapper[4720]: I1013 17:55:15.175364 4720 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b6dc194d-cfc2-4303-ad72-ead87650ea96-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Oct 13 17:55:15 crc kubenswrapper[4720]: I1013 17:55:15.175381 4720 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b6dc194d-cfc2-4303-ad72-ead87650ea96-inventory\") on node \"crc\" DevicePath \"\"" Oct 13 17:55:15 crc kubenswrapper[4720]: I1013 17:55:15.175398 4720 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6dc194d-cfc2-4303-ad72-ead87650ea96-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 17:55:15 
crc kubenswrapper[4720]: I1013 17:55:15.175418 4720 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b6dc194d-cfc2-4303-ad72-ead87650ea96-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Oct 13 17:55:15 crc kubenswrapper[4720]: I1013 17:55:15.175439 4720 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b6dc194d-cfc2-4303-ad72-ead87650ea96-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Oct 13 17:55:15 crc kubenswrapper[4720]: I1013 17:55:15.473766 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rcnp2" event={"ID":"b6dc194d-cfc2-4303-ad72-ead87650ea96","Type":"ContainerDied","Data":"91a7291a7ba3aae24518c590b48424295f14d6ce4fc29134017824847d18eafa"} Oct 13 17:55:15 crc kubenswrapper[4720]: I1013 17:55:15.474127 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91a7291a7ba3aae24518c590b48424295f14d6ce4fc29134017824847d18eafa" Oct 13 17:55:15 crc kubenswrapper[4720]: I1013 17:55:15.473860 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-rcnp2" Oct 13 17:55:15 crc kubenswrapper[4720]: I1013 17:55:15.597505 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-fhdbt"] Oct 13 17:55:15 crc kubenswrapper[4720]: E1013 17:55:15.599404 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6dc194d-cfc2-4303-ad72-ead87650ea96" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Oct 13 17:55:15 crc kubenswrapper[4720]: I1013 17:55:15.599444 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6dc194d-cfc2-4303-ad72-ead87650ea96" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Oct 13 17:55:15 crc kubenswrapper[4720]: I1013 17:55:15.599727 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6dc194d-cfc2-4303-ad72-ead87650ea96" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Oct 13 17:55:15 crc kubenswrapper[4720]: I1013 17:55:15.600615 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fhdbt" Oct 13 17:55:15 crc kubenswrapper[4720]: I1013 17:55:15.603691 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 13 17:55:15 crc kubenswrapper[4720]: I1013 17:55:15.603873 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l2fds" Oct 13 17:55:15 crc kubenswrapper[4720]: I1013 17:55:15.603883 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 13 17:55:15 crc kubenswrapper[4720]: I1013 17:55:15.604078 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Oct 13 17:55:15 crc kubenswrapper[4720]: I1013 17:55:15.604113 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 13 17:55:15 crc kubenswrapper[4720]: I1013 17:55:15.612613 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-fhdbt"] Oct 13 17:55:15 crc kubenswrapper[4720]: I1013 17:55:15.686527 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/64141929-3427-4673-9aea-5ce314ceb23b-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fhdbt\" (UID: \"64141929-3427-4673-9aea-5ce314ceb23b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fhdbt" Oct 13 17:55:15 crc kubenswrapper[4720]: I1013 17:55:15.686605 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6r9n\" (UniqueName: \"kubernetes.io/projected/64141929-3427-4673-9aea-5ce314ceb23b-kube-api-access-w6r9n\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fhdbt\" (UID: \"64141929-3427-4673-9aea-5ce314ceb23b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fhdbt" Oct 13 17:55:15 crc kubenswrapper[4720]: I1013 17:55:15.686862 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/64141929-3427-4673-9aea-5ce314ceb23b-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fhdbt\" (UID: \"64141929-3427-4673-9aea-5ce314ceb23b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fhdbt" Oct 13 17:55:15 crc kubenswrapper[4720]: I1013 17:55:15.686938 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/64141929-3427-4673-9aea-5ce314ceb23b-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fhdbt\" (UID: \"64141929-3427-4673-9aea-5ce314ceb23b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fhdbt" Oct 13 17:55:15 crc kubenswrapper[4720]: I1013 17:55:15.687388 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64141929-3427-4673-9aea-5ce314ceb23b-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fhdbt\" (UID: \"64141929-3427-4673-9aea-5ce314ceb23b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fhdbt" Oct 13 17:55:15 crc kubenswrapper[4720]: I1013 17:55:15.789621 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" 
(UniqueName: \"kubernetes.io/configmap/64141929-3427-4673-9aea-5ce314ceb23b-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fhdbt\" (UID: \"64141929-3427-4673-9aea-5ce314ceb23b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fhdbt" Oct 13 17:55:15 crc kubenswrapper[4720]: I1013 17:55:15.790063 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6r9n\" (UniqueName: \"kubernetes.io/projected/64141929-3427-4673-9aea-5ce314ceb23b-kube-api-access-w6r9n\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fhdbt\" (UID: \"64141929-3427-4673-9aea-5ce314ceb23b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fhdbt" Oct 13 17:55:15 crc kubenswrapper[4720]: I1013 17:55:15.790340 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/64141929-3427-4673-9aea-5ce314ceb23b-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fhdbt\" (UID: \"64141929-3427-4673-9aea-5ce314ceb23b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fhdbt" Oct 13 17:55:15 crc kubenswrapper[4720]: I1013 17:55:15.790540 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/64141929-3427-4673-9aea-5ce314ceb23b-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fhdbt\" (UID: \"64141929-3427-4673-9aea-5ce314ceb23b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fhdbt" Oct 13 17:55:15 crc kubenswrapper[4720]: I1013 17:55:15.790875 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64141929-3427-4673-9aea-5ce314ceb23b-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fhdbt\" (UID: \"64141929-3427-4673-9aea-5ce314ceb23b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fhdbt" Oct 13 17:55:15 crc kubenswrapper[4720]: I1013 17:55:15.791345 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/64141929-3427-4673-9aea-5ce314ceb23b-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fhdbt\" (UID: \"64141929-3427-4673-9aea-5ce314ceb23b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fhdbt" Oct 13 17:55:15 crc kubenswrapper[4720]: I1013 17:55:15.796729 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/64141929-3427-4673-9aea-5ce314ceb23b-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fhdbt\" (UID: \"64141929-3427-4673-9aea-5ce314ceb23b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fhdbt" Oct 13 17:55:15 crc kubenswrapper[4720]: I1013 17:55:15.798198 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64141929-3427-4673-9aea-5ce314ceb23b-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fhdbt\" (UID: \"64141929-3427-4673-9aea-5ce314ceb23b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fhdbt" Oct 13 17:55:15 crc kubenswrapper[4720]: I1013 17:55:15.802121 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/64141929-3427-4673-9aea-5ce314ceb23b-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fhdbt\" (UID: \"64141929-3427-4673-9aea-5ce314ceb23b\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fhdbt" Oct 13 17:55:15 crc kubenswrapper[4720]: I1013 17:55:15.828685 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6r9n\" (UniqueName: \"kubernetes.io/projected/64141929-3427-4673-9aea-5ce314ceb23b-kube-api-access-w6r9n\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fhdbt\" (UID: \"64141929-3427-4673-9aea-5ce314ceb23b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fhdbt" Oct 13 17:55:15 crc kubenswrapper[4720]: I1013 17:55:15.929811 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fhdbt" Oct 13 17:55:16 crc kubenswrapper[4720]: I1013 17:55:16.535378 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-fhdbt"] Oct 13 17:55:16 crc kubenswrapper[4720]: I1013 17:55:16.550591 4720 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 13 17:55:17 crc kubenswrapper[4720]: I1013 17:55:17.508136 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fhdbt" event={"ID":"64141929-3427-4673-9aea-5ce314ceb23b","Type":"ContainerStarted","Data":"c7506913db6e9e6fe11da503249bb49631eccc9886a5d03fe07938307dba03dd"} Oct 13 17:55:18 crc kubenswrapper[4720]: I1013 17:55:18.519084 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fhdbt" event={"ID":"64141929-3427-4673-9aea-5ce314ceb23b","Type":"ContainerStarted","Data":"fdd101b237542f6c9b05ec82696ac07658688d3958c72a86c3e3c711853d925c"} Oct 13 17:55:18 crc kubenswrapper[4720]: I1013 17:55:18.543088 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fhdbt" podStartSLOduration=2.835260671 podStartE2EDuration="3.543074551s" podCreationTimestamp="2025-10-13 17:55:15 +0000 UTC" firstStartedPulling="2025-10-13 17:55:16.549981085 +0000 UTC m=+1862.007231257" lastFinishedPulling="2025-10-13 17:55:17.257794965 +0000 UTC m=+1862.715045137" observedRunningTime="2025-10-13 17:55:18.536908182 +0000 UTC m=+1863.994158314" watchObservedRunningTime="2025-10-13 17:55:18.543074551 +0000 UTC m=+1864.000324683" Oct 13 17:55:19 crc kubenswrapper[4720]: I1013 17:55:19.718524 4720 scope.go:117] "RemoveContainer" containerID="bcba894a9effa1dd95db1c447196363784d84cba76c1516bcb0a92c39005d101" Oct 13 17:55:19 crc kubenswrapper[4720]: I1013 17:55:19.751226 4720 scope.go:117] "RemoveContainer" containerID="f43497e4b13f6d6e7e32f4a2f131fbc763edbae9324a1ed576cc9339b51e43fb" Oct 13 17:55:19 crc kubenswrapper[4720]: I1013 17:55:19.817697 4720 scope.go:117] "RemoveContainer" containerID="7002160e37252c1bab92399753f3da7593c0c3c36e1982a67e59627484e1d3dc" Oct 13 17:56:15 crc kubenswrapper[4720]: I1013 17:56:15.212711 4720 patch_prober.go:28] interesting pod/machine-config-daemon-htwnl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 17:56:15 crc kubenswrapper[4720]: I1013 17:56:15.213465 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 17:56:28 crc kubenswrapper[4720]: I1013 17:56:28.261090 4720 generic.go:334] "Generic (PLEG): container finished" podID="64141929-3427-4673-9aea-5ce314ceb23b" containerID="fdd101b237542f6c9b05ec82696ac07658688d3958c72a86c3e3c711853d925c" exitCode=0 Oct 13 17:56:28 crc kubenswrapper[4720]: I1013 17:56:28.261227 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fhdbt" event={"ID":"64141929-3427-4673-9aea-5ce314ceb23b","Type":"ContainerDied","Data":"fdd101b237542f6c9b05ec82696ac07658688d3958c72a86c3e3c711853d925c"} Oct 13 17:56:29 crc kubenswrapper[4720]: I1013 17:56:29.719150 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fhdbt" Oct 13 17:56:29 crc kubenswrapper[4720]: I1013 17:56:29.840687 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64141929-3427-4673-9aea-5ce314ceb23b-ovn-combined-ca-bundle\") pod \"64141929-3427-4673-9aea-5ce314ceb23b\" (UID: \"64141929-3427-4673-9aea-5ce314ceb23b\") " Oct 13 17:56:29 crc kubenswrapper[4720]: I1013 17:56:29.840816 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/64141929-3427-4673-9aea-5ce314ceb23b-ovncontroller-config-0\") pod \"64141929-3427-4673-9aea-5ce314ceb23b\" (UID: \"64141929-3427-4673-9aea-5ce314ceb23b\") " Oct 13 17:56:29 crc kubenswrapper[4720]: I1013 17:56:29.840901 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/64141929-3427-4673-9aea-5ce314ceb23b-ssh-key\") pod \"64141929-3427-4673-9aea-5ce314ceb23b\" (UID: \"64141929-3427-4673-9aea-5ce314ceb23b\") " Oct 13 17:56:29 crc kubenswrapper[4720]: I1013 17:56:29.840965 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/64141929-3427-4673-9aea-5ce314ceb23b-inventory\") pod \"64141929-3427-4673-9aea-5ce314ceb23b\" (UID: \"64141929-3427-4673-9aea-5ce314ceb23b\") " Oct 13 17:56:29 crc kubenswrapper[4720]: I1013 17:56:29.841006 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6r9n\" (UniqueName: \"kubernetes.io/projected/64141929-3427-4673-9aea-5ce314ceb23b-kube-api-access-w6r9n\") pod \"64141929-3427-4673-9aea-5ce314ceb23b\" (UID: \"64141929-3427-4673-9aea-5ce314ceb23b\") " Oct 13 17:56:29 crc kubenswrapper[4720]: I1013 17:56:29.846840 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64141929-3427-4673-9aea-5ce314ceb23b-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "64141929-3427-4673-9aea-5ce314ceb23b" (UID: "64141929-3427-4673-9aea-5ce314ceb23b"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:56:29 crc kubenswrapper[4720]: I1013 17:56:29.847707 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64141929-3427-4673-9aea-5ce314ceb23b-kube-api-access-w6r9n" (OuterVolumeSpecName: "kube-api-access-w6r9n") pod "64141929-3427-4673-9aea-5ce314ceb23b" (UID: "64141929-3427-4673-9aea-5ce314ceb23b"). InnerVolumeSpecName "kube-api-access-w6r9n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:56:29 crc kubenswrapper[4720]: I1013 17:56:29.867054 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64141929-3427-4673-9aea-5ce314ceb23b-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "64141929-3427-4673-9aea-5ce314ceb23b" (UID: "64141929-3427-4673-9aea-5ce314ceb23b"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 17:56:29 crc kubenswrapper[4720]: I1013 17:56:29.874454 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64141929-3427-4673-9aea-5ce314ceb23b-inventory" (OuterVolumeSpecName: "inventory") pod "64141929-3427-4673-9aea-5ce314ceb23b" (UID: "64141929-3427-4673-9aea-5ce314ceb23b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:56:29 crc kubenswrapper[4720]: I1013 17:56:29.885866 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64141929-3427-4673-9aea-5ce314ceb23b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "64141929-3427-4673-9aea-5ce314ceb23b" (UID: "64141929-3427-4673-9aea-5ce314ceb23b"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:56:29 crc kubenswrapper[4720]: I1013 17:56:29.943573 4720 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/64141929-3427-4673-9aea-5ce314ceb23b-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Oct 13 17:56:29 crc kubenswrapper[4720]: I1013 17:56:29.943624 4720 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/64141929-3427-4673-9aea-5ce314ceb23b-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 13 17:56:29 crc kubenswrapper[4720]: I1013 17:56:29.943639 4720 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/64141929-3427-4673-9aea-5ce314ceb23b-inventory\") on node \"crc\" DevicePath \"\"" Oct 13 17:56:29 crc kubenswrapper[4720]: I1013 17:56:29.943651 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w6r9n\" (UniqueName: \"kubernetes.io/projected/64141929-3427-4673-9aea-5ce314ceb23b-kube-api-access-w6r9n\") on node \"crc\" DevicePath \"\"" Oct 13 17:56:29 crc kubenswrapper[4720]: I1013 17:56:29.943663 4720 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64141929-3427-4673-9aea-5ce314ceb23b-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 17:56:30 crc kubenswrapper[4720]: I1013 17:56:30.286346 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fhdbt" event={"ID":"64141929-3427-4673-9aea-5ce314ceb23b","Type":"ContainerDied","Data":"c7506913db6e9e6fe11da503249bb49631eccc9886a5d03fe07938307dba03dd"} Oct 13 17:56:30 crc kubenswrapper[4720]: I1013 17:56:30.286670 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7506913db6e9e6fe11da503249bb49631eccc9886a5d03fe07938307dba03dd" Oct 13 17:56:30 crc kubenswrapper[4720]: I1013 17:56:30.286404 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fhdbt" Oct 13 17:56:30 crc kubenswrapper[4720]: I1013 17:56:30.464934 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rcnh"] Oct 13 17:56:30 crc kubenswrapper[4720]: E1013 17:56:30.465589 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64141929-3427-4673-9aea-5ce314ceb23b" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Oct 13 17:56:30 crc kubenswrapper[4720]: I1013 17:56:30.465616 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="64141929-3427-4673-9aea-5ce314ceb23b" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Oct 13 17:56:30 crc kubenswrapper[4720]: I1013 17:56:30.465904 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="64141929-3427-4673-9aea-5ce314ceb23b" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Oct 13 17:56:30 crc kubenswrapper[4720]: I1013 17:56:30.466740 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rcnh" Oct 13 17:56:30 crc kubenswrapper[4720]: I1013 17:56:30.469238 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Oct 13 17:56:30 crc kubenswrapper[4720]: I1013 17:56:30.469363 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 13 17:56:30 crc kubenswrapper[4720]: I1013 17:56:30.469896 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l2fds" Oct 13 17:56:30 crc kubenswrapper[4720]: I1013 17:56:30.469929 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 13 17:56:30 crc kubenswrapper[4720]: I1013 17:56:30.470107 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 13 17:56:30 crc kubenswrapper[4720]: I1013 17:56:30.472545 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Oct 13 17:56:30 crc kubenswrapper[4720]: I1013 17:56:30.485793 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rcnh"] Oct 13 17:56:30 crc kubenswrapper[4720]: I1013 17:56:30.656989 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/317f512e-221d-4587-9817-526adffbe348-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rcnh\" (UID: \"317f512e-221d-4587-9817-526adffbe348\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rcnh" Oct 13 17:56:30 crc kubenswrapper[4720]: I1013 17:56:30.657089 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/317f512e-221d-4587-9817-526adffbe348-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rcnh\" (UID: \"317f512e-221d-4587-9817-526adffbe348\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rcnh" Oct 13 17:56:30 crc kubenswrapper[4720]: I1013 17:56:30.657298 4720 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/317f512e-221d-4587-9817-526adffbe348-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rcnh\" (UID: \"317f512e-221d-4587-9817-526adffbe348\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rcnh" Oct 13 17:56:30 crc kubenswrapper[4720]: I1013 17:56:30.657374 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szcf7\" (UniqueName: \"kubernetes.io/projected/317f512e-221d-4587-9817-526adffbe348-kube-api-access-szcf7\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rcnh\" (UID: \"317f512e-221d-4587-9817-526adffbe348\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rcnh" Oct 13 17:56:30 crc kubenswrapper[4720]: I1013 17:56:30.657425 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/317f512e-221d-4587-9817-526adffbe348-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rcnh\" (UID: \"317f512e-221d-4587-9817-526adffbe348\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rcnh" Oct 13 17:56:30 crc kubenswrapper[4720]: I1013 17:56:30.657487 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/317f512e-221d-4587-9817-526adffbe348-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rcnh\" (UID: \"317f512e-221d-4587-9817-526adffbe348\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rcnh" Oct 13 17:56:30 crc kubenswrapper[4720]: I1013 17:56:30.759484 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/317f512e-221d-4587-9817-526adffbe348-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rcnh\" (UID: \"317f512e-221d-4587-9817-526adffbe348\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rcnh" Oct 13 17:56:30 crc kubenswrapper[4720]: I1013 17:56:30.759584 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/317f512e-221d-4587-9817-526adffbe348-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rcnh\" (UID: \"317f512e-221d-4587-9817-526adffbe348\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rcnh" Oct 13 17:56:30 crc kubenswrapper[4720]: I1013 17:56:30.759766 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/317f512e-221d-4587-9817-526adffbe348-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rcnh\" (UID: \"317f512e-221d-4587-9817-526adffbe348\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rcnh" Oct 13 17:56:30 crc kubenswrapper[4720]: I1013 17:56:30.759836 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szcf7\" (UniqueName: \"kubernetes.io/projected/317f512e-221d-4587-9817-526adffbe348-kube-api-access-szcf7\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rcnh\" (UID: \"317f512e-221d-4587-9817-526adffbe348\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rcnh" Oct 13 17:56:30 crc kubenswrapper[4720]: I1013 17:56:30.759891 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/317f512e-221d-4587-9817-526adffbe348-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rcnh\" (UID: \"317f512e-221d-4587-9817-526adffbe348\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rcnh" Oct 13 17:56:30 crc kubenswrapper[4720]: I1013 17:56:30.759955 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/317f512e-221d-4587-9817-526adffbe348-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rcnh\" (UID: \"317f512e-221d-4587-9817-526adffbe348\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rcnh" Oct 13 17:56:30 crc kubenswrapper[4720]: I1013 17:56:30.767647 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/317f512e-221d-4587-9817-526adffbe348-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rcnh\" (UID: \"317f512e-221d-4587-9817-526adffbe348\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rcnh" Oct 13 17:56:30 crc kubenswrapper[4720]: I1013 17:56:30.768147 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/317f512e-221d-4587-9817-526adffbe348-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rcnh\" (UID: \"317f512e-221d-4587-9817-526adffbe348\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rcnh" Oct 13 17:56:30 crc kubenswrapper[4720]: I1013 17:56:30.770796 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/317f512e-221d-4587-9817-526adffbe348-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rcnh\" (UID: \"317f512e-221d-4587-9817-526adffbe348\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rcnh" Oct 13 17:56:30 crc kubenswrapper[4720]: I1013 17:56:30.771025 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/317f512e-221d-4587-9817-526adffbe348-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rcnh\" (UID: \"317f512e-221d-4587-9817-526adffbe348\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rcnh" Oct 13 17:56:30 crc kubenswrapper[4720]: I1013 17:56:30.771740 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/317f512e-221d-4587-9817-526adffbe348-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rcnh\" (UID: \"317f512e-221d-4587-9817-526adffbe348\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rcnh" Oct 13 17:56:30 crc kubenswrapper[4720]: I1013 17:56:30.793122 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-szcf7\" (UniqueName: \"kubernetes.io/projected/317f512e-221d-4587-9817-526adffbe348-kube-api-access-szcf7\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rcnh\" (UID: \"317f512e-221d-4587-9817-526adffbe348\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rcnh" Oct 13 17:56:31 crc kubenswrapper[4720]: I1013 17:56:31.092586 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rcnh" Oct 13 17:56:31 crc kubenswrapper[4720]: I1013 17:56:31.653032 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rcnh"] Oct 13 17:56:32 crc kubenswrapper[4720]: I1013 17:56:32.305473 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rcnh" event={"ID":"317f512e-221d-4587-9817-526adffbe348","Type":"ContainerStarted","Data":"eb91870b5fa315f187b6d5d2f231ea06f2c8a3317db3cf29eb117ebd994dfa39"} Oct 13 17:56:33 crc kubenswrapper[4720]: I1013 17:56:33.315167 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rcnh" event={"ID":"317f512e-221d-4587-9817-526adffbe348","Type":"ContainerStarted","Data":"22ef62ce174f30011b34268fcac32ce74f66dbcc18c1cb2971dcc9e07855d588"} Oct 13 17:56:33 crc kubenswrapper[4720]: I1013 17:56:33.334212 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rcnh" podStartSLOduration=2.913436825 podStartE2EDuration="3.334181305s" podCreationTimestamp="2025-10-13 17:56:30 +0000 UTC" firstStartedPulling="2025-10-13 17:56:31.650332961 +0000 UTC m=+1937.107583123" lastFinishedPulling="2025-10-13 17:56:32.071077461 +0000 UTC m=+1937.528327603" observedRunningTime="2025-10-13 17:56:33.331045354 +0000 UTC m=+1938.788295496" watchObservedRunningTime="2025-10-13 17:56:33.334181305 +0000 UTC m=+1938.791431437" Oct 13 17:56:45 crc kubenswrapper[4720]: I1013 17:56:45.212733 4720 patch_prober.go:28] interesting pod/machine-config-daemon-htwnl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 17:56:45 crc kubenswrapper[4720]: I1013 17:56:45.213357 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 17:57:15 crc kubenswrapper[4720]: I1013 17:57:15.212754 4720 patch_prober.go:28] interesting pod/machine-config-daemon-htwnl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 17:57:15 crc kubenswrapper[4720]: I1013 17:57:15.213431 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" Oct 13 17:57:15 crc kubenswrapper[4720]: I1013 17:57:15.213495 4720 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" Oct 13 17:57:15 crc kubenswrapper[4720]: I1013 17:57:15.214493 4720 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a04e41707b3ca4c901c7d1fed0a4b9bbfe355d3cbe36208efddf6c91d80563e3"} pod="openshift-machine-config-operator/machine-config-daemon-htwnl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 13 17:57:15 crc kubenswrapper[4720]: I1013 17:57:15.214588 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" containerName="machine-config-daemon" containerID="cri-o://a04e41707b3ca4c901c7d1fed0a4b9bbfe355d3cbe36208efddf6c91d80563e3" gracePeriod=600 Oct 13 17:57:15 crc kubenswrapper[4720]: I1013 17:57:15.762917 4720 generic.go:334] "Generic (PLEG): container finished" podID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" containerID="a04e41707b3ca4c901c7d1fed0a4b9bbfe355d3cbe36208efddf6c91d80563e3" exitCode=0 Oct 13 17:57:15 crc kubenswrapper[4720]: I1013 17:57:15.763008 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" event={"ID":"ce442c80-fcde-4b79-b6f9-f8f25771dfd4","Type":"ContainerDied","Data":"a04e41707b3ca4c901c7d1fed0a4b9bbfe355d3cbe36208efddf6c91d80563e3"} Oct 13 17:57:15 crc kubenswrapper[4720]: I1013 17:57:15.763315 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" event={"ID":"ce442c80-fcde-4b79-b6f9-f8f25771dfd4","Type":"ContainerStarted","Data":"f16b061e305a205bc6cd67c30a1e6ff436af254fc7370be55f944123cc9cd230"} Oct 13 17:57:15 crc kubenswrapper[4720]: I1013 17:57:15.763354 4720 scope.go:117] "RemoveContainer" containerID="eabc7e2122617e64bac6ae4330db40b5b4e9867a18ff888fd3f7afb5cfc64f2f" Oct 13 17:57:25 crc kubenswrapper[4720]: I1013 17:57:25.883089 4720 generic.go:334] "Generic (PLEG): container finished" podID="317f512e-221d-4587-9817-526adffbe348" containerID="22ef62ce174f30011b34268fcac32ce74f66dbcc18c1cb2971dcc9e07855d588" exitCode=0 Oct 13 17:57:25 crc kubenswrapper[4720]: I1013 17:57:25.883346 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rcnh" event={"ID":"317f512e-221d-4587-9817-526adffbe348","Type":"ContainerDied","Data":"22ef62ce174f30011b34268fcac32ce74f66dbcc18c1cb2971dcc9e07855d588"} Oct 13 17:57:27 crc kubenswrapper[4720]: I1013 17:57:27.335621 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rcnh" Oct 13 17:57:27 crc kubenswrapper[4720]: I1013 17:57:27.441945 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/317f512e-221d-4587-9817-526adffbe348-ssh-key\") pod \"317f512e-221d-4587-9817-526adffbe348\" (UID: \"317f512e-221d-4587-9817-526adffbe348\") " Oct 13 17:57:27 crc kubenswrapper[4720]: I1013 17:57:27.442076 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/317f512e-221d-4587-9817-526adffbe348-neutron-metadata-combined-ca-bundle\") pod \"317f512e-221d-4587-9817-526adffbe348\" (UID: \"317f512e-221d-4587-9817-526adffbe348\") " Oct 13 17:57:27 crc kubenswrapper[4720]: I1013 17:57:27.442228 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/317f512e-221d-4587-9817-526adffbe348-neutron-ovn-metadata-agent-neutron-config-0\") pod \"317f512e-221d-4587-9817-526adffbe348\" (UID: \"317f512e-221d-4587-9817-526adffbe348\") " Oct 13 17:57:27 crc kubenswrapper[4720]: I1013 17:57:27.442375 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/317f512e-221d-4587-9817-526adffbe348-nova-metadata-neutron-config-0\") pod \"317f512e-221d-4587-9817-526adffbe348\" (UID: \"317f512e-221d-4587-9817-526adffbe348\") " Oct 13 17:57:27 crc kubenswrapper[4720]: I1013 17:57:27.442476 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-szcf7\" (UniqueName: \"kubernetes.io/projected/317f512e-221d-4587-9817-526adffbe348-kube-api-access-szcf7\") pod \"317f512e-221d-4587-9817-526adffbe348\" (UID: \"317f512e-221d-4587-9817-526adffbe348\") " Oct 13 17:57:27 crc kubenswrapper[4720]: I1013 17:57:27.442509 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/317f512e-221d-4587-9817-526adffbe348-inventory\") pod \"317f512e-221d-4587-9817-526adffbe348\" (UID: \"317f512e-221d-4587-9817-526adffbe348\") " Oct 13 17:57:27 crc kubenswrapper[4720]: I1013 17:57:27.451439 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/317f512e-221d-4587-9817-526adffbe348-kube-api-access-szcf7" (OuterVolumeSpecName: "kube-api-access-szcf7") pod "317f512e-221d-4587-9817-526adffbe348" (UID: "317f512e-221d-4587-9817-526adffbe348"). InnerVolumeSpecName "kube-api-access-szcf7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:57:27 crc kubenswrapper[4720]: I1013 17:57:27.451550 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/317f512e-221d-4587-9817-526adffbe348-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "317f512e-221d-4587-9817-526adffbe348" (UID: "317f512e-221d-4587-9817-526adffbe348"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:57:27 crc kubenswrapper[4720]: I1013 17:57:27.477893 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/317f512e-221d-4587-9817-526adffbe348-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "317f512e-221d-4587-9817-526adffbe348" (UID: "317f512e-221d-4587-9817-526adffbe348"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:57:27 crc kubenswrapper[4720]: I1013 17:57:27.478274 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/317f512e-221d-4587-9817-526adffbe348-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "317f512e-221d-4587-9817-526adffbe348" (UID: "317f512e-221d-4587-9817-526adffbe348"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:57:27 crc kubenswrapper[4720]: I1013 17:57:27.486538 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/317f512e-221d-4587-9817-526adffbe348-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "317f512e-221d-4587-9817-526adffbe348" (UID: "317f512e-221d-4587-9817-526adffbe348"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:57:27 crc kubenswrapper[4720]: I1013 17:57:27.488423 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/317f512e-221d-4587-9817-526adffbe348-inventory" (OuterVolumeSpecName: "inventory") pod "317f512e-221d-4587-9817-526adffbe348" (UID: "317f512e-221d-4587-9817-526adffbe348"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 17:57:27 crc kubenswrapper[4720]: I1013 17:57:27.545071 4720 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/317f512e-221d-4587-9817-526adffbe348-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Oct 13 17:57:27 crc kubenswrapper[4720]: I1013 17:57:27.545134 4720 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/317f512e-221d-4587-9817-526adffbe348-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Oct 13 17:57:27 crc kubenswrapper[4720]: I1013 17:57:27.545157 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-szcf7\" (UniqueName: \"kubernetes.io/projected/317f512e-221d-4587-9817-526adffbe348-kube-api-access-szcf7\") on node \"crc\" DevicePath \"\"" Oct 13 17:57:27 crc kubenswrapper[4720]: I1013 17:57:27.545181 4720 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/317f512e-221d-4587-9817-526adffbe348-inventory\") on node \"crc\" DevicePath \"\"" Oct 13 17:57:27 crc kubenswrapper[4720]: I1013 17:57:27.545226 4720 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/317f512e-221d-4587-9817-526adffbe348-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 13 17:57:27 crc kubenswrapper[4720]: I1013 17:57:27.545245 4720 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/317f512e-221d-4587-9817-526adffbe348-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 17:57:27 crc kubenswrapper[4720]: I1013 17:57:27.911629 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rcnh" event={"ID":"317f512e-221d-4587-9817-526adffbe348","Type":"ContainerDied","Data":"eb91870b5fa315f187b6d5d2f231ea06f2c8a3317db3cf29eb117ebd994dfa39"} Oct 13 17:57:27 crc kubenswrapper[4720]: I1013 17:57:27.911680 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb91870b5fa315f187b6d5d2f231ea06f2c8a3317db3cf29eb117ebd994dfa39" Oct 13 17:57:27 crc kubenswrapper[4720]: I1013 17:57:27.911688 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rcnh" Oct 13 17:57:28 crc kubenswrapper[4720]: I1013 17:57:28.010653 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ht9ww"] Oct 13 17:57:28 crc kubenswrapper[4720]: E1013 17:57:28.011067 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="317f512e-221d-4587-9817-526adffbe348" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Oct 13 17:57:28 crc kubenswrapper[4720]: I1013 17:57:28.011094 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="317f512e-221d-4587-9817-526adffbe348" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Oct 13 17:57:28 crc kubenswrapper[4720]: I1013 17:57:28.011347 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="317f512e-221d-4587-9817-526adffbe348" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Oct 13 17:57:28 crc kubenswrapper[4720]: I1013 17:57:28.012116 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ht9ww" Oct 13 17:57:28 crc kubenswrapper[4720]: I1013 17:57:28.014498 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Oct 13 17:57:28 crc kubenswrapper[4720]: I1013 17:57:28.014607 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 13 17:57:28 crc kubenswrapper[4720]: I1013 17:57:28.016988 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 13 17:57:28 crc kubenswrapper[4720]: I1013 17:57:28.017051 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 13 17:57:28 crc kubenswrapper[4720]: I1013 17:57:28.018961 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l2fds" Oct 13 17:57:28 crc kubenswrapper[4720]: I1013 17:57:28.032042 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ht9ww"] Oct 13 17:57:28 crc kubenswrapper[4720]: I1013 17:57:28.157079 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0b253b74-8253-44c4-962c-b01331772a19-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ht9ww\" (UID: \"0b253b74-8253-44c4-962c-b01331772a19\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ht9ww" Oct 13 17:57:28 crc kubenswrapper[4720]: I1013 17:57:28.157440 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqskz\" (UniqueName: \"kubernetes.io/projected/0b253b74-8253-44c4-962c-b01331772a19-kube-api-access-kqskz\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ht9ww\" (UID: \"0b253b74-8253-44c4-962c-b01331772a19\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ht9ww" Oct 13 17:57:28 crc kubenswrapper[4720]: I1013 17:57:28.158361 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0b253b74-8253-44c4-962c-b01331772a19-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ht9ww\" (UID: \"0b253b74-8253-44c4-962c-b01331772a19\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ht9ww" Oct 13 17:57:28 crc kubenswrapper[4720]: I1013 17:57:28.158534 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b253b74-8253-44c4-962c-b01331772a19-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ht9ww\" (UID: \"0b253b74-8253-44c4-962c-b01331772a19\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ht9ww" Oct 13 17:57:28 crc kubenswrapper[4720]: I1013 17:57:28.158686 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/0b253b74-8253-44c4-962c-b01331772a19-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ht9ww\" (UID: \"0b253b74-8253-44c4-962c-b01331772a19\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ht9ww" Oct 13 17:57:28 crc kubenswrapper[4720]: I1013 17:57:28.260251 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ssh-key\" (UniqueName: \"kubernetes.io/secret/0b253b74-8253-44c4-962c-b01331772a19-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ht9ww\" (UID: \"0b253b74-8253-44c4-962c-b01331772a19\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ht9ww" Oct 13 17:57:28 crc kubenswrapper[4720]: I1013 17:57:28.260307 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqskz\" (UniqueName: \"kubernetes.io/projected/0b253b74-8253-44c4-962c-b01331772a19-kube-api-access-kqskz\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ht9ww\" (UID: \"0b253b74-8253-44c4-962c-b01331772a19\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ht9ww" Oct 13 17:57:28 crc kubenswrapper[4720]: I1013 17:57:28.260341 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0b253b74-8253-44c4-962c-b01331772a19-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ht9ww\" (UID: \"0b253b74-8253-44c4-962c-b01331772a19\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ht9ww" Oct 13 17:57:28 crc kubenswrapper[4720]: I1013 17:57:28.260719 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b253b74-8253-44c4-962c-b01331772a19-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ht9ww\" (UID: \"0b253b74-8253-44c4-962c-b01331772a19\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ht9ww" Oct 13 17:57:28 crc kubenswrapper[4720]: I1013 17:57:28.260913 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/0b253b74-8253-44c4-962c-b01331772a19-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ht9ww\" (UID: \"0b253b74-8253-44c4-962c-b01331772a19\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ht9ww" Oct 13 17:57:28 crc kubenswrapper[4720]: I1013 17:57:28.266754 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0b253b74-8253-44c4-962c-b01331772a19-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ht9ww\" (UID: \"0b253b74-8253-44c4-962c-b01331772a19\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ht9ww" Oct 13 17:57:28 crc kubenswrapper[4720]: I1013 17:57:28.266844 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0b253b74-8253-44c4-962c-b01331772a19-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ht9ww\" (UID: \"0b253b74-8253-44c4-962c-b01331772a19\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ht9ww" Oct 13 17:57:28 crc kubenswrapper[4720]: I1013 17:57:28.267116 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/0b253b74-8253-44c4-962c-b01331772a19-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ht9ww\" (UID: \"0b253b74-8253-44c4-962c-b01331772a19\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ht9ww" Oct 13 17:57:28 crc kubenswrapper[4720]: I1013 17:57:28.267560 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b253b74-8253-44c4-962c-b01331772a19-libvirt-combined-ca-bundle\") pod 
\"libvirt-edpm-deployment-openstack-edpm-ipam-ht9ww\" (UID: \"0b253b74-8253-44c4-962c-b01331772a19\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ht9ww" Oct 13 17:57:28 crc kubenswrapper[4720]: I1013 17:57:28.276872 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqskz\" (UniqueName: \"kubernetes.io/projected/0b253b74-8253-44c4-962c-b01331772a19-kube-api-access-kqskz\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-ht9ww\" (UID: \"0b253b74-8253-44c4-962c-b01331772a19\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ht9ww" Oct 13 17:57:28 crc kubenswrapper[4720]: I1013 17:57:28.330418 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ht9ww" Oct 13 17:57:28 crc kubenswrapper[4720]: I1013 17:57:28.871479 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ht9ww"] Oct 13 17:57:28 crc kubenswrapper[4720]: I1013 17:57:28.921944 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ht9ww" event={"ID":"0b253b74-8253-44c4-962c-b01331772a19","Type":"ContainerStarted","Data":"cd4b1dfd3ccf2cea1d0063eb97e7e2da1cd1bdb94d4ada8c61fd60088e96fc4c"} Oct 13 17:57:29 crc kubenswrapper[4720]: I1013 17:57:29.938493 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ht9ww" event={"ID":"0b253b74-8253-44c4-962c-b01331772a19","Type":"ContainerStarted","Data":"c497def0fc86eecbbcf43bcef3b7a3d019afd3fc8bc3bb51eb6d0f02a592baee"} Oct 13 17:57:29 crc kubenswrapper[4720]: I1013 17:57:29.960127 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ht9ww" podStartSLOduration=2.444212142 podStartE2EDuration="2.960110971s" podCreationTimestamp="2025-10-13 17:57:27 +0000 UTC" firstStartedPulling="2025-10-13 17:57:28.875970374 +0000 UTC m=+1994.333220526" lastFinishedPulling="2025-10-13 17:57:29.391869223 +0000 UTC m=+1994.849119355" observedRunningTime="2025-10-13 17:57:29.954516507 +0000 UTC m=+1995.411766639" watchObservedRunningTime="2025-10-13 17:57:29.960110971 +0000 UTC m=+1995.417361093" Oct 13 17:57:57 crc kubenswrapper[4720]: I1013 17:57:57.651883 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-kmng6"] Oct 13 17:57:57 crc kubenswrapper[4720]: I1013 17:57:57.655897 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kmng6" Oct 13 17:57:57 crc kubenswrapper[4720]: I1013 17:57:57.665435 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kmng6"] Oct 13 17:57:57 crc kubenswrapper[4720]: I1013 17:57:57.692514 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e34dc1cf-d637-4e0b-bc25-f181210c5391-catalog-content\") pod \"redhat-marketplace-kmng6\" (UID: \"e34dc1cf-d637-4e0b-bc25-f181210c5391\") " pod="openshift-marketplace/redhat-marketplace-kmng6" Oct 13 17:57:57 crc kubenswrapper[4720]: I1013 17:57:57.693520 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e34dc1cf-d637-4e0b-bc25-f181210c5391-utilities\") pod \"redhat-marketplace-kmng6\" (UID: \"e34dc1cf-d637-4e0b-bc25-f181210c5391\") " pod="openshift-marketplace/redhat-marketplace-kmng6" Oct 13 17:57:57 crc kubenswrapper[4720]: I1013 17:57:57.694070 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-465cs\" (UniqueName: \"kubernetes.io/projected/e34dc1cf-d637-4e0b-bc25-f181210c5391-kube-api-access-465cs\") pod \"redhat-marketplace-kmng6\" (UID: \"e34dc1cf-d637-4e0b-bc25-f181210c5391\") " pod="openshift-marketplace/redhat-marketplace-kmng6" Oct 13 17:57:57 crc kubenswrapper[4720]: I1013 17:57:57.795524 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-465cs\" (UniqueName: \"kubernetes.io/projected/e34dc1cf-d637-4e0b-bc25-f181210c5391-kube-api-access-465cs\") pod \"redhat-marketplace-kmng6\" (UID: \"e34dc1cf-d637-4e0b-bc25-f181210c5391\") " pod="openshift-marketplace/redhat-marketplace-kmng6" Oct 13 17:57:57 crc kubenswrapper[4720]: I1013 17:57:57.795598 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e34dc1cf-d637-4e0b-bc25-f181210c5391-catalog-content\") pod \"redhat-marketplace-kmng6\" (UID: \"e34dc1cf-d637-4e0b-bc25-f181210c5391\") " pod="openshift-marketplace/redhat-marketplace-kmng6" Oct 13 17:57:57 crc kubenswrapper[4720]: I1013 17:57:57.795655 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e34dc1cf-d637-4e0b-bc25-f181210c5391-utilities\") pod \"redhat-marketplace-kmng6\" (UID: \"e34dc1cf-d637-4e0b-bc25-f181210c5391\") " pod="openshift-marketplace/redhat-marketplace-kmng6" Oct 13 17:57:57 crc kubenswrapper[4720]: I1013 17:57:57.796105 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e34dc1cf-d637-4e0b-bc25-f181210c5391-utilities\") pod \"redhat-marketplace-kmng6\" (UID: \"e34dc1cf-d637-4e0b-bc25-f181210c5391\") " pod="openshift-marketplace/redhat-marketplace-kmng6" Oct 13 17:57:57 crc kubenswrapper[4720]: I1013 17:57:57.796279 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e34dc1cf-d637-4e0b-bc25-f181210c5391-catalog-content\") pod \"redhat-marketplace-kmng6\" (UID: \"e34dc1cf-d637-4e0b-bc25-f181210c5391\") " pod="openshift-marketplace/redhat-marketplace-kmng6" Oct 13 17:57:57 crc kubenswrapper[4720]: I1013 17:57:57.813865 4720 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-465cs\" (UniqueName: \"kubernetes.io/projected/e34dc1cf-d637-4e0b-bc25-f181210c5391-kube-api-access-465cs\") pod \"redhat-marketplace-kmng6\" (UID: \"e34dc1cf-d637-4e0b-bc25-f181210c5391\") " pod="openshift-marketplace/redhat-marketplace-kmng6" Oct 13 17:57:57 crc kubenswrapper[4720]: I1013 17:57:57.978808 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kmng6" Oct 13 17:57:58 crc kubenswrapper[4720]: I1013 17:57:58.428774 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kmng6"] Oct 13 17:57:59 crc kubenswrapper[4720]: I1013 17:57:59.238148 4720 generic.go:334] "Generic (PLEG): container finished" podID="e34dc1cf-d637-4e0b-bc25-f181210c5391" containerID="1a8dc67b107eba2c9f890599a1d40c3e367eeb957e8588f4a0e67405c0316ec7" exitCode=0 Oct 13 17:57:59 crc kubenswrapper[4720]: I1013 17:57:59.238271 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kmng6" event={"ID":"e34dc1cf-d637-4e0b-bc25-f181210c5391","Type":"ContainerDied","Data":"1a8dc67b107eba2c9f890599a1d40c3e367eeb957e8588f4a0e67405c0316ec7"} Oct 13 17:57:59 crc kubenswrapper[4720]: I1013 17:57:59.238606 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kmng6" event={"ID":"e34dc1cf-d637-4e0b-bc25-f181210c5391","Type":"ContainerStarted","Data":"e92660b3d92bd7ef059c86f34c5c4e9c17642d25cd03a31f45ccb7b38ac45a30"} Oct 13 17:58:00 crc kubenswrapper[4720]: I1013 17:58:00.249849 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kmng6" event={"ID":"e34dc1cf-d637-4e0b-bc25-f181210c5391","Type":"ContainerStarted","Data":"57bd60c7975f20711e405e566841770c27998d778a2b826b21655130d11b6e9b"} Oct 13 17:58:01 crc kubenswrapper[4720]: I1013 17:58:01.264234 4720 generic.go:334] "Generic (PLEG): container finished" podID="e34dc1cf-d637-4e0b-bc25-f181210c5391" containerID="57bd60c7975f20711e405e566841770c27998d778a2b826b21655130d11b6e9b" exitCode=0 Oct 13 17:58:01 crc kubenswrapper[4720]: I1013 17:58:01.264395 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kmng6" event={"ID":"e34dc1cf-d637-4e0b-bc25-f181210c5391","Type":"ContainerDied","Data":"57bd60c7975f20711e405e566841770c27998d778a2b826b21655130d11b6e9b"} Oct 13 17:58:02 crc kubenswrapper[4720]: I1013 17:58:02.274507 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kmng6" event={"ID":"e34dc1cf-d637-4e0b-bc25-f181210c5391","Type":"ContainerStarted","Data":"dd78267492879c38f08af55a6798db3d07d137b73a36c6048a5fb07f07fe748d"} Oct 13 17:58:02 crc kubenswrapper[4720]: I1013 17:58:02.297764 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-kmng6" podStartSLOduration=2.641977374 podStartE2EDuration="5.297746875s" podCreationTimestamp="2025-10-13 17:57:57 +0000 UTC" firstStartedPulling="2025-10-13 17:57:59.239810416 +0000 UTC m=+2024.697060598" lastFinishedPulling="2025-10-13 17:58:01.895579957 +0000 UTC m=+2027.352830099" observedRunningTime="2025-10-13 17:58:02.296701028 +0000 UTC m=+2027.753951160" watchObservedRunningTime="2025-10-13 17:58:02.297746875 +0000 UTC m=+2027.754997027" Oct 13 17:58:07 crc kubenswrapper[4720]: I1013 17:58:07.979369 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-kmng6" Oct 13 17:58:07 crc kubenswrapper[4720]: I1013 17:58:07.980038 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-kmng6" Oct 13 17:58:08 crc kubenswrapper[4720]: I1013 17:58:08.068702 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-kmng6" Oct 13 17:58:08 crc kubenswrapper[4720]: I1013 17:58:08.389281 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-kmng6" Oct 13 17:58:08 crc kubenswrapper[4720]: I1013 17:58:08.439575 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kmng6"] Oct 13 17:58:10 crc kubenswrapper[4720]: I1013 17:58:10.353548 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-kmng6" podUID="e34dc1cf-d637-4e0b-bc25-f181210c5391" containerName="registry-server" containerID="cri-o://dd78267492879c38f08af55a6798db3d07d137b73a36c6048a5fb07f07fe748d" gracePeriod=2 Oct 13 17:58:10 crc kubenswrapper[4720]: I1013 17:58:10.947978 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kmng6" Oct 13 17:58:10 crc kubenswrapper[4720]: I1013 17:58:10.968569 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e34dc1cf-d637-4e0b-bc25-f181210c5391-utilities\") pod \"e34dc1cf-d637-4e0b-bc25-f181210c5391\" (UID: \"e34dc1cf-d637-4e0b-bc25-f181210c5391\") " Oct 13 17:58:10 crc kubenswrapper[4720]: I1013 17:58:10.968724 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e34dc1cf-d637-4e0b-bc25-f181210c5391-catalog-content\") pod \"e34dc1cf-d637-4e0b-bc25-f181210c5391\" (UID: \"e34dc1cf-d637-4e0b-bc25-f181210c5391\") " Oct 13 17:58:10 crc kubenswrapper[4720]: I1013 17:58:10.968821 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-465cs\" (UniqueName: \"kubernetes.io/projected/e34dc1cf-d637-4e0b-bc25-f181210c5391-kube-api-access-465cs\") pod \"e34dc1cf-d637-4e0b-bc25-f181210c5391\" (UID: \"e34dc1cf-d637-4e0b-bc25-f181210c5391\") " Oct 13 17:58:10 crc kubenswrapper[4720]: I1013 17:58:10.969556 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e34dc1cf-d637-4e0b-bc25-f181210c5391-utilities" (OuterVolumeSpecName: "utilities") pod "e34dc1cf-d637-4e0b-bc25-f181210c5391" (UID: "e34dc1cf-d637-4e0b-bc25-f181210c5391"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 17:58:10 crc kubenswrapper[4720]: I1013 17:58:10.975320 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e34dc1cf-d637-4e0b-bc25-f181210c5391-kube-api-access-465cs" (OuterVolumeSpecName: "kube-api-access-465cs") pod "e34dc1cf-d637-4e0b-bc25-f181210c5391" (UID: "e34dc1cf-d637-4e0b-bc25-f181210c5391"). InnerVolumeSpecName "kube-api-access-465cs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:58:11 crc kubenswrapper[4720]: I1013 17:58:10.999351 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e34dc1cf-d637-4e0b-bc25-f181210c5391-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e34dc1cf-d637-4e0b-bc25-f181210c5391" (UID: "e34dc1cf-d637-4e0b-bc25-f181210c5391"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 17:58:11 crc kubenswrapper[4720]: I1013 17:58:11.071368 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e34dc1cf-d637-4e0b-bc25-f181210c5391-utilities\") on node \"crc\" DevicePath \"\"" Oct 13 17:58:11 crc kubenswrapper[4720]: I1013 17:58:11.071398 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e34dc1cf-d637-4e0b-bc25-f181210c5391-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 13 17:58:11 crc kubenswrapper[4720]: I1013 17:58:11.071411 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-465cs\" (UniqueName: \"kubernetes.io/projected/e34dc1cf-d637-4e0b-bc25-f181210c5391-kube-api-access-465cs\") on node \"crc\" DevicePath \"\"" Oct 13 17:58:11 crc kubenswrapper[4720]: I1013 17:58:11.364691 4720 generic.go:334] "Generic (PLEG): container finished" podID="e34dc1cf-d637-4e0b-bc25-f181210c5391" containerID="dd78267492879c38f08af55a6798db3d07d137b73a36c6048a5fb07f07fe748d" exitCode=0 Oct 13 17:58:11 crc kubenswrapper[4720]: I1013 17:58:11.364747 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kmng6" event={"ID":"e34dc1cf-d637-4e0b-bc25-f181210c5391","Type":"ContainerDied","Data":"dd78267492879c38f08af55a6798db3d07d137b73a36c6048a5fb07f07fe748d"} Oct 13 17:58:11 crc kubenswrapper[4720]: I1013 17:58:11.364782 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kmng6" event={"ID":"e34dc1cf-d637-4e0b-bc25-f181210c5391","Type":"ContainerDied","Data":"e92660b3d92bd7ef059c86f34c5c4e9c17642d25cd03a31f45ccb7b38ac45a30"} Oct 13 17:58:11 crc kubenswrapper[4720]: I1013 17:58:11.364791 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kmng6" Oct 13 17:58:11 crc kubenswrapper[4720]: I1013 17:58:11.364803 4720 scope.go:117] "RemoveContainer" containerID="dd78267492879c38f08af55a6798db3d07d137b73a36c6048a5fb07f07fe748d" Oct 13 17:58:11 crc kubenswrapper[4720]: I1013 17:58:11.396728 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kmng6"] Oct 13 17:58:11 crc kubenswrapper[4720]: I1013 17:58:11.405453 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-kmng6"] Oct 13 17:58:11 crc kubenswrapper[4720]: I1013 17:58:11.406555 4720 scope.go:117] "RemoveContainer" containerID="57bd60c7975f20711e405e566841770c27998d778a2b826b21655130d11b6e9b" Oct 13 17:58:11 crc kubenswrapper[4720]: I1013 17:58:11.434170 4720 scope.go:117] "RemoveContainer" containerID="1a8dc67b107eba2c9f890599a1d40c3e367eeb957e8588f4a0e67405c0316ec7" Oct 13 17:58:11 crc kubenswrapper[4720]: I1013 17:58:11.486997 4720 scope.go:117] "RemoveContainer" containerID="dd78267492879c38f08af55a6798db3d07d137b73a36c6048a5fb07f07fe748d" Oct 13 17:58:11 crc kubenswrapper[4720]: E1013 17:58:11.487487 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd78267492879c38f08af55a6798db3d07d137b73a36c6048a5fb07f07fe748d\": container with ID starting with dd78267492879c38f08af55a6798db3d07d137b73a36c6048a5fb07f07fe748d not found: ID does not exist" containerID="dd78267492879c38f08af55a6798db3d07d137b73a36c6048a5fb07f07fe748d" Oct 13 17:58:11 crc kubenswrapper[4720]: I1013 17:58:11.487534 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd78267492879c38f08af55a6798db3d07d137b73a36c6048a5fb07f07fe748d"} err="failed to get container status \"dd78267492879c38f08af55a6798db3d07d137b73a36c6048a5fb07f07fe748d\": rpc error: code = NotFound desc = could not find container \"dd78267492879c38f08af55a6798db3d07d137b73a36c6048a5fb07f07fe748d\": container with ID starting with dd78267492879c38f08af55a6798db3d07d137b73a36c6048a5fb07f07fe748d not found: ID does not exist" Oct 13 17:58:11 crc kubenswrapper[4720]: I1013 17:58:11.487559 4720 scope.go:117] "RemoveContainer" containerID="57bd60c7975f20711e405e566841770c27998d778a2b826b21655130d11b6e9b" Oct 13 17:58:11 crc kubenswrapper[4720]: E1013 17:58:11.488112 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57bd60c7975f20711e405e566841770c27998d778a2b826b21655130d11b6e9b\": container with ID starting with 57bd60c7975f20711e405e566841770c27998d778a2b826b21655130d11b6e9b not found: ID does not exist" containerID="57bd60c7975f20711e405e566841770c27998d778a2b826b21655130d11b6e9b" Oct 13 17:58:11 crc kubenswrapper[4720]: I1013 17:58:11.488151 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57bd60c7975f20711e405e566841770c27998d778a2b826b21655130d11b6e9b"} err="failed to get container status \"57bd60c7975f20711e405e566841770c27998d778a2b826b21655130d11b6e9b\": rpc error: code = NotFound desc = could not find container \"57bd60c7975f20711e405e566841770c27998d778a2b826b21655130d11b6e9b\": container with ID starting with 57bd60c7975f20711e405e566841770c27998d778a2b826b21655130d11b6e9b not found: ID does not exist" Oct 13 17:58:11 crc kubenswrapper[4720]: I1013 17:58:11.488175 4720 scope.go:117] "RemoveContainer" 
containerID="1a8dc67b107eba2c9f890599a1d40c3e367eeb957e8588f4a0e67405c0316ec7" Oct 13 17:58:11 crc kubenswrapper[4720]: E1013 17:58:11.488494 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a8dc67b107eba2c9f890599a1d40c3e367eeb957e8588f4a0e67405c0316ec7\": container with ID starting with 1a8dc67b107eba2c9f890599a1d40c3e367eeb957e8588f4a0e67405c0316ec7 not found: ID does not exist" containerID="1a8dc67b107eba2c9f890599a1d40c3e367eeb957e8588f4a0e67405c0316ec7" Oct 13 17:58:11 crc kubenswrapper[4720]: I1013 17:58:11.488773 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a8dc67b107eba2c9f890599a1d40c3e367eeb957e8588f4a0e67405c0316ec7"} err="failed to get container status \"1a8dc67b107eba2c9f890599a1d40c3e367eeb957e8588f4a0e67405c0316ec7\": rpc error: code = NotFound desc = could not find container \"1a8dc67b107eba2c9f890599a1d40c3e367eeb957e8588f4a0e67405c0316ec7\": container with ID starting with 1a8dc67b107eba2c9f890599a1d40c3e367eeb957e8588f4a0e67405c0316ec7 not found: ID does not exist" Oct 13 17:58:13 crc kubenswrapper[4720]: I1013 17:58:13.192952 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e34dc1cf-d637-4e0b-bc25-f181210c5391" path="/var/lib/kubelet/pods/e34dc1cf-d637-4e0b-bc25-f181210c5391/volumes" Oct 13 17:59:15 crc kubenswrapper[4720]: I1013 17:59:15.213063 4720 patch_prober.go:28] interesting pod/machine-config-daemon-htwnl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 17:59:15 crc kubenswrapper[4720]: I1013 17:59:15.213719 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 17:59:24 crc kubenswrapper[4720]: I1013 17:59:24.156032 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-b5r4n"] Oct 13 17:59:24 crc kubenswrapper[4720]: E1013 17:59:24.158372 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e34dc1cf-d637-4e0b-bc25-f181210c5391" containerName="extract-content" Oct 13 17:59:24 crc kubenswrapper[4720]: I1013 17:59:24.158396 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="e34dc1cf-d637-4e0b-bc25-f181210c5391" containerName="extract-content" Oct 13 17:59:24 crc kubenswrapper[4720]: E1013 17:59:24.158427 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e34dc1cf-d637-4e0b-bc25-f181210c5391" containerName="extract-utilities" Oct 13 17:59:24 crc kubenswrapper[4720]: I1013 17:59:24.158440 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="e34dc1cf-d637-4e0b-bc25-f181210c5391" containerName="extract-utilities" Oct 13 17:59:24 crc kubenswrapper[4720]: E1013 17:59:24.158468 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e34dc1cf-d637-4e0b-bc25-f181210c5391" containerName="registry-server" Oct 13 17:59:24 crc kubenswrapper[4720]: I1013 17:59:24.158483 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="e34dc1cf-d637-4e0b-bc25-f181210c5391" containerName="registry-server" Oct 13 17:59:24 crc kubenswrapper[4720]: I1013 
17:59:24.158838 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="e34dc1cf-d637-4e0b-bc25-f181210c5391" containerName="registry-server" Oct 13 17:59:24 crc kubenswrapper[4720]: I1013 17:59:24.199102 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b5r4n"] Oct 13 17:59:24 crc kubenswrapper[4720]: I1013 17:59:24.200291 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b5r4n" Oct 13 17:59:24 crc kubenswrapper[4720]: I1013 17:59:24.255720 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcwv4\" (UniqueName: \"kubernetes.io/projected/8fc698fe-9a7e-400d-af0c-64f85dfb46fa-kube-api-access-tcwv4\") pod \"community-operators-b5r4n\" (UID: \"8fc698fe-9a7e-400d-af0c-64f85dfb46fa\") " pod="openshift-marketplace/community-operators-b5r4n" Oct 13 17:59:24 crc kubenswrapper[4720]: I1013 17:59:24.256062 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fc698fe-9a7e-400d-af0c-64f85dfb46fa-catalog-content\") pod \"community-operators-b5r4n\" (UID: \"8fc698fe-9a7e-400d-af0c-64f85dfb46fa\") " pod="openshift-marketplace/community-operators-b5r4n" Oct 13 17:59:24 crc kubenswrapper[4720]: I1013 17:59:24.256446 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fc698fe-9a7e-400d-af0c-64f85dfb46fa-utilities\") pod \"community-operators-b5r4n\" (UID: \"8fc698fe-9a7e-400d-af0c-64f85dfb46fa\") " pod="openshift-marketplace/community-operators-b5r4n" Oct 13 17:59:24 crc kubenswrapper[4720]: I1013 17:59:24.358109 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcwv4\" (UniqueName: \"kubernetes.io/projected/8fc698fe-9a7e-400d-af0c-64f85dfb46fa-kube-api-access-tcwv4\") pod \"community-operators-b5r4n\" (UID: \"8fc698fe-9a7e-400d-af0c-64f85dfb46fa\") " pod="openshift-marketplace/community-operators-b5r4n" Oct 13 17:59:24 crc kubenswrapper[4720]: I1013 17:59:24.358238 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fc698fe-9a7e-400d-af0c-64f85dfb46fa-catalog-content\") pod \"community-operators-b5r4n\" (UID: \"8fc698fe-9a7e-400d-af0c-64f85dfb46fa\") " pod="openshift-marketplace/community-operators-b5r4n" Oct 13 17:59:24 crc kubenswrapper[4720]: I1013 17:59:24.358390 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fc698fe-9a7e-400d-af0c-64f85dfb46fa-utilities\") pod \"community-operators-b5r4n\" (UID: \"8fc698fe-9a7e-400d-af0c-64f85dfb46fa\") " pod="openshift-marketplace/community-operators-b5r4n" Oct 13 17:59:24 crc kubenswrapper[4720]: I1013 17:59:24.358989 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fc698fe-9a7e-400d-af0c-64f85dfb46fa-utilities\") pod \"community-operators-b5r4n\" (UID: \"8fc698fe-9a7e-400d-af0c-64f85dfb46fa\") " pod="openshift-marketplace/community-operators-b5r4n" Oct 13 17:59:24 crc kubenswrapper[4720]: I1013 17:59:24.359056 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/8fc698fe-9a7e-400d-af0c-64f85dfb46fa-catalog-content\") pod \"community-operators-b5r4n\" (UID: \"8fc698fe-9a7e-400d-af0c-64f85dfb46fa\") " pod="openshift-marketplace/community-operators-b5r4n" Oct 13 17:59:24 crc kubenswrapper[4720]: I1013 17:59:24.380555 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcwv4\" (UniqueName: \"kubernetes.io/projected/8fc698fe-9a7e-400d-af0c-64f85dfb46fa-kube-api-access-tcwv4\") pod \"community-operators-b5r4n\" (UID: \"8fc698fe-9a7e-400d-af0c-64f85dfb46fa\") " pod="openshift-marketplace/community-operators-b5r4n" Oct 13 17:59:24 crc kubenswrapper[4720]: I1013 17:59:24.533113 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b5r4n" Oct 13 17:59:25 crc kubenswrapper[4720]: I1013 17:59:25.056165 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b5r4n"] Oct 13 17:59:25 crc kubenswrapper[4720]: W1013 17:59:25.058381 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8fc698fe_9a7e_400d_af0c_64f85dfb46fa.slice/crio-f7b43258da349cefa392ae28107f90868623c26966cb9d03bb3e96d81b703bae WatchSource:0}: Error finding container f7b43258da349cefa392ae28107f90868623c26966cb9d03bb3e96d81b703bae: Status 404 returned error can't find the container with id f7b43258da349cefa392ae28107f90868623c26966cb9d03bb3e96d81b703bae Oct 13 17:59:25 crc kubenswrapper[4720]: I1013 17:59:25.184025 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b5r4n" event={"ID":"8fc698fe-9a7e-400d-af0c-64f85dfb46fa","Type":"ContainerStarted","Data":"f7b43258da349cefa392ae28107f90868623c26966cb9d03bb3e96d81b703bae"} Oct 13 17:59:26 crc kubenswrapper[4720]: I1013 17:59:26.191256 4720 generic.go:334] "Generic (PLEG): container finished" podID="8fc698fe-9a7e-400d-af0c-64f85dfb46fa" containerID="3e722d4580dd9286f22b64e0d475eb36f1eb565d381f3f678ad4701ad29bc8e0" exitCode=0 Oct 13 17:59:26 crc kubenswrapper[4720]: I1013 17:59:26.191346 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b5r4n" event={"ID":"8fc698fe-9a7e-400d-af0c-64f85dfb46fa","Type":"ContainerDied","Data":"3e722d4580dd9286f22b64e0d475eb36f1eb565d381f3f678ad4701ad29bc8e0"} Oct 13 17:59:27 crc kubenswrapper[4720]: I1013 17:59:27.206718 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b5r4n" event={"ID":"8fc698fe-9a7e-400d-af0c-64f85dfb46fa","Type":"ContainerStarted","Data":"8d33a10ed2d11fc36c6c9001b6d1f2be26be51496a74cad75448e0197d52920e"} Oct 13 17:59:28 crc kubenswrapper[4720]: I1013 17:59:28.219366 4720 generic.go:334] "Generic (PLEG): container finished" podID="8fc698fe-9a7e-400d-af0c-64f85dfb46fa" containerID="8d33a10ed2d11fc36c6c9001b6d1f2be26be51496a74cad75448e0197d52920e" exitCode=0 Oct 13 17:59:28 crc kubenswrapper[4720]: I1013 17:59:28.219431 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b5r4n" event={"ID":"8fc698fe-9a7e-400d-af0c-64f85dfb46fa","Type":"ContainerDied","Data":"8d33a10ed2d11fc36c6c9001b6d1f2be26be51496a74cad75448e0197d52920e"} Oct 13 17:59:29 crc kubenswrapper[4720]: I1013 17:59:29.232909 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b5r4n" 
event={"ID":"8fc698fe-9a7e-400d-af0c-64f85dfb46fa","Type":"ContainerStarted","Data":"8d9a4d9fffdafb5e4bddf3a2f34fbb55d0402db502f5332b4976157d60fe1379"} Oct 13 17:59:29 crc kubenswrapper[4720]: I1013 17:59:29.262751 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-b5r4n" podStartSLOduration=2.8165814 podStartE2EDuration="5.262733408s" podCreationTimestamp="2025-10-13 17:59:24 +0000 UTC" firstStartedPulling="2025-10-13 17:59:26.193761895 +0000 UTC m=+2111.651012057" lastFinishedPulling="2025-10-13 17:59:28.639913893 +0000 UTC m=+2114.097164065" observedRunningTime="2025-10-13 17:59:29.255734598 +0000 UTC m=+2114.712984740" watchObservedRunningTime="2025-10-13 17:59:29.262733408 +0000 UTC m=+2114.719983540" Oct 13 17:59:34 crc kubenswrapper[4720]: I1013 17:59:34.533877 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-b5r4n" Oct 13 17:59:34 crc kubenswrapper[4720]: I1013 17:59:34.534215 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-b5r4n" Oct 13 17:59:34 crc kubenswrapper[4720]: I1013 17:59:34.590627 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-b5r4n" Oct 13 17:59:35 crc kubenswrapper[4720]: I1013 17:59:35.364672 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-b5r4n" Oct 13 17:59:35 crc kubenswrapper[4720]: I1013 17:59:35.427835 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-b5r4n"] Oct 13 17:59:37 crc kubenswrapper[4720]: I1013 17:59:37.320870 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-b5r4n" podUID="8fc698fe-9a7e-400d-af0c-64f85dfb46fa" containerName="registry-server" containerID="cri-o://8d9a4d9fffdafb5e4bddf3a2f34fbb55d0402db502f5332b4976157d60fe1379" gracePeriod=2 Oct 13 17:59:37 crc kubenswrapper[4720]: I1013 17:59:37.790539 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-b5r4n" Oct 13 17:59:37 crc kubenswrapper[4720]: I1013 17:59:37.868337 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tcwv4\" (UniqueName: \"kubernetes.io/projected/8fc698fe-9a7e-400d-af0c-64f85dfb46fa-kube-api-access-tcwv4\") pod \"8fc698fe-9a7e-400d-af0c-64f85dfb46fa\" (UID: \"8fc698fe-9a7e-400d-af0c-64f85dfb46fa\") " Oct 13 17:59:37 crc kubenswrapper[4720]: I1013 17:59:37.868401 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fc698fe-9a7e-400d-af0c-64f85dfb46fa-catalog-content\") pod \"8fc698fe-9a7e-400d-af0c-64f85dfb46fa\" (UID: \"8fc698fe-9a7e-400d-af0c-64f85dfb46fa\") " Oct 13 17:59:37 crc kubenswrapper[4720]: I1013 17:59:37.868425 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fc698fe-9a7e-400d-af0c-64f85dfb46fa-utilities\") pod \"8fc698fe-9a7e-400d-af0c-64f85dfb46fa\" (UID: \"8fc698fe-9a7e-400d-af0c-64f85dfb46fa\") " Oct 13 17:59:37 crc kubenswrapper[4720]: I1013 17:59:37.869581 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8fc698fe-9a7e-400d-af0c-64f85dfb46fa-utilities" (OuterVolumeSpecName: "utilities") pod "8fc698fe-9a7e-400d-af0c-64f85dfb46fa" (UID: "8fc698fe-9a7e-400d-af0c-64f85dfb46fa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 17:59:37 crc kubenswrapper[4720]: I1013 17:59:37.879133 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fc698fe-9a7e-400d-af0c-64f85dfb46fa-kube-api-access-tcwv4" (OuterVolumeSpecName: "kube-api-access-tcwv4") pod "8fc698fe-9a7e-400d-af0c-64f85dfb46fa" (UID: "8fc698fe-9a7e-400d-af0c-64f85dfb46fa"). InnerVolumeSpecName "kube-api-access-tcwv4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 17:59:37 crc kubenswrapper[4720]: I1013 17:59:37.923402 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8fc698fe-9a7e-400d-af0c-64f85dfb46fa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8fc698fe-9a7e-400d-af0c-64f85dfb46fa" (UID: "8fc698fe-9a7e-400d-af0c-64f85dfb46fa"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 17:59:37 crc kubenswrapper[4720]: I1013 17:59:37.970262 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tcwv4\" (UniqueName: \"kubernetes.io/projected/8fc698fe-9a7e-400d-af0c-64f85dfb46fa-kube-api-access-tcwv4\") on node \"crc\" DevicePath \"\"" Oct 13 17:59:37 crc kubenswrapper[4720]: I1013 17:59:37.970296 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fc698fe-9a7e-400d-af0c-64f85dfb46fa-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 13 17:59:37 crc kubenswrapper[4720]: I1013 17:59:37.970307 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fc698fe-9a7e-400d-af0c-64f85dfb46fa-utilities\") on node \"crc\" DevicePath \"\"" Oct 13 17:59:38 crc kubenswrapper[4720]: I1013 17:59:38.332539 4720 generic.go:334] "Generic (PLEG): container finished" podID="8fc698fe-9a7e-400d-af0c-64f85dfb46fa" containerID="8d9a4d9fffdafb5e4bddf3a2f34fbb55d0402db502f5332b4976157d60fe1379" exitCode=0 Oct 13 17:59:38 crc kubenswrapper[4720]: I1013 17:59:38.332618 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b5r4n" event={"ID":"8fc698fe-9a7e-400d-af0c-64f85dfb46fa","Type":"ContainerDied","Data":"8d9a4d9fffdafb5e4bddf3a2f34fbb55d0402db502f5332b4976157d60fe1379"} Oct 13 17:59:38 crc kubenswrapper[4720]: I1013 17:59:38.332692 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b5r4n" Oct 13 17:59:38 crc kubenswrapper[4720]: I1013 17:59:38.332942 4720 scope.go:117] "RemoveContainer" containerID="8d9a4d9fffdafb5e4bddf3a2f34fbb55d0402db502f5332b4976157d60fe1379" Oct 13 17:59:38 crc kubenswrapper[4720]: I1013 17:59:38.332920 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b5r4n" event={"ID":"8fc698fe-9a7e-400d-af0c-64f85dfb46fa","Type":"ContainerDied","Data":"f7b43258da349cefa392ae28107f90868623c26966cb9d03bb3e96d81b703bae"} Oct 13 17:59:38 crc kubenswrapper[4720]: I1013 17:59:38.378605 4720 scope.go:117] "RemoveContainer" containerID="8d33a10ed2d11fc36c6c9001b6d1f2be26be51496a74cad75448e0197d52920e" Oct 13 17:59:38 crc kubenswrapper[4720]: I1013 17:59:38.393164 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-b5r4n"] Oct 13 17:59:38 crc kubenswrapper[4720]: I1013 17:59:38.410180 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-b5r4n"] Oct 13 17:59:38 crc kubenswrapper[4720]: I1013 17:59:38.425529 4720 scope.go:117] "RemoveContainer" containerID="3e722d4580dd9286f22b64e0d475eb36f1eb565d381f3f678ad4701ad29bc8e0" Oct 13 17:59:38 crc kubenswrapper[4720]: I1013 17:59:38.477099 4720 scope.go:117] "RemoveContainer" containerID="8d9a4d9fffdafb5e4bddf3a2f34fbb55d0402db502f5332b4976157d60fe1379" Oct 13 17:59:38 crc kubenswrapper[4720]: E1013 17:59:38.477876 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d9a4d9fffdafb5e4bddf3a2f34fbb55d0402db502f5332b4976157d60fe1379\": container with ID starting with 8d9a4d9fffdafb5e4bddf3a2f34fbb55d0402db502f5332b4976157d60fe1379 not found: ID does not exist" containerID="8d9a4d9fffdafb5e4bddf3a2f34fbb55d0402db502f5332b4976157d60fe1379" Oct 13 17:59:38 crc kubenswrapper[4720]: I1013 17:59:38.477931 
4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d9a4d9fffdafb5e4bddf3a2f34fbb55d0402db502f5332b4976157d60fe1379"} err="failed to get container status \"8d9a4d9fffdafb5e4bddf3a2f34fbb55d0402db502f5332b4976157d60fe1379\": rpc error: code = NotFound desc = could not find container \"8d9a4d9fffdafb5e4bddf3a2f34fbb55d0402db502f5332b4976157d60fe1379\": container with ID starting with 8d9a4d9fffdafb5e4bddf3a2f34fbb55d0402db502f5332b4976157d60fe1379 not found: ID does not exist"
Oct 13 17:59:38 crc kubenswrapper[4720]: I1013 17:59:38.477964 4720 scope.go:117] "RemoveContainer" containerID="8d33a10ed2d11fc36c6c9001b6d1f2be26be51496a74cad75448e0197d52920e"
Oct 13 17:59:38 crc kubenswrapper[4720]: E1013 17:59:38.478467 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d33a10ed2d11fc36c6c9001b6d1f2be26be51496a74cad75448e0197d52920e\": container with ID starting with 8d33a10ed2d11fc36c6c9001b6d1f2be26be51496a74cad75448e0197d52920e not found: ID does not exist" containerID="8d33a10ed2d11fc36c6c9001b6d1f2be26be51496a74cad75448e0197d52920e"
Oct 13 17:59:38 crc kubenswrapper[4720]: I1013 17:59:38.478521 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d33a10ed2d11fc36c6c9001b6d1f2be26be51496a74cad75448e0197d52920e"} err="failed to get container status \"8d33a10ed2d11fc36c6c9001b6d1f2be26be51496a74cad75448e0197d52920e\": rpc error: code = NotFound desc = could not find container \"8d33a10ed2d11fc36c6c9001b6d1f2be26be51496a74cad75448e0197d52920e\": container with ID starting with 8d33a10ed2d11fc36c6c9001b6d1f2be26be51496a74cad75448e0197d52920e not found: ID does not exist"
Oct 13 17:59:38 crc kubenswrapper[4720]: I1013 17:59:38.478555 4720 scope.go:117] "RemoveContainer" containerID="3e722d4580dd9286f22b64e0d475eb36f1eb565d381f3f678ad4701ad29bc8e0"
Oct 13 17:59:38 crc kubenswrapper[4720]: E1013 17:59:38.479105 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e722d4580dd9286f22b64e0d475eb36f1eb565d381f3f678ad4701ad29bc8e0\": container with ID starting with 3e722d4580dd9286f22b64e0d475eb36f1eb565d381f3f678ad4701ad29bc8e0 not found: ID does not exist" containerID="3e722d4580dd9286f22b64e0d475eb36f1eb565d381f3f678ad4701ad29bc8e0"
Oct 13 17:59:38 crc kubenswrapper[4720]: I1013 17:59:38.479172 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e722d4580dd9286f22b64e0d475eb36f1eb565d381f3f678ad4701ad29bc8e0"} err="failed to get container status \"3e722d4580dd9286f22b64e0d475eb36f1eb565d381f3f678ad4701ad29bc8e0\": rpc error: code = NotFound desc = could not find container \"3e722d4580dd9286f22b64e0d475eb36f1eb565d381f3f678ad4701ad29bc8e0\": container with ID starting with 3e722d4580dd9286f22b64e0d475eb36f1eb565d381f3f678ad4701ad29bc8e0 not found: ID does not exist"
Oct 13 17:59:39 crc kubenswrapper[4720]: I1013 17:59:39.189386 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fc698fe-9a7e-400d-af0c-64f85dfb46fa" path="/var/lib/kubelet/pods/8fc698fe-9a7e-400d-af0c-64f85dfb46fa/volumes"
Oct 13 17:59:45 crc kubenswrapper[4720]: I1013 17:59:45.212992 4720 patch_prober.go:28] interesting pod/machine-config-daemon-htwnl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 13 17:59:45 crc kubenswrapper[4720]: I1013 17:59:45.214397 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 13 18:00:00 crc kubenswrapper[4720]: I1013 18:00:00.183079 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29339640-6bk47"]
Oct 13 18:00:00 crc kubenswrapper[4720]: E1013 18:00:00.184052 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fc698fe-9a7e-400d-af0c-64f85dfb46fa" containerName="extract-content"
Oct 13 18:00:00 crc kubenswrapper[4720]: I1013 18:00:00.184067 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fc698fe-9a7e-400d-af0c-64f85dfb46fa" containerName="extract-content"
Oct 13 18:00:00 crc kubenswrapper[4720]: E1013 18:00:00.184079 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fc698fe-9a7e-400d-af0c-64f85dfb46fa" containerName="extract-utilities"
Oct 13 18:00:00 crc kubenswrapper[4720]: I1013 18:00:00.184087 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fc698fe-9a7e-400d-af0c-64f85dfb46fa" containerName="extract-utilities"
Oct 13 18:00:00 crc kubenswrapper[4720]: E1013 18:00:00.184125 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fc698fe-9a7e-400d-af0c-64f85dfb46fa" containerName="registry-server"
Oct 13 18:00:00 crc kubenswrapper[4720]: I1013 18:00:00.184131 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fc698fe-9a7e-400d-af0c-64f85dfb46fa" containerName="registry-server"
Oct 13 18:00:00 crc kubenswrapper[4720]: I1013 18:00:00.186719 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fc698fe-9a7e-400d-af0c-64f85dfb46fa" containerName="registry-server"
Oct 13 18:00:00 crc kubenswrapper[4720]: I1013 18:00:00.187669 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29339640-6bk47"
Oct 13 18:00:00 crc kubenswrapper[4720]: I1013 18:00:00.190981 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Oct 13 18:00:00 crc kubenswrapper[4720]: I1013 18:00:00.191420 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Oct 13 18:00:00 crc kubenswrapper[4720]: I1013 18:00:00.197958 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29339640-6bk47"]
Oct 13 18:00:00 crc kubenswrapper[4720]: I1013 18:00:00.284968 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/49220229-7d5e-4014-9918-d9479cd53eff-config-volume\") pod \"collect-profiles-29339640-6bk47\" (UID: \"49220229-7d5e-4014-9918-d9479cd53eff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339640-6bk47"
Oct 13 18:00:00 crc kubenswrapper[4720]: I1013 18:00:00.285251 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5b4mw\" (UniqueName: \"kubernetes.io/projected/49220229-7d5e-4014-9918-d9479cd53eff-kube-api-access-5b4mw\") pod \"collect-profiles-29339640-6bk47\" (UID: \"49220229-7d5e-4014-9918-d9479cd53eff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339640-6bk47"
Oct 13 18:00:00 crc kubenswrapper[4720]: I1013 18:00:00.285363 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/49220229-7d5e-4014-9918-d9479cd53eff-secret-volume\") pod \"collect-profiles-29339640-6bk47\" (UID: \"49220229-7d5e-4014-9918-d9479cd53eff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339640-6bk47"
Oct 13 18:00:00 crc kubenswrapper[4720]: I1013 18:00:00.386638 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5b4mw\" (UniqueName: \"kubernetes.io/projected/49220229-7d5e-4014-9918-d9479cd53eff-kube-api-access-5b4mw\") pod \"collect-profiles-29339640-6bk47\" (UID: \"49220229-7d5e-4014-9918-d9479cd53eff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339640-6bk47"
Oct 13 18:00:00 crc kubenswrapper[4720]: I1013 18:00:00.386708 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/49220229-7d5e-4014-9918-d9479cd53eff-secret-volume\") pod \"collect-profiles-29339640-6bk47\" (UID: \"49220229-7d5e-4014-9918-d9479cd53eff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339640-6bk47"
Oct 13 18:00:00 crc kubenswrapper[4720]: I1013 18:00:00.386744 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/49220229-7d5e-4014-9918-d9479cd53eff-config-volume\") pod \"collect-profiles-29339640-6bk47\" (UID: \"49220229-7d5e-4014-9918-d9479cd53eff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339640-6bk47"
Oct 13 18:00:00 crc kubenswrapper[4720]: I1013 18:00:00.387702 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/49220229-7d5e-4014-9918-d9479cd53eff-config-volume\") pod \"collect-profiles-29339640-6bk47\" (UID: \"49220229-7d5e-4014-9918-d9479cd53eff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339640-6bk47"
Oct 13 18:00:00 crc kubenswrapper[4720]: I1013 18:00:00.401302 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/49220229-7d5e-4014-9918-d9479cd53eff-secret-volume\") pod \"collect-profiles-29339640-6bk47\" (UID: \"49220229-7d5e-4014-9918-d9479cd53eff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339640-6bk47"
Oct 13 18:00:00 crc kubenswrapper[4720]: I1013 18:00:00.421960 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5b4mw\" (UniqueName: \"kubernetes.io/projected/49220229-7d5e-4014-9918-d9479cd53eff-kube-api-access-5b4mw\") pod \"collect-profiles-29339640-6bk47\" (UID: \"49220229-7d5e-4014-9918-d9479cd53eff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339640-6bk47"
Oct 13 18:00:00 crc kubenswrapper[4720]: I1013 18:00:00.528427 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29339640-6bk47"
Oct 13 18:00:01 crc kubenswrapper[4720]: I1013 18:00:01.043311 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29339640-6bk47"]
Oct 13 18:00:01 crc kubenswrapper[4720]: I1013 18:00:01.628120 4720 generic.go:334] "Generic (PLEG): container finished" podID="49220229-7d5e-4014-9918-d9479cd53eff" containerID="b3484ccdf21e1c2d81284b4697993e940e0e137e9fb76f0c5ce44958cfb4388d" exitCode=0
Oct 13 18:00:01 crc kubenswrapper[4720]: I1013 18:00:01.628169 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29339640-6bk47" event={"ID":"49220229-7d5e-4014-9918-d9479cd53eff","Type":"ContainerDied","Data":"b3484ccdf21e1c2d81284b4697993e940e0e137e9fb76f0c5ce44958cfb4388d"}
Oct 13 18:00:01 crc kubenswrapper[4720]: I1013 18:00:01.628243 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29339640-6bk47" event={"ID":"49220229-7d5e-4014-9918-d9479cd53eff","Type":"ContainerStarted","Data":"d36252242feebe542d26454cdc0b360f05baefd8fcbe4ec291c27b9117de616b"}
Oct 13 18:00:02 crc kubenswrapper[4720]: I1013 18:00:02.987180 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29339640-6bk47"
Oct 13 18:00:03 crc kubenswrapper[4720]: I1013 18:00:03.133405 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5b4mw\" (UniqueName: \"kubernetes.io/projected/49220229-7d5e-4014-9918-d9479cd53eff-kube-api-access-5b4mw\") pod \"49220229-7d5e-4014-9918-d9479cd53eff\" (UID: \"49220229-7d5e-4014-9918-d9479cd53eff\") "
Oct 13 18:00:03 crc kubenswrapper[4720]: I1013 18:00:03.133551 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/49220229-7d5e-4014-9918-d9479cd53eff-secret-volume\") pod \"49220229-7d5e-4014-9918-d9479cd53eff\" (UID: \"49220229-7d5e-4014-9918-d9479cd53eff\") "
Oct 13 18:00:03 crc kubenswrapper[4720]: I1013 18:00:03.133655 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/49220229-7d5e-4014-9918-d9479cd53eff-config-volume\") pod \"49220229-7d5e-4014-9918-d9479cd53eff\" (UID: \"49220229-7d5e-4014-9918-d9479cd53eff\") "
Oct 13 18:00:03 crc kubenswrapper[4720]: I1013 18:00:03.134335 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49220229-7d5e-4014-9918-d9479cd53eff-config-volume" (OuterVolumeSpecName: "config-volume") pod "49220229-7d5e-4014-9918-d9479cd53eff" (UID: "49220229-7d5e-4014-9918-d9479cd53eff"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 13 18:00:03 crc kubenswrapper[4720]: I1013 18:00:03.141458 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49220229-7d5e-4014-9918-d9479cd53eff-kube-api-access-5b4mw" (OuterVolumeSpecName: "kube-api-access-5b4mw") pod "49220229-7d5e-4014-9918-d9479cd53eff" (UID: "49220229-7d5e-4014-9918-d9479cd53eff"). InnerVolumeSpecName "kube-api-access-5b4mw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 13 18:00:03 crc kubenswrapper[4720]: I1013 18:00:03.142395 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49220229-7d5e-4014-9918-d9479cd53eff-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "49220229-7d5e-4014-9918-d9479cd53eff" (UID: "49220229-7d5e-4014-9918-d9479cd53eff"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 18:00:03 crc kubenswrapper[4720]: I1013 18:00:03.235478 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5b4mw\" (UniqueName: \"kubernetes.io/projected/49220229-7d5e-4014-9918-d9479cd53eff-kube-api-access-5b4mw\") on node \"crc\" DevicePath \"\""
Oct 13 18:00:03 crc kubenswrapper[4720]: I1013 18:00:03.235508 4720 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/49220229-7d5e-4014-9918-d9479cd53eff-secret-volume\") on node \"crc\" DevicePath \"\""
Oct 13 18:00:03 crc kubenswrapper[4720]: I1013 18:00:03.235518 4720 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/49220229-7d5e-4014-9918-d9479cd53eff-config-volume\") on node \"crc\" DevicePath \"\""
Oct 13 18:00:03 crc kubenswrapper[4720]: I1013 18:00:03.651323 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29339640-6bk47" event={"ID":"49220229-7d5e-4014-9918-d9479cd53eff","Type":"ContainerDied","Data":"d36252242feebe542d26454cdc0b360f05baefd8fcbe4ec291c27b9117de616b"}
Oct 13 18:00:03 crc kubenswrapper[4720]: I1013 18:00:03.651368 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d36252242feebe542d26454cdc0b360f05baefd8fcbe4ec291c27b9117de616b"
Oct 13 18:00:03 crc kubenswrapper[4720]: I1013 18:00:03.651421 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29339640-6bk47"
Oct 13 18:00:04 crc kubenswrapper[4720]: I1013 18:00:04.066942 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29339595-f78vr"]
Oct 13 18:00:04 crc kubenswrapper[4720]: I1013 18:00:04.074267 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29339595-f78vr"]
Oct 13 18:00:05 crc kubenswrapper[4720]: I1013 18:00:05.181810 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a6d69da-3074-4b30-898b-4bb2eea1fb75" path="/var/lib/kubelet/pods/0a6d69da-3074-4b30-898b-4bb2eea1fb75/volumes"
Oct 13 18:00:15 crc kubenswrapper[4720]: I1013 18:00:15.213161 4720 patch_prober.go:28] interesting pod/machine-config-daemon-htwnl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 13 18:00:15 crc kubenswrapper[4720]: I1013 18:00:15.213858 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 13 18:00:15 crc kubenswrapper[4720]: I1013 18:00:15.213920 4720 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-htwnl"
Oct 13 18:00:15 crc kubenswrapper[4720]: I1013 18:00:15.214870 4720 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f16b061e305a205bc6cd67c30a1e6ff436af254fc7370be55f944123cc9cd230"} pod="openshift-machine-config-operator/machine-config-daemon-htwnl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Oct 13 18:00:15 crc kubenswrapper[4720]: I1013 18:00:15.215130 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" containerName="machine-config-daemon" containerID="cri-o://f16b061e305a205bc6cd67c30a1e6ff436af254fc7370be55f944123cc9cd230" gracePeriod=600
Oct 13 18:00:15 crc kubenswrapper[4720]: E1013 18:00:15.347457 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4"
Oct 13 18:00:15 crc kubenswrapper[4720]: I1013 18:00:15.794256 4720 generic.go:334] "Generic (PLEG): container finished" podID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" containerID="f16b061e305a205bc6cd67c30a1e6ff436af254fc7370be55f944123cc9cd230" exitCode=0
Oct 13 18:00:15 crc kubenswrapper[4720]: I1013 18:00:15.794312 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" event={"ID":"ce442c80-fcde-4b79-b6f9-f8f25771dfd4","Type":"ContainerDied","Data":"f16b061e305a205bc6cd67c30a1e6ff436af254fc7370be55f944123cc9cd230"}
Oct 13 18:00:15 crc kubenswrapper[4720]: I1013 18:00:15.794359 4720 scope.go:117] "RemoveContainer" containerID="a04e41707b3ca4c901c7d1fed0a4b9bbfe355d3cbe36208efddf6c91d80563e3"
Oct 13 18:00:15 crc kubenswrapper[4720]: I1013 18:00:15.795387 4720 scope.go:117] "RemoveContainer" containerID="f16b061e305a205bc6cd67c30a1e6ff436af254fc7370be55f944123cc9cd230"
Oct 13 18:00:15 crc kubenswrapper[4720]: E1013 18:00:15.796594 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4"
Oct 13 18:00:19 crc kubenswrapper[4720]: I1013 18:00:19.991300 4720 scope.go:117] "RemoveContainer" containerID="f3e5ed6dd43e4d22a6c59b3c81a2671caa27241e6072846ed5c0838f28f48a31"
Oct 13 18:00:27 crc kubenswrapper[4720]: I1013 18:00:27.168787 4720 scope.go:117] "RemoveContainer" containerID="f16b061e305a205bc6cd67c30a1e6ff436af254fc7370be55f944123cc9cd230"
Oct 13 18:00:27 crc kubenswrapper[4720]: E1013 18:00:27.169292 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4"
Oct 13 18:00:39 crc kubenswrapper[4720]: I1013 18:00:39.168366 4720 scope.go:117] "RemoveContainer" containerID="f16b061e305a205bc6cd67c30a1e6ff436af254fc7370be55f944123cc9cd230"
Oct 13 18:00:39 crc kubenswrapper[4720]: E1013 18:00:39.168998 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4"
Oct 13 18:00:53 crc kubenswrapper[4720]: I1013 18:00:53.169968 4720 scope.go:117] "RemoveContainer" containerID="f16b061e305a205bc6cd67c30a1e6ff436af254fc7370be55f944123cc9cd230"
Oct 13 18:00:53 crc kubenswrapper[4720]: E1013 18:00:53.173069 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4"
Oct 13 18:01:00 crc kubenswrapper[4720]: I1013 18:01:00.191442 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29339641-fqkjh"]
Oct 13 18:01:00 crc kubenswrapper[4720]: E1013 18:01:00.194097 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49220229-7d5e-4014-9918-d9479cd53eff" containerName="collect-profiles"
Oct 13 18:01:00 crc kubenswrapper[4720]: I1013 18:01:00.194270 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="49220229-7d5e-4014-9918-d9479cd53eff" containerName="collect-profiles"
Oct 13 18:01:00 crc kubenswrapper[4720]: I1013 18:01:00.194756 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="49220229-7d5e-4014-9918-d9479cd53eff" containerName="collect-profiles"
Oct 13 18:01:00 crc kubenswrapper[4720]: I1013 18:01:00.195824 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29339641-fqkjh"
Oct 13 18:01:00 crc kubenswrapper[4720]: I1013 18:01:00.205281 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29339641-fqkjh"]
Oct 13 18:01:00 crc kubenswrapper[4720]: I1013 18:01:00.327705 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f21ba5f0-a0a5-4a29-9025-614d7f33c643-fernet-keys\") pod \"keystone-cron-29339641-fqkjh\" (UID: \"f21ba5f0-a0a5-4a29-9025-614d7f33c643\") " pod="openstack/keystone-cron-29339641-fqkjh"
Oct 13 18:01:00 crc kubenswrapper[4720]: I1013 18:01:00.327768 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f21ba5f0-a0a5-4a29-9025-614d7f33c643-config-data\") pod \"keystone-cron-29339641-fqkjh\" (UID: \"f21ba5f0-a0a5-4a29-9025-614d7f33c643\") " pod="openstack/keystone-cron-29339641-fqkjh"
Oct 13 18:01:00 crc kubenswrapper[4720]: I1013 18:01:00.327914 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f21ba5f0-a0a5-4a29-9025-614d7f33c643-combined-ca-bundle\") pod \"keystone-cron-29339641-fqkjh\" (UID: \"f21ba5f0-a0a5-4a29-9025-614d7f33c643\") " pod="openstack/keystone-cron-29339641-fqkjh"
Oct 13 18:01:00 crc kubenswrapper[4720]: I1013 18:01:00.327938 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jcwl\" (UniqueName: \"kubernetes.io/projected/f21ba5f0-a0a5-4a29-9025-614d7f33c643-kube-api-access-6jcwl\") pod \"keystone-cron-29339641-fqkjh\" (UID: \"f21ba5f0-a0a5-4a29-9025-614d7f33c643\") " pod="openstack/keystone-cron-29339641-fqkjh"
Oct 13 18:01:00 crc kubenswrapper[4720]: I1013 18:01:00.429411 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f21ba5f0-a0a5-4a29-9025-614d7f33c643-fernet-keys\") pod \"keystone-cron-29339641-fqkjh\" (UID: \"f21ba5f0-a0a5-4a29-9025-614d7f33c643\") " pod="openstack/keystone-cron-29339641-fqkjh"
Oct 13 18:01:00 crc kubenswrapper[4720]: I1013 18:01:00.429479 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f21ba5f0-a0a5-4a29-9025-614d7f33c643-config-data\") pod \"keystone-cron-29339641-fqkjh\" (UID: \"f21ba5f0-a0a5-4a29-9025-614d7f33c643\") " pod="openstack/keystone-cron-29339641-fqkjh"
Oct 13 18:01:00 crc kubenswrapper[4720]: I1013 18:01:00.429580 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f21ba5f0-a0a5-4a29-9025-614d7f33c643-combined-ca-bundle\") pod \"keystone-cron-29339641-fqkjh\" (UID: \"f21ba5f0-a0a5-4a29-9025-614d7f33c643\") " pod="openstack/keystone-cron-29339641-fqkjh"
Oct 13 18:01:00 crc kubenswrapper[4720]: I1013 18:01:00.429608 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jcwl\" (UniqueName: \"kubernetes.io/projected/f21ba5f0-a0a5-4a29-9025-614d7f33c643-kube-api-access-6jcwl\") pod \"keystone-cron-29339641-fqkjh\" (UID: \"f21ba5f0-a0a5-4a29-9025-614d7f33c643\") " pod="openstack/keystone-cron-29339641-fqkjh"
Oct 13 18:01:00 crc kubenswrapper[4720]: I1013 18:01:00.435060 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f21ba5f0-a0a5-4a29-9025-614d7f33c643-fernet-keys\") pod \"keystone-cron-29339641-fqkjh\" (UID: \"f21ba5f0-a0a5-4a29-9025-614d7f33c643\") " pod="openstack/keystone-cron-29339641-fqkjh"
Oct 13 18:01:00 crc kubenswrapper[4720]: I1013 18:01:00.435996 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f21ba5f0-a0a5-4a29-9025-614d7f33c643-config-data\") pod \"keystone-cron-29339641-fqkjh\" (UID: \"f21ba5f0-a0a5-4a29-9025-614d7f33c643\") " pod="openstack/keystone-cron-29339641-fqkjh"
Oct 13 18:01:00 crc kubenswrapper[4720]: I1013 18:01:00.446973 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f21ba5f0-a0a5-4a29-9025-614d7f33c643-combined-ca-bundle\") pod \"keystone-cron-29339641-fqkjh\" (UID: \"f21ba5f0-a0a5-4a29-9025-614d7f33c643\") " pod="openstack/keystone-cron-29339641-fqkjh"
Oct 13 18:01:00 crc kubenswrapper[4720]: I1013 18:01:00.449401 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jcwl\" (UniqueName: \"kubernetes.io/projected/f21ba5f0-a0a5-4a29-9025-614d7f33c643-kube-api-access-6jcwl\") pod \"keystone-cron-29339641-fqkjh\" (UID: \"f21ba5f0-a0a5-4a29-9025-614d7f33c643\") " pod="openstack/keystone-cron-29339641-fqkjh"
Oct 13 18:01:00 crc kubenswrapper[4720]: I1013 18:01:00.522302 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29339641-fqkjh"
Oct 13 18:01:01 crc kubenswrapper[4720]: I1013 18:01:01.009494 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29339641-fqkjh"]
Oct 13 18:01:01 crc kubenswrapper[4720]: I1013 18:01:01.365801 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29339641-fqkjh" event={"ID":"f21ba5f0-a0a5-4a29-9025-614d7f33c643","Type":"ContainerStarted","Data":"60ea2dbcb86764b59fe16e7b470047723856e72d76b202d2f05d26f10e116fe0"}
Oct 13 18:01:01 crc kubenswrapper[4720]: I1013 18:01:01.367369 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29339641-fqkjh" event={"ID":"f21ba5f0-a0a5-4a29-9025-614d7f33c643","Type":"ContainerStarted","Data":"b2e88cf5bbfccdecd929503da730ecdc54c5b98c0f9002f2553b2602897e64a6"}
Oct 13 18:01:01 crc kubenswrapper[4720]: I1013 18:01:01.385808 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29339641-fqkjh" podStartSLOduration=1.385782189 podStartE2EDuration="1.385782189s" podCreationTimestamp="2025-10-13 18:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 18:01:01.378549212 +0000 UTC m=+2206.835799344" watchObservedRunningTime="2025-10-13 18:01:01.385782189 +0000 UTC m=+2206.843032321"
Oct 13 18:01:03 crc kubenswrapper[4720]: I1013 18:01:03.397800 4720 generic.go:334] "Generic (PLEG): container finished" podID="f21ba5f0-a0a5-4a29-9025-614d7f33c643" containerID="60ea2dbcb86764b59fe16e7b470047723856e72d76b202d2f05d26f10e116fe0" exitCode=0
Oct 13 18:01:03 crc kubenswrapper[4720]: I1013 18:01:03.397879 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29339641-fqkjh" event={"ID":"f21ba5f0-a0a5-4a29-9025-614d7f33c643","Type":"ContainerDied","Data":"60ea2dbcb86764b59fe16e7b470047723856e72d76b202d2f05d26f10e116fe0"}
Oct 13 18:01:04 crc kubenswrapper[4720]: I1013 18:01:04.169240 4720 scope.go:117] "RemoveContainer" containerID="f16b061e305a205bc6cd67c30a1e6ff436af254fc7370be55f944123cc9cd230"
Oct 13 18:01:04 crc kubenswrapper[4720]: E1013 18:01:04.169750 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4"
Oct 13 18:01:04 crc kubenswrapper[4720]: I1013 18:01:04.734246 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29339641-fqkjh"
Oct 13 18:01:04 crc kubenswrapper[4720]: I1013 18:01:04.825755 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f21ba5f0-a0a5-4a29-9025-614d7f33c643-combined-ca-bundle\") pod \"f21ba5f0-a0a5-4a29-9025-614d7f33c643\" (UID: \"f21ba5f0-a0a5-4a29-9025-614d7f33c643\") "
Oct 13 18:01:04 crc kubenswrapper[4720]: I1013 18:01:04.825950 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f21ba5f0-a0a5-4a29-9025-614d7f33c643-fernet-keys\") pod \"f21ba5f0-a0a5-4a29-9025-614d7f33c643\" (UID: \"f21ba5f0-a0a5-4a29-9025-614d7f33c643\") "
Oct 13 18:01:04 crc kubenswrapper[4720]: I1013 18:01:04.826009 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f21ba5f0-a0a5-4a29-9025-614d7f33c643-config-data\") pod \"f21ba5f0-a0a5-4a29-9025-614d7f33c643\" (UID: \"f21ba5f0-a0a5-4a29-9025-614d7f33c643\") "
Oct 13 18:01:04 crc kubenswrapper[4720]: I1013 18:01:04.826117 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jcwl\" (UniqueName: \"kubernetes.io/projected/f21ba5f0-a0a5-4a29-9025-614d7f33c643-kube-api-access-6jcwl\") pod \"f21ba5f0-a0a5-4a29-9025-614d7f33c643\" (UID: \"f21ba5f0-a0a5-4a29-9025-614d7f33c643\") "
Oct 13 18:01:04 crc kubenswrapper[4720]: I1013 18:01:04.831958 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f21ba5f0-a0a5-4a29-9025-614d7f33c643-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "f21ba5f0-a0a5-4a29-9025-614d7f33c643" (UID: "f21ba5f0-a0a5-4a29-9025-614d7f33c643"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 18:01:04 crc kubenswrapper[4720]: I1013 18:01:04.835432 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f21ba5f0-a0a5-4a29-9025-614d7f33c643-kube-api-access-6jcwl" (OuterVolumeSpecName: "kube-api-access-6jcwl") pod "f21ba5f0-a0a5-4a29-9025-614d7f33c643" (UID: "f21ba5f0-a0a5-4a29-9025-614d7f33c643"). InnerVolumeSpecName "kube-api-access-6jcwl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 13 18:01:04 crc kubenswrapper[4720]: I1013 18:01:04.865276 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f21ba5f0-a0a5-4a29-9025-614d7f33c643-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f21ba5f0-a0a5-4a29-9025-614d7f33c643" (UID: "f21ba5f0-a0a5-4a29-9025-614d7f33c643"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 18:01:04 crc kubenswrapper[4720]: I1013 18:01:04.913618 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f21ba5f0-a0a5-4a29-9025-614d7f33c643-config-data" (OuterVolumeSpecName: "config-data") pod "f21ba5f0-a0a5-4a29-9025-614d7f33c643" (UID: "f21ba5f0-a0a5-4a29-9025-614d7f33c643"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 18:01:04 crc kubenswrapper[4720]: I1013 18:01:04.928460 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6jcwl\" (UniqueName: \"kubernetes.io/projected/f21ba5f0-a0a5-4a29-9025-614d7f33c643-kube-api-access-6jcwl\") on node \"crc\" DevicePath \"\""
Oct 13 18:01:04 crc kubenswrapper[4720]: I1013 18:01:04.928487 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f21ba5f0-a0a5-4a29-9025-614d7f33c643-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 13 18:01:04 crc kubenswrapper[4720]: I1013 18:01:04.928496 4720 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f21ba5f0-a0a5-4a29-9025-614d7f33c643-fernet-keys\") on node \"crc\" DevicePath \"\""
Oct 13 18:01:04 crc kubenswrapper[4720]: I1013 18:01:04.928504 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f21ba5f0-a0a5-4a29-9025-614d7f33c643-config-data\") on node \"crc\" DevicePath \"\""
Oct 13 18:01:05 crc kubenswrapper[4720]: I1013 18:01:05.419338 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29339641-fqkjh" event={"ID":"f21ba5f0-a0a5-4a29-9025-614d7f33c643","Type":"ContainerDied","Data":"b2e88cf5bbfccdecd929503da730ecdc54c5b98c0f9002f2553b2602897e64a6"}
Oct 13 18:01:05 crc kubenswrapper[4720]: I1013 18:01:05.419801 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2e88cf5bbfccdecd929503da730ecdc54c5b98c0f9002f2553b2602897e64a6"
Oct 13 18:01:05 crc kubenswrapper[4720]: I1013 18:01:05.419487 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29339641-fqkjh"
Oct 13 18:01:15 crc kubenswrapper[4720]: I1013 18:01:15.183078 4720 scope.go:117] "RemoveContainer" containerID="f16b061e305a205bc6cd67c30a1e6ff436af254fc7370be55f944123cc9cd230"
Oct 13 18:01:15 crc kubenswrapper[4720]: E1013 18:01:15.184486 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4"
Oct 13 18:01:27 crc kubenswrapper[4720]: I1013 18:01:27.169088 4720 scope.go:117] "RemoveContainer" containerID="f16b061e305a205bc6cd67c30a1e6ff436af254fc7370be55f944123cc9cd230"
Oct 13 18:01:27 crc kubenswrapper[4720]: E1013 18:01:27.170531 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4"
Oct 13 18:01:39 crc kubenswrapper[4720]: I1013 18:01:39.168558 4720 scope.go:117] "RemoveContainer" containerID="f16b061e305a205bc6cd67c30a1e6ff436af254fc7370be55f944123cc9cd230"
Oct 13 18:01:39 crc kubenswrapper[4720]: E1013 18:01:39.169689 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4"
Oct 13 18:01:51 crc kubenswrapper[4720]: I1013 18:01:51.168006 4720 scope.go:117] "RemoveContainer" containerID="f16b061e305a205bc6cd67c30a1e6ff436af254fc7370be55f944123cc9cd230"
Oct 13 18:01:51 crc kubenswrapper[4720]: E1013 18:01:51.168616 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4"
Oct 13 18:02:02 crc kubenswrapper[4720]: I1013 18:02:02.168585 4720 scope.go:117] "RemoveContainer" containerID="f16b061e305a205bc6cd67c30a1e6ff436af254fc7370be55f944123cc9cd230"
Oct 13 18:02:02 crc kubenswrapper[4720]: E1013 18:02:02.169637 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4"
Oct 13 18:02:08 crc kubenswrapper[4720]: I1013 18:02:08.155404 4720 generic.go:334] "Generic (PLEG): container finished" podID="0b253b74-8253-44c4-962c-b01331772a19" containerID="c497def0fc86eecbbcf43bcef3b7a3d019afd3fc8bc3bb51eb6d0f02a592baee" exitCode=0
Oct 13 18:02:08 crc kubenswrapper[4720]: I1013 18:02:08.156162 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ht9ww" event={"ID":"0b253b74-8253-44c4-962c-b01331772a19","Type":"ContainerDied","Data":"c497def0fc86eecbbcf43bcef3b7a3d019afd3fc8bc3bb51eb6d0f02a592baee"}
Oct 13 18:02:09 crc kubenswrapper[4720]: I1013 18:02:09.688114 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ht9ww"
Oct 13 18:02:09 crc kubenswrapper[4720]: I1013 18:02:09.804885 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqskz\" (UniqueName: \"kubernetes.io/projected/0b253b74-8253-44c4-962c-b01331772a19-kube-api-access-kqskz\") pod \"0b253b74-8253-44c4-962c-b01331772a19\" (UID: \"0b253b74-8253-44c4-962c-b01331772a19\") "
Oct 13 18:02:09 crc kubenswrapper[4720]: I1013 18:02:09.804969 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/0b253b74-8253-44c4-962c-b01331772a19-libvirt-secret-0\") pod \"0b253b74-8253-44c4-962c-b01331772a19\" (UID: \"0b253b74-8253-44c4-962c-b01331772a19\") "
Oct 13 18:02:09 crc kubenswrapper[4720]: I1013 18:02:09.804996 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b253b74-8253-44c4-962c-b01331772a19-libvirt-combined-ca-bundle\") pod \"0b253b74-8253-44c4-962c-b01331772a19\" (UID: \"0b253b74-8253-44c4-962c-b01331772a19\") "
Oct 13 18:02:09 crc kubenswrapper[4720]: I1013 18:02:09.805046 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0b253b74-8253-44c4-962c-b01331772a19-ssh-key\") pod \"0b253b74-8253-44c4-962c-b01331772a19\" (UID: \"0b253b74-8253-44c4-962c-b01331772a19\") "
Oct 13 18:02:09 crc kubenswrapper[4720]: I1013 18:02:09.805110 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0b253b74-8253-44c4-962c-b01331772a19-inventory\") pod \"0b253b74-8253-44c4-962c-b01331772a19\" (UID: \"0b253b74-8253-44c4-962c-b01331772a19\") "
Oct 13 18:02:09 crc kubenswrapper[4720]: I1013 18:02:09.811268 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b253b74-8253-44c4-962c-b01331772a19-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "0b253b74-8253-44c4-962c-b01331772a19" (UID: "0b253b74-8253-44c4-962c-b01331772a19"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 18:02:09 crc kubenswrapper[4720]: I1013 18:02:09.816807 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b253b74-8253-44c4-962c-b01331772a19-kube-api-access-kqskz" (OuterVolumeSpecName: "kube-api-access-kqskz") pod "0b253b74-8253-44c4-962c-b01331772a19" (UID: "0b253b74-8253-44c4-962c-b01331772a19"). InnerVolumeSpecName "kube-api-access-kqskz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 13 18:02:09 crc kubenswrapper[4720]: I1013 18:02:09.838076 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b253b74-8253-44c4-962c-b01331772a19-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "0b253b74-8253-44c4-962c-b01331772a19" (UID: "0b253b74-8253-44c4-962c-b01331772a19"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 18:02:09 crc kubenswrapper[4720]: I1013 18:02:09.839141 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b253b74-8253-44c4-962c-b01331772a19-inventory" (OuterVolumeSpecName: "inventory") pod "0b253b74-8253-44c4-962c-b01331772a19" (UID: "0b253b74-8253-44c4-962c-b01331772a19"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 18:02:09 crc kubenswrapper[4720]: I1013 18:02:09.840817 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b253b74-8253-44c4-962c-b01331772a19-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "0b253b74-8253-44c4-962c-b01331772a19" (UID: "0b253b74-8253-44c4-962c-b01331772a19"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 18:02:09 crc kubenswrapper[4720]: I1013 18:02:09.907430 4720 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0b253b74-8253-44c4-962c-b01331772a19-ssh-key\") on node \"crc\" DevicePath \"\""
Oct 13 18:02:09 crc kubenswrapper[4720]: I1013 18:02:09.907467 4720 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0b253b74-8253-44c4-962c-b01331772a19-inventory\") on node \"crc\" DevicePath \"\""
Oct 13 18:02:09 crc kubenswrapper[4720]: I1013 18:02:09.907477 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kqskz\" (UniqueName: \"kubernetes.io/projected/0b253b74-8253-44c4-962c-b01331772a19-kube-api-access-kqskz\") on node \"crc\" DevicePath \"\""
Oct 13 18:02:09 crc kubenswrapper[4720]: I1013 18:02:09.907487 4720 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/0b253b74-8253-44c4-962c-b01331772a19-libvirt-secret-0\") on node \"crc\" DevicePath \"\""
Oct 13 18:02:09 crc kubenswrapper[4720]: I1013 18:02:09.907496 4720 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b253b74-8253-44c4-962c-b01331772a19-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 13 18:02:10 crc kubenswrapper[4720]: I1013 18:02:10.179727 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ht9ww" event={"ID":"0b253b74-8253-44c4-962c-b01331772a19","Type":"ContainerDied","Data":"cd4b1dfd3ccf2cea1d0063eb97e7e2da1cd1bdb94d4ada8c61fd60088e96fc4c"}
Oct 13 18:02:10 crc kubenswrapper[4720]: I1013 18:02:10.179769 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd4b1dfd3ccf2cea1d0063eb97e7e2da1cd1bdb94d4ada8c61fd60088e96fc4c"
Oct 13 18:02:10 crc kubenswrapper[4720]: I1013 18:02:10.179826 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-ht9ww"
Oct 13 18:02:10 crc kubenswrapper[4720]: I1013 18:02:10.301523 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-tgnks"]
Oct 13 18:02:10 crc kubenswrapper[4720]: E1013 18:02:10.301919 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f21ba5f0-a0a5-4a29-9025-614d7f33c643" containerName="keystone-cron"
Oct 13 18:02:10 crc kubenswrapper[4720]: I1013 18:02:10.301962 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="f21ba5f0-a0a5-4a29-9025-614d7f33c643" containerName="keystone-cron"
Oct 13 18:02:10 crc kubenswrapper[4720]: E1013 18:02:10.301985 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b253b74-8253-44c4-962c-b01331772a19" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Oct 13 18:02:10 crc kubenswrapper[4720]: I1013 18:02:10.301998 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b253b74-8253-44c4-962c-b01331772a19" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Oct 13 18:02:10 crc kubenswrapper[4720]: I1013 18:02:10.302276 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b253b74-8253-44c4-962c-b01331772a19" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Oct 13 18:02:10 crc kubenswrapper[4720]: I1013 18:02:10.302327 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="f21ba5f0-a0a5-4a29-9025-614d7f33c643" containerName="keystone-cron"
Oct 13 18:02:10 crc kubenswrapper[4720]: I1013 18:02:10.303071 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tgnks"
Oct 13 18:02:10 crc kubenswrapper[4720]: I1013 18:02:10.307045 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Oct 13 18:02:10 crc kubenswrapper[4720]: I1013 18:02:10.307045 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Oct 13 18:02:10 crc kubenswrapper[4720]: I1013 18:02:10.307229 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key"
Oct 13 18:02:10 crc kubenswrapper[4720]: I1013 18:02:10.307299 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l2fds"
Oct 13 18:02:10 crc kubenswrapper[4720]: I1013 18:02:10.307427 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config"
Oct 13 18:02:10 crc kubenswrapper[4720]: I1013 18:02:10.307439 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Oct 13 18:02:10 crc kubenswrapper[4720]: I1013 18:02:10.307697 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config"
Oct 13 18:02:10 crc kubenswrapper[4720]: I1013 18:02:10.312717 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-tgnks"]
Oct 13 18:02:10 crc kubenswrapper[4720]: I1013 18:02:10.416493 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/9810822f-63d1-4a31-bde3-6353a5ee9007-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tgnks\" (UID: \"9810822f-63d1-4a31-bde3-6353a5ee9007\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tgnks"
Oct 13 18:02:10 crc kubenswrapper[4720]: I1013 18:02:10.416599 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9810822f-63d1-4a31-bde3-6353a5ee9007-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tgnks\" (UID: \"9810822f-63d1-4a31-bde3-6353a5ee9007\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tgnks"
Oct 13 18:02:10 crc kubenswrapper[4720]: I1013 18:02:10.416637 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/9810822f-63d1-4a31-bde3-6353a5ee9007-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tgnks\" (UID: \"9810822f-63d1-4a31-bde3-6353a5ee9007\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tgnks"
Oct 13 18:02:10 crc kubenswrapper[4720]: I1013 18:02:10.416730 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/9810822f-63d1-4a31-bde3-6353a5ee9007-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tgnks\" (UID: \"9810822f-63d1-4a31-bde3-6353a5ee9007\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tgnks"
Oct 13 18:02:10 crc kubenswrapper[4720]: I1013 18:02:10.416807 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/9810822f-63d1-4a31-bde3-6353a5ee9007-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tgnks\" (UID: \"9810822f-63d1-4a31-bde3-6353a5ee9007\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tgnks"
Oct 13 18:02:10 crc kubenswrapper[4720]: I1013 18:02:10.416855 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/9810822f-63d1-4a31-bde3-6353a5ee9007-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tgnks\" (UID: \"9810822f-63d1-4a31-bde3-6353a5ee9007\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tgnks"
Oct 13 18:02:10 crc kubenswrapper[4720]: I1013 18:02:10.416912 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7fjm\" (UniqueName: \"kubernetes.io/projected/9810822f-63d1-4a31-bde3-6353a5ee9007-kube-api-access-d7fjm\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tgnks\" (UID: \"9810822f-63d1-4a31-bde3-6353a5ee9007\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tgnks"
Oct 13 18:02:10 crc kubenswrapper[4720]: I1013 18:02:10.416944 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9810822f-63d1-4a31-bde3-6353a5ee9007-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tgnks\" (UID: \"9810822f-63d1-4a31-bde3-6353a5ee9007\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tgnks"
Oct 13 18:02:10 crc kubenswrapper[4720]: I1013 18:02:10.416993 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9810822f-63d1-4a31-bde3-6353a5ee9007-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tgnks\" (UID: \"9810822f-63d1-4a31-bde3-6353a5ee9007\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tgnks"
Oct 13 18:02:10 crc kubenswrapper[4720]: I1013 18:02:10.519005 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/9810822f-63d1-4a31-bde3-6353a5ee9007-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tgnks\" (UID: \"9810822f-63d1-4a31-bde3-6353a5ee9007\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tgnks"
Oct 13 18:02:10 crc kubenswrapper[4720]: I1013 18:02:10.519099 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/9810822f-63d1-4a31-bde3-6353a5ee9007-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tgnks\" (UID: \"9810822f-63d1-4a31-bde3-6353a5ee9007\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tgnks"
Oct 13 18:02:10 crc kubenswrapper[4720]: I1013 18:02:10.519173 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7fjm\" (UniqueName: \"kubernetes.io/projected/9810822f-63d1-4a31-bde3-6353a5ee9007-kube-api-access-d7fjm\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tgnks\" (UID: \"9810822f-63d1-4a31-bde3-6353a5ee9007\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tgnks"
Oct 13 18:02:10 crc kubenswrapper[4720]: I1013 18:02:10.519244 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9810822f-63d1-4a31-bde3-6353a5ee9007-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tgnks\" (UID: \"9810822f-63d1-4a31-bde3-6353a5ee9007\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tgnks"
Oct 13 18:02:10 crc kubenswrapper[4720]: I1013 18:02:10.519310 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9810822f-63d1-4a31-bde3-6353a5ee9007-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tgnks\" (UID: \"9810822f-63d1-4a31-bde3-6353a5ee9007\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tgnks"
Oct 13 18:02:10 crc kubenswrapper[4720]: I1013 18:02:10.519368 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/9810822f-63d1-4a31-bde3-6353a5ee9007-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tgnks\" (UID: \"9810822f-63d1-4a31-bde3-6353a5ee9007\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tgnks"
Oct 13 18:02:10 crc kubenswrapper[4720]: I1013 18:02:10.519443 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9810822f-63d1-4a31-bde3-6353a5ee9007-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tgnks\" (UID: \"9810822f-63d1-4a31-bde3-6353a5ee9007\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tgnks"
Oct 13 18:02:10 crc kubenswrapper[4720]: I1013 18:02:10.519484 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/9810822f-63d1-4a31-bde3-6353a5ee9007-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tgnks\" (UID: \"9810822f-63d1-4a31-bde3-6353a5ee9007\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tgnks"
Oct 13 18:02:10 crc kubenswrapper[4720]: I1013 18:02:10.519613 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/9810822f-63d1-4a31-bde3-6353a5ee9007-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tgnks\" (UID: \"9810822f-63d1-4a31-bde3-6353a5ee9007\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tgnks"
Oct 13 18:02:10 crc kubenswrapper[4720]: I1013 18:02:10.520336 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/9810822f-63d1-4a31-bde3-6353a5ee9007-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tgnks\" (UID: \"9810822f-63d1-4a31-bde3-6353a5ee9007\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tgnks"
Oct 13 18:02:10 crc kubenswrapper[4720]: I1013 18:02:10.524528 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/9810822f-63d1-4a31-bde3-6353a5ee9007-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tgnks\" (UID: \"9810822f-63d1-4a31-bde3-6353a5ee9007\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tgnks"
Oct 13 18:02:10 crc kubenswrapper[4720]: I1013 18:02:10.525622 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/9810822f-63d1-4a31-bde3-6353a5ee9007-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tgnks\" (UID: \"9810822f-63d1-4a31-bde3-6353a5ee9007\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tgnks"
Oct 13 18:02:10 crc kubenswrapper[4720]: I1013 18:02:10.525856 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9810822f-63d1-4a31-bde3-6353a5ee9007-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tgnks\" (UID: \"9810822f-63d1-4a31-bde3-6353a5ee9007\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tgnks"
Oct 13 18:02:10 crc kubenswrapper[4720]: I1013 18:02:10.526622 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9810822f-63d1-4a31-bde3-6353a5ee9007-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tgnks\" (UID: \"9810822f-63d1-4a31-bde3-6353a5ee9007\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tgnks"
Oct 13 18:02:10 crc kubenswrapper[4720]: I1013 18:02:10.527404 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9810822f-63d1-4a31-bde3-6353a5ee9007-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tgnks\" (UID: \"9810822f-63d1-4a31-bde3-6353a5ee9007\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tgnks"
Oct 13 18:02:10 crc kubenswrapper[4720]: I1013 18:02:10.528994 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/9810822f-63d1-4a31-bde3-6353a5ee9007-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tgnks\" (UID: \"9810822f-63d1-4a31-bde3-6353a5ee9007\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tgnks"
Oct 13 18:02:10 crc kubenswrapper[4720]: I1013 18:02:10.535385 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/9810822f-63d1-4a31-bde3-6353a5ee9007-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tgnks\" (UID: \"9810822f-63d1-4a31-bde3-6353a5ee9007\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tgnks"
Oct 13 18:02:10 crc kubenswrapper[4720]: I1013 18:02:10.541437 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7fjm\" (UniqueName: \"kubernetes.io/projected/9810822f-63d1-4a31-bde3-6353a5ee9007-kube-api-access-d7fjm\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tgnks\" (UID: \"9810822f-63d1-4a31-bde3-6353a5ee9007\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tgnks"
Oct 13 18:02:10 crc kubenswrapper[4720]: I1013 18:02:10.628311 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tgnks"
Oct 13 18:02:11 crc kubenswrapper[4720]: I1013 18:02:11.230462 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-tgnks"]
Oct 13 18:02:11 crc kubenswrapper[4720]: I1013 18:02:11.233468 4720 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Oct 13 18:02:12 crc kubenswrapper[4720]: I1013 18:02:12.196563 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tgnks" event={"ID":"9810822f-63d1-4a31-bde3-6353a5ee9007","Type":"ContainerStarted","Data":"d4365eb9d82d880775d77205e5f8fbf4bbd727e467d0c2494950402e0278e679"}
Oct 13 18:02:12 crc kubenswrapper[4720]: I1013 18:02:12.196891 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tgnks" event={"ID":"9810822f-63d1-4a31-bde3-6353a5ee9007","Type":"ContainerStarted","Data":"d000e25a4ce9c7654dcc6fa00adf15859db2b0b950d524651ae9f20285ad434b"}
Oct 13 18:02:12 crc kubenswrapper[4720]: I1013 18:02:12.222863 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tgnks" podStartSLOduration=1.605199057 podStartE2EDuration="2.222845219s" podCreationTimestamp="2025-10-13 18:02:10 +0000 UTC" firstStartedPulling="2025-10-13 18:02:11.233132596 +0000 UTC m=+2276.690382748" lastFinishedPulling="2025-10-13 18:02:11.850778748 +0000 UTC m=+2277.308028910" observedRunningTime="2025-10-13 18:02:12.218457046 +0000 UTC m=+2277.675707198" watchObservedRunningTime="2025-10-13 18:02:12.222845219 +0000 UTC m=+2277.680095361"
Oct 13 18:02:16 crc kubenswrapper[4720]: I1013 18:02:16.168072 4720 scope.go:117] "RemoveContainer" containerID="f16b061e305a205bc6cd67c30a1e6ff436af254fc7370be55f944123cc9cd230"
Oct 13 18:02:16 crc kubenswrapper[4720]: E1013 18:02:16.169516 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4"
Oct 13 18:02:29 crc kubenswrapper[4720]: I1013 18:02:29.168563 4720 scope.go:117] "RemoveContainer" containerID="f16b061e305a205bc6cd67c30a1e6ff436af254fc7370be55f944123cc9cd230"
Oct 13 18:02:29 crc kubenswrapper[4720]: E1013 18:02:29.171162 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4"
Oct 13 18:02:37 crc kubenswrapper[4720]: I1013 18:02:37.138901 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6qczp"]
Oct 13 18:02:37 crc kubenswrapper[4720]: I1013 18:02:37.144839 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6qczp"
Oct 13 18:02:37 crc kubenswrapper[4720]: I1013 18:02:37.154156 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6qczp"]
Oct 13 18:02:37 crc kubenswrapper[4720]: I1013 18:02:37.322342 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwf5m\" (UniqueName: \"kubernetes.io/projected/48f72697-fee4-4a05-ae1f-0c8eac474248-kube-api-access-rwf5m\") pod \"certified-operators-6qczp\" (UID: \"48f72697-fee4-4a05-ae1f-0c8eac474248\") " pod="openshift-marketplace/certified-operators-6qczp"
Oct 13 18:02:37 crc kubenswrapper[4720]: I1013 18:02:37.322503 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48f72697-fee4-4a05-ae1f-0c8eac474248-utilities\") pod \"certified-operators-6qczp\" (UID: \"48f72697-fee4-4a05-ae1f-0c8eac474248\") " pod="openshift-marketplace/certified-operators-6qczp"
Oct 13 18:02:37 crc kubenswrapper[4720]: I1013 18:02:37.322569 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48f72697-fee4-4a05-ae1f-0c8eac474248-catalog-content\") pod \"certified-operators-6qczp\" (UID: \"48f72697-fee4-4a05-ae1f-0c8eac474248\") " pod="openshift-marketplace/certified-operators-6qczp"
Oct 13 18:02:37 crc kubenswrapper[4720]: I1013 18:02:37.424267 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwf5m\" (UniqueName: \"kubernetes.io/projected/48f72697-fee4-4a05-ae1f-0c8eac474248-kube-api-access-rwf5m\") pod \"certified-operators-6qczp\" (UID: \"48f72697-fee4-4a05-ae1f-0c8eac474248\") " pod="openshift-marketplace/certified-operators-6qczp"
Oct 13 18:02:37 crc kubenswrapper[4720]: I1013 18:02:37.424355 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48f72697-fee4-4a05-ae1f-0c8eac474248-utilities\") pod \"certified-operators-6qczp\" (UID: \"48f72697-fee4-4a05-ae1f-0c8eac474248\") " pod="openshift-marketplace/certified-operators-6qczp"
Oct 13 18:02:37 crc kubenswrapper[4720]: I1013 18:02:37.424396 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48f72697-fee4-4a05-ae1f-0c8eac474248-catalog-content\") pod \"certified-operators-6qczp\" (UID: \"48f72697-fee4-4a05-ae1f-0c8eac474248\") " pod="openshift-marketplace/certified-operators-6qczp"
Oct 13
18:02:37 crc kubenswrapper[4720]: I1013 18:02:37.424927 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48f72697-fee4-4a05-ae1f-0c8eac474248-catalog-content\") pod \"certified-operators-6qczp\" (UID: \"48f72697-fee4-4a05-ae1f-0c8eac474248\") " pod="openshift-marketplace/certified-operators-6qczp" Oct 13 18:02:37 crc kubenswrapper[4720]: I1013 18:02:37.425263 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48f72697-fee4-4a05-ae1f-0c8eac474248-utilities\") pod \"certified-operators-6qczp\" (UID: \"48f72697-fee4-4a05-ae1f-0c8eac474248\") " pod="openshift-marketplace/certified-operators-6qczp" Oct 13 18:02:37 crc kubenswrapper[4720]: I1013 18:02:37.452581 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwf5m\" (UniqueName: \"kubernetes.io/projected/48f72697-fee4-4a05-ae1f-0c8eac474248-kube-api-access-rwf5m\") pod \"certified-operators-6qczp\" (UID: \"48f72697-fee4-4a05-ae1f-0c8eac474248\") " pod="openshift-marketplace/certified-operators-6qczp" Oct 13 18:02:37 crc kubenswrapper[4720]: I1013 18:02:37.471172 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6qczp" Oct 13 18:02:37 crc kubenswrapper[4720]: I1013 18:02:37.974686 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6qczp"] Oct 13 18:02:38 crc kubenswrapper[4720]: I1013 18:02:38.488785 4720 generic.go:334] "Generic (PLEG): container finished" podID="48f72697-fee4-4a05-ae1f-0c8eac474248" containerID="4210b400f39cff46fb8df3626279cc41c6d039542d38a682a19c8aaaf8fec53d" exitCode=0 Oct 13 18:02:38 crc kubenswrapper[4720]: I1013 18:02:38.488831 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6qczp" event={"ID":"48f72697-fee4-4a05-ae1f-0c8eac474248","Type":"ContainerDied","Data":"4210b400f39cff46fb8df3626279cc41c6d039542d38a682a19c8aaaf8fec53d"} Oct 13 18:02:38 crc kubenswrapper[4720]: I1013 18:02:38.488857 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6qczp" event={"ID":"48f72697-fee4-4a05-ae1f-0c8eac474248","Type":"ContainerStarted","Data":"4462123b1b9f1d5244adc39eee2138245fffef1fddd4a821aaf09fb65afbb52c"} Oct 13 18:02:40 crc kubenswrapper[4720]: I1013 18:02:40.168731 4720 scope.go:117] "RemoveContainer" containerID="f16b061e305a205bc6cd67c30a1e6ff436af254fc7370be55f944123cc9cd230" Oct 13 18:02:40 crc kubenswrapper[4720]: E1013 18:02:40.169910 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" Oct 13 18:02:40 crc kubenswrapper[4720]: I1013 18:02:40.512243 4720 generic.go:334] "Generic (PLEG): container finished" podID="48f72697-fee4-4a05-ae1f-0c8eac474248" containerID="450729b3bed24c204a23b35c1a37293d00f8aa6d5be63ef4ae0751964d4029f7" exitCode=0 Oct 13 18:02:40 crc kubenswrapper[4720]: I1013 18:02:40.512312 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6qczp" 
event={"ID":"48f72697-fee4-4a05-ae1f-0c8eac474248","Type":"ContainerDied","Data":"450729b3bed24c204a23b35c1a37293d00f8aa6d5be63ef4ae0751964d4029f7"} Oct 13 18:02:41 crc kubenswrapper[4720]: I1013 18:02:41.521969 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6qczp" event={"ID":"48f72697-fee4-4a05-ae1f-0c8eac474248","Type":"ContainerStarted","Data":"a9219151e16dc65d7636ad32bd323bd7cc685a6ebe599defaf9ef04e87eb20dc"} Oct 13 18:02:41 crc kubenswrapper[4720]: I1013 18:02:41.541084 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6qczp" podStartSLOduration=1.7487822259999999 podStartE2EDuration="4.541063146s" podCreationTimestamp="2025-10-13 18:02:37 +0000 UTC" firstStartedPulling="2025-10-13 18:02:38.490941419 +0000 UTC m=+2303.948191551" lastFinishedPulling="2025-10-13 18:02:41.283222339 +0000 UTC m=+2306.740472471" observedRunningTime="2025-10-13 18:02:41.535967715 +0000 UTC m=+2306.993217847" watchObservedRunningTime="2025-10-13 18:02:41.541063146 +0000 UTC m=+2306.998313278" Oct 13 18:02:41 crc kubenswrapper[4720]: I1013 18:02:41.992056 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-kh5q8"] Oct 13 18:02:41 crc kubenswrapper[4720]: I1013 18:02:41.994688 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kh5q8" Oct 13 18:02:42 crc kubenswrapper[4720]: I1013 18:02:42.007689 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kh5q8"] Oct 13 18:02:42 crc kubenswrapper[4720]: I1013 18:02:42.110316 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c4d1044-b0fe-43d7-b229-824775c8479f-utilities\") pod \"redhat-operators-kh5q8\" (UID: \"3c4d1044-b0fe-43d7-b229-824775c8479f\") " pod="openshift-marketplace/redhat-operators-kh5q8" Oct 13 18:02:42 crc kubenswrapper[4720]: I1013 18:02:42.111732 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdk5l\" (UniqueName: \"kubernetes.io/projected/3c4d1044-b0fe-43d7-b229-824775c8479f-kube-api-access-bdk5l\") pod \"redhat-operators-kh5q8\" (UID: \"3c4d1044-b0fe-43d7-b229-824775c8479f\") " pod="openshift-marketplace/redhat-operators-kh5q8" Oct 13 18:02:42 crc kubenswrapper[4720]: I1013 18:02:42.111849 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c4d1044-b0fe-43d7-b229-824775c8479f-catalog-content\") pod \"redhat-operators-kh5q8\" (UID: \"3c4d1044-b0fe-43d7-b229-824775c8479f\") " pod="openshift-marketplace/redhat-operators-kh5q8" Oct 13 18:02:42 crc kubenswrapper[4720]: I1013 18:02:42.213351 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c4d1044-b0fe-43d7-b229-824775c8479f-catalog-content\") pod \"redhat-operators-kh5q8\" (UID: \"3c4d1044-b0fe-43d7-b229-824775c8479f\") " pod="openshift-marketplace/redhat-operators-kh5q8" Oct 13 18:02:42 crc kubenswrapper[4720]: I1013 18:02:42.213952 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c4d1044-b0fe-43d7-b229-824775c8479f-catalog-content\") pod \"redhat-operators-kh5q8\" (UID: 
\"3c4d1044-b0fe-43d7-b229-824775c8479f\") " pod="openshift-marketplace/redhat-operators-kh5q8" Oct 13 18:02:42 crc kubenswrapper[4720]: I1013 18:02:42.214004 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c4d1044-b0fe-43d7-b229-824775c8479f-utilities\") pod \"redhat-operators-kh5q8\" (UID: \"3c4d1044-b0fe-43d7-b229-824775c8479f\") " pod="openshift-marketplace/redhat-operators-kh5q8" Oct 13 18:02:42 crc kubenswrapper[4720]: I1013 18:02:42.215418 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c4d1044-b0fe-43d7-b229-824775c8479f-utilities\") pod \"redhat-operators-kh5q8\" (UID: \"3c4d1044-b0fe-43d7-b229-824775c8479f\") " pod="openshift-marketplace/redhat-operators-kh5q8" Oct 13 18:02:42 crc kubenswrapper[4720]: I1013 18:02:42.215695 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdk5l\" (UniqueName: \"kubernetes.io/projected/3c4d1044-b0fe-43d7-b229-824775c8479f-kube-api-access-bdk5l\") pod \"redhat-operators-kh5q8\" (UID: \"3c4d1044-b0fe-43d7-b229-824775c8479f\") " pod="openshift-marketplace/redhat-operators-kh5q8" Oct 13 18:02:42 crc kubenswrapper[4720]: I1013 18:02:42.238037 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdk5l\" (UniqueName: \"kubernetes.io/projected/3c4d1044-b0fe-43d7-b229-824775c8479f-kube-api-access-bdk5l\") pod \"redhat-operators-kh5q8\" (UID: \"3c4d1044-b0fe-43d7-b229-824775c8479f\") " pod="openshift-marketplace/redhat-operators-kh5q8" Oct 13 18:02:42 crc kubenswrapper[4720]: I1013 18:02:42.342320 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kh5q8" Oct 13 18:02:42 crc kubenswrapper[4720]: I1013 18:02:42.834579 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kh5q8"] Oct 13 18:02:43 crc kubenswrapper[4720]: I1013 18:02:43.558552 4720 generic.go:334] "Generic (PLEG): container finished" podID="3c4d1044-b0fe-43d7-b229-824775c8479f" containerID="7e986fa9c3f13ca94dc70b8819f38e60aecf216724424cc18493384512884cf3" exitCode=0 Oct 13 18:02:43 crc kubenswrapper[4720]: I1013 18:02:43.558639 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kh5q8" event={"ID":"3c4d1044-b0fe-43d7-b229-824775c8479f","Type":"ContainerDied","Data":"7e986fa9c3f13ca94dc70b8819f38e60aecf216724424cc18493384512884cf3"} Oct 13 18:02:43 crc kubenswrapper[4720]: I1013 18:02:43.558899 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kh5q8" event={"ID":"3c4d1044-b0fe-43d7-b229-824775c8479f","Type":"ContainerStarted","Data":"61fbf98f11da844be97993ad41e7ce40e16e5ce0aa50cd479ed045ece5b6acbf"} Oct 13 18:02:45 crc kubenswrapper[4720]: I1013 18:02:45.586266 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kh5q8" event={"ID":"3c4d1044-b0fe-43d7-b229-824775c8479f","Type":"ContainerStarted","Data":"8640d273bd4e6362c3114139f622c6c84b2f346a83b4162978f4e3e51c9bdb75"} Oct 13 18:02:47 crc kubenswrapper[4720]: I1013 18:02:47.472105 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6qczp" Oct 13 18:02:47 crc kubenswrapper[4720]: I1013 18:02:47.473413 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/certified-operators-6qczp" Oct 13 18:02:47 crc kubenswrapper[4720]: I1013 18:02:47.526905 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6qczp" Oct 13 18:02:47 crc kubenswrapper[4720]: I1013 18:02:47.610518 4720 generic.go:334] "Generic (PLEG): container finished" podID="3c4d1044-b0fe-43d7-b229-824775c8479f" containerID="8640d273bd4e6362c3114139f622c6c84b2f346a83b4162978f4e3e51c9bdb75" exitCode=0 Oct 13 18:02:47 crc kubenswrapper[4720]: I1013 18:02:47.610628 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kh5q8" event={"ID":"3c4d1044-b0fe-43d7-b229-824775c8479f","Type":"ContainerDied","Data":"8640d273bd4e6362c3114139f622c6c84b2f346a83b4162978f4e3e51c9bdb75"} Oct 13 18:02:47 crc kubenswrapper[4720]: I1013 18:02:47.680915 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6qczp" Oct 13 18:02:48 crc kubenswrapper[4720]: I1013 18:02:48.625231 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kh5q8" event={"ID":"3c4d1044-b0fe-43d7-b229-824775c8479f","Type":"ContainerStarted","Data":"6bc494b41f126aafe968fa1c41943f94d0448ebb9c70ba5546413ab41751fd3a"} Oct 13 18:02:48 crc kubenswrapper[4720]: I1013 18:02:48.657300 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kh5q8" podStartSLOduration=2.939981716 podStartE2EDuration="7.65728424s" podCreationTimestamp="2025-10-13 18:02:41 +0000 UTC" firstStartedPulling="2025-10-13 18:02:43.561327785 +0000 UTC m=+2309.018577917" lastFinishedPulling="2025-10-13 18:02:48.278630309 +0000 UTC m=+2313.735880441" observedRunningTime="2025-10-13 18:02:48.64251059 +0000 UTC m=+2314.099760732" watchObservedRunningTime="2025-10-13 18:02:48.65728424 +0000 UTC m=+2314.114534372" Oct 13 18:02:49 crc kubenswrapper[4720]: I1013 18:02:49.967840 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6qczp"] Oct 13 18:02:49 crc kubenswrapper[4720]: I1013 18:02:49.968180 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6qczp" podUID="48f72697-fee4-4a05-ae1f-0c8eac474248" containerName="registry-server" containerID="cri-o://a9219151e16dc65d7636ad32bd323bd7cc685a6ebe599defaf9ef04e87eb20dc" gracePeriod=2 Oct 13 18:02:50 crc kubenswrapper[4720]: I1013 18:02:50.411683 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6qczp" Oct 13 18:02:50 crc kubenswrapper[4720]: I1013 18:02:50.502600 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48f72697-fee4-4a05-ae1f-0c8eac474248-catalog-content\") pod \"48f72697-fee4-4a05-ae1f-0c8eac474248\" (UID: \"48f72697-fee4-4a05-ae1f-0c8eac474248\") " Oct 13 18:02:50 crc kubenswrapper[4720]: I1013 18:02:50.502668 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwf5m\" (UniqueName: \"kubernetes.io/projected/48f72697-fee4-4a05-ae1f-0c8eac474248-kube-api-access-rwf5m\") pod \"48f72697-fee4-4a05-ae1f-0c8eac474248\" (UID: \"48f72697-fee4-4a05-ae1f-0c8eac474248\") " Oct 13 18:02:50 crc kubenswrapper[4720]: I1013 18:02:50.502795 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48f72697-fee4-4a05-ae1f-0c8eac474248-utilities\") pod \"48f72697-fee4-4a05-ae1f-0c8eac474248\" (UID: \"48f72697-fee4-4a05-ae1f-0c8eac474248\") " Oct 13 18:02:50 crc kubenswrapper[4720]: I1013 18:02:50.504052 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48f72697-fee4-4a05-ae1f-0c8eac474248-utilities" (OuterVolumeSpecName: "utilities") pod "48f72697-fee4-4a05-ae1f-0c8eac474248" (UID: "48f72697-fee4-4a05-ae1f-0c8eac474248"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 18:02:50 crc kubenswrapper[4720]: I1013 18:02:50.513164 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48f72697-fee4-4a05-ae1f-0c8eac474248-kube-api-access-rwf5m" (OuterVolumeSpecName: "kube-api-access-rwf5m") pod "48f72697-fee4-4a05-ae1f-0c8eac474248" (UID: "48f72697-fee4-4a05-ae1f-0c8eac474248"). InnerVolumeSpecName "kube-api-access-rwf5m". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 18:02:50 crc kubenswrapper[4720]: I1013 18:02:50.550769 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48f72697-fee4-4a05-ae1f-0c8eac474248-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "48f72697-fee4-4a05-ae1f-0c8eac474248" (UID: "48f72697-fee4-4a05-ae1f-0c8eac474248"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 18:02:50 crc kubenswrapper[4720]: I1013 18:02:50.604931 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48f72697-fee4-4a05-ae1f-0c8eac474248-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 13 18:02:50 crc kubenswrapper[4720]: I1013 18:02:50.604964 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwf5m\" (UniqueName: \"kubernetes.io/projected/48f72697-fee4-4a05-ae1f-0c8eac474248-kube-api-access-rwf5m\") on node \"crc\" DevicePath \"\"" Oct 13 18:02:50 crc kubenswrapper[4720]: I1013 18:02:50.604974 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48f72697-fee4-4a05-ae1f-0c8eac474248-utilities\") on node \"crc\" DevicePath \"\"" Oct 13 18:02:50 crc kubenswrapper[4720]: I1013 18:02:50.646002 4720 generic.go:334] "Generic (PLEG): container finished" podID="48f72697-fee4-4a05-ae1f-0c8eac474248" containerID="a9219151e16dc65d7636ad32bd323bd7cc685a6ebe599defaf9ef04e87eb20dc" exitCode=0 Oct 13 18:02:50 crc kubenswrapper[4720]: I1013 18:02:50.646050 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6qczp" event={"ID":"48f72697-fee4-4a05-ae1f-0c8eac474248","Type":"ContainerDied","Data":"a9219151e16dc65d7636ad32bd323bd7cc685a6ebe599defaf9ef04e87eb20dc"} Oct 13 18:02:50 crc kubenswrapper[4720]: I1013 18:02:50.646077 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6qczp" event={"ID":"48f72697-fee4-4a05-ae1f-0c8eac474248","Type":"ContainerDied","Data":"4462123b1b9f1d5244adc39eee2138245fffef1fddd4a821aaf09fb65afbb52c"} Oct 13 18:02:50 crc kubenswrapper[4720]: I1013 18:02:50.646092 4720 scope.go:117] "RemoveContainer" containerID="a9219151e16dc65d7636ad32bd323bd7cc685a6ebe599defaf9ef04e87eb20dc" Oct 13 18:02:50 crc kubenswrapper[4720]: I1013 18:02:50.646221 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6qczp" Oct 13 18:02:50 crc kubenswrapper[4720]: I1013 18:02:50.682283 4720 scope.go:117] "RemoveContainer" containerID="450729b3bed24c204a23b35c1a37293d00f8aa6d5be63ef4ae0751964d4029f7" Oct 13 18:02:50 crc kubenswrapper[4720]: I1013 18:02:50.692369 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6qczp"] Oct 13 18:02:50 crc kubenswrapper[4720]: I1013 18:02:50.700104 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6qczp"] Oct 13 18:02:50 crc kubenswrapper[4720]: I1013 18:02:50.713484 4720 scope.go:117] "RemoveContainer" containerID="4210b400f39cff46fb8df3626279cc41c6d039542d38a682a19c8aaaf8fec53d" Oct 13 18:02:50 crc kubenswrapper[4720]: I1013 18:02:50.763094 4720 scope.go:117] "RemoveContainer" containerID="a9219151e16dc65d7636ad32bd323bd7cc685a6ebe599defaf9ef04e87eb20dc" Oct 13 18:02:50 crc kubenswrapper[4720]: E1013 18:02:50.763621 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9219151e16dc65d7636ad32bd323bd7cc685a6ebe599defaf9ef04e87eb20dc\": container with ID starting with a9219151e16dc65d7636ad32bd323bd7cc685a6ebe599defaf9ef04e87eb20dc not found: ID does not exist" containerID="a9219151e16dc65d7636ad32bd323bd7cc685a6ebe599defaf9ef04e87eb20dc" Oct 13 18:02:50 crc kubenswrapper[4720]: I1013 18:02:50.763669 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9219151e16dc65d7636ad32bd323bd7cc685a6ebe599defaf9ef04e87eb20dc"} err="failed to get container status \"a9219151e16dc65d7636ad32bd323bd7cc685a6ebe599defaf9ef04e87eb20dc\": rpc error: code = NotFound desc = could not find container \"a9219151e16dc65d7636ad32bd323bd7cc685a6ebe599defaf9ef04e87eb20dc\": container with ID starting with a9219151e16dc65d7636ad32bd323bd7cc685a6ebe599defaf9ef04e87eb20dc not found: ID does not exist" Oct 13 18:02:50 crc kubenswrapper[4720]: I1013 18:02:50.763690 4720 scope.go:117] "RemoveContainer" containerID="450729b3bed24c204a23b35c1a37293d00f8aa6d5be63ef4ae0751964d4029f7" Oct 13 18:02:50 crc kubenswrapper[4720]: E1013 18:02:50.764149 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"450729b3bed24c204a23b35c1a37293d00f8aa6d5be63ef4ae0751964d4029f7\": container with ID starting with 450729b3bed24c204a23b35c1a37293d00f8aa6d5be63ef4ae0751964d4029f7 not found: ID does not exist" containerID="450729b3bed24c204a23b35c1a37293d00f8aa6d5be63ef4ae0751964d4029f7" Oct 13 18:02:50 crc kubenswrapper[4720]: I1013 18:02:50.764225 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"450729b3bed24c204a23b35c1a37293d00f8aa6d5be63ef4ae0751964d4029f7"} err="failed to get container status \"450729b3bed24c204a23b35c1a37293d00f8aa6d5be63ef4ae0751964d4029f7\": rpc error: code = NotFound desc = could not find container \"450729b3bed24c204a23b35c1a37293d00f8aa6d5be63ef4ae0751964d4029f7\": container with ID starting with 450729b3bed24c204a23b35c1a37293d00f8aa6d5be63ef4ae0751964d4029f7 not found: ID does not exist" Oct 13 18:02:50 crc kubenswrapper[4720]: I1013 18:02:50.764261 4720 scope.go:117] "RemoveContainer" containerID="4210b400f39cff46fb8df3626279cc41c6d039542d38a682a19c8aaaf8fec53d" Oct 13 18:02:50 crc kubenswrapper[4720]: E1013 18:02:50.764780 4720 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"4210b400f39cff46fb8df3626279cc41c6d039542d38a682a19c8aaaf8fec53d\": container with ID starting with 4210b400f39cff46fb8df3626279cc41c6d039542d38a682a19c8aaaf8fec53d not found: ID does not exist" containerID="4210b400f39cff46fb8df3626279cc41c6d039542d38a682a19c8aaaf8fec53d" Oct 13 18:02:50 crc kubenswrapper[4720]: I1013 18:02:50.764814 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4210b400f39cff46fb8df3626279cc41c6d039542d38a682a19c8aaaf8fec53d"} err="failed to get container status \"4210b400f39cff46fb8df3626279cc41c6d039542d38a682a19c8aaaf8fec53d\": rpc error: code = NotFound desc = could not find container \"4210b400f39cff46fb8df3626279cc41c6d039542d38a682a19c8aaaf8fec53d\": container with ID starting with 4210b400f39cff46fb8df3626279cc41c6d039542d38a682a19c8aaaf8fec53d not found: ID does not exist" Oct 13 18:02:51 crc kubenswrapper[4720]: I1013 18:02:51.179981 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48f72697-fee4-4a05-ae1f-0c8eac474248" path="/var/lib/kubelet/pods/48f72697-fee4-4a05-ae1f-0c8eac474248/volumes" Oct 13 18:02:52 crc kubenswrapper[4720]: I1013 18:02:52.342760 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kh5q8" Oct 13 18:02:52 crc kubenswrapper[4720]: I1013 18:02:52.343093 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kh5q8" Oct 13 18:02:53 crc kubenswrapper[4720]: I1013 18:02:53.403732 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kh5q8" podUID="3c4d1044-b0fe-43d7-b229-824775c8479f" containerName="registry-server" probeResult="failure" output=< Oct 13 18:02:53 crc kubenswrapper[4720]: timeout: failed to connect service ":50051" within 1s Oct 13 18:02:53 crc kubenswrapper[4720]: > Oct 13 18:02:55 crc kubenswrapper[4720]: I1013 18:02:55.178377 4720 scope.go:117] "RemoveContainer" containerID="f16b061e305a205bc6cd67c30a1e6ff436af254fc7370be55f944123cc9cd230" Oct 13 18:02:55 crc kubenswrapper[4720]: E1013 18:02:55.179019 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" Oct 13 18:03:02 crc kubenswrapper[4720]: I1013 18:03:02.403157 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kh5q8" Oct 13 18:03:02 crc kubenswrapper[4720]: I1013 18:03:02.481183 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-kh5q8" Oct 13 18:03:02 crc kubenswrapper[4720]: I1013 18:03:02.655064 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kh5q8"] Oct 13 18:03:03 crc kubenswrapper[4720]: I1013 18:03:03.783013 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-kh5q8" podUID="3c4d1044-b0fe-43d7-b229-824775c8479f" containerName="registry-server" containerID="cri-o://6bc494b41f126aafe968fa1c41943f94d0448ebb9c70ba5546413ab41751fd3a" 
gracePeriod=2 Oct 13 18:03:04 crc kubenswrapper[4720]: I1013 18:03:04.368948 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kh5q8" Oct 13 18:03:04 crc kubenswrapper[4720]: I1013 18:03:04.504610 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c4d1044-b0fe-43d7-b229-824775c8479f-utilities\") pod \"3c4d1044-b0fe-43d7-b229-824775c8479f\" (UID: \"3c4d1044-b0fe-43d7-b229-824775c8479f\") " Oct 13 18:03:04 crc kubenswrapper[4720]: I1013 18:03:04.504863 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bdk5l\" (UniqueName: \"kubernetes.io/projected/3c4d1044-b0fe-43d7-b229-824775c8479f-kube-api-access-bdk5l\") pod \"3c4d1044-b0fe-43d7-b229-824775c8479f\" (UID: \"3c4d1044-b0fe-43d7-b229-824775c8479f\") " Oct 13 18:03:04 crc kubenswrapper[4720]: I1013 18:03:04.505279 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c4d1044-b0fe-43d7-b229-824775c8479f-catalog-content\") pod \"3c4d1044-b0fe-43d7-b229-824775c8479f\" (UID: \"3c4d1044-b0fe-43d7-b229-824775c8479f\") " Oct 13 18:03:04 crc kubenswrapper[4720]: I1013 18:03:04.505705 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c4d1044-b0fe-43d7-b229-824775c8479f-utilities" (OuterVolumeSpecName: "utilities") pod "3c4d1044-b0fe-43d7-b229-824775c8479f" (UID: "3c4d1044-b0fe-43d7-b229-824775c8479f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 18:03:04 crc kubenswrapper[4720]: I1013 18:03:04.513458 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c4d1044-b0fe-43d7-b229-824775c8479f-kube-api-access-bdk5l" (OuterVolumeSpecName: "kube-api-access-bdk5l") pod "3c4d1044-b0fe-43d7-b229-824775c8479f" (UID: "3c4d1044-b0fe-43d7-b229-824775c8479f"). InnerVolumeSpecName "kube-api-access-bdk5l". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 18:03:04 crc kubenswrapper[4720]: I1013 18:03:04.590044 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c4d1044-b0fe-43d7-b229-824775c8479f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3c4d1044-b0fe-43d7-b229-824775c8479f" (UID: "3c4d1044-b0fe-43d7-b229-824775c8479f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 18:03:04 crc kubenswrapper[4720]: I1013 18:03:04.606807 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bdk5l\" (UniqueName: \"kubernetes.io/projected/3c4d1044-b0fe-43d7-b229-824775c8479f-kube-api-access-bdk5l\") on node \"crc\" DevicePath \"\"" Oct 13 18:03:04 crc kubenswrapper[4720]: I1013 18:03:04.606837 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c4d1044-b0fe-43d7-b229-824775c8479f-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 13 18:03:04 crc kubenswrapper[4720]: I1013 18:03:04.606847 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c4d1044-b0fe-43d7-b229-824775c8479f-utilities\") on node \"crc\" DevicePath \"\"" Oct 13 18:03:04 crc kubenswrapper[4720]: I1013 18:03:04.794692 4720 generic.go:334] "Generic (PLEG): container finished" podID="3c4d1044-b0fe-43d7-b229-824775c8479f" containerID="6bc494b41f126aafe968fa1c41943f94d0448ebb9c70ba5546413ab41751fd3a" exitCode=0 Oct 13 18:03:04 crc kubenswrapper[4720]: I1013 18:03:04.794785 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kh5q8" Oct 13 18:03:04 crc kubenswrapper[4720]: I1013 18:03:04.794806 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kh5q8" event={"ID":"3c4d1044-b0fe-43d7-b229-824775c8479f","Type":"ContainerDied","Data":"6bc494b41f126aafe968fa1c41943f94d0448ebb9c70ba5546413ab41751fd3a"} Oct 13 18:03:04 crc kubenswrapper[4720]: I1013 18:03:04.795241 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kh5q8" event={"ID":"3c4d1044-b0fe-43d7-b229-824775c8479f","Type":"ContainerDied","Data":"61fbf98f11da844be97993ad41e7ce40e16e5ce0aa50cd479ed045ece5b6acbf"} Oct 13 18:03:04 crc kubenswrapper[4720]: I1013 18:03:04.795268 4720 scope.go:117] "RemoveContainer" containerID="6bc494b41f126aafe968fa1c41943f94d0448ebb9c70ba5546413ab41751fd3a" Oct 13 18:03:04 crc kubenswrapper[4720]: I1013 18:03:04.820851 4720 scope.go:117] "RemoveContainer" containerID="8640d273bd4e6362c3114139f622c6c84b2f346a83b4162978f4e3e51c9bdb75" Oct 13 18:03:04 crc kubenswrapper[4720]: I1013 18:03:04.841616 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kh5q8"] Oct 13 18:03:04 crc kubenswrapper[4720]: I1013 18:03:04.853468 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-kh5q8"] Oct 13 18:03:04 crc kubenswrapper[4720]: I1013 18:03:04.867026 4720 scope.go:117] "RemoveContainer" containerID="7e986fa9c3f13ca94dc70b8819f38e60aecf216724424cc18493384512884cf3" Oct 13 18:03:04 crc kubenswrapper[4720]: I1013 18:03:04.935972 4720 scope.go:117] "RemoveContainer" containerID="6bc494b41f126aafe968fa1c41943f94d0448ebb9c70ba5546413ab41751fd3a" Oct 13 18:03:04 crc kubenswrapper[4720]: E1013 18:03:04.936488 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6bc494b41f126aafe968fa1c41943f94d0448ebb9c70ba5546413ab41751fd3a\": container with ID starting with 6bc494b41f126aafe968fa1c41943f94d0448ebb9c70ba5546413ab41751fd3a not found: ID does not exist" containerID="6bc494b41f126aafe968fa1c41943f94d0448ebb9c70ba5546413ab41751fd3a" Oct 13 18:03:04 crc kubenswrapper[4720]: I1013 18:03:04.936528 4720 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6bc494b41f126aafe968fa1c41943f94d0448ebb9c70ba5546413ab41751fd3a"} err="failed to get container status \"6bc494b41f126aafe968fa1c41943f94d0448ebb9c70ba5546413ab41751fd3a\": rpc error: code = NotFound desc = could not find container \"6bc494b41f126aafe968fa1c41943f94d0448ebb9c70ba5546413ab41751fd3a\": container with ID starting with 6bc494b41f126aafe968fa1c41943f94d0448ebb9c70ba5546413ab41751fd3a not found: ID does not exist" Oct 13 18:03:04 crc kubenswrapper[4720]: I1013 18:03:04.936556 4720 scope.go:117] "RemoveContainer" containerID="8640d273bd4e6362c3114139f622c6c84b2f346a83b4162978f4e3e51c9bdb75" Oct 13 18:03:04 crc kubenswrapper[4720]: E1013 18:03:04.937262 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8640d273bd4e6362c3114139f622c6c84b2f346a83b4162978f4e3e51c9bdb75\": container with ID starting with 8640d273bd4e6362c3114139f622c6c84b2f346a83b4162978f4e3e51c9bdb75 not found: ID does not exist" containerID="8640d273bd4e6362c3114139f622c6c84b2f346a83b4162978f4e3e51c9bdb75" Oct 13 18:03:04 crc kubenswrapper[4720]: I1013 18:03:04.937300 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8640d273bd4e6362c3114139f622c6c84b2f346a83b4162978f4e3e51c9bdb75"} err="failed to get container status \"8640d273bd4e6362c3114139f622c6c84b2f346a83b4162978f4e3e51c9bdb75\": rpc error: code = NotFound desc = could not find container \"8640d273bd4e6362c3114139f622c6c84b2f346a83b4162978f4e3e51c9bdb75\": container with ID starting with 8640d273bd4e6362c3114139f622c6c84b2f346a83b4162978f4e3e51c9bdb75 not found: ID does not exist" Oct 13 18:03:04 crc kubenswrapper[4720]: I1013 18:03:04.937323 4720 scope.go:117] "RemoveContainer" containerID="7e986fa9c3f13ca94dc70b8819f38e60aecf216724424cc18493384512884cf3" Oct 13 18:03:04 crc kubenswrapper[4720]: E1013 18:03:04.937587 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e986fa9c3f13ca94dc70b8819f38e60aecf216724424cc18493384512884cf3\": container with ID starting with 7e986fa9c3f13ca94dc70b8819f38e60aecf216724424cc18493384512884cf3 not found: ID does not exist" containerID="7e986fa9c3f13ca94dc70b8819f38e60aecf216724424cc18493384512884cf3" Oct 13 18:03:04 crc kubenswrapper[4720]: I1013 18:03:04.937619 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e986fa9c3f13ca94dc70b8819f38e60aecf216724424cc18493384512884cf3"} err="failed to get container status \"7e986fa9c3f13ca94dc70b8819f38e60aecf216724424cc18493384512884cf3\": rpc error: code = NotFound desc = could not find container \"7e986fa9c3f13ca94dc70b8819f38e60aecf216724424cc18493384512884cf3\": container with ID starting with 7e986fa9c3f13ca94dc70b8819f38e60aecf216724424cc18493384512884cf3 not found: ID does not exist" Oct 13 18:03:05 crc kubenswrapper[4720]: I1013 18:03:05.189319 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c4d1044-b0fe-43d7-b229-824775c8479f" path="/var/lib/kubelet/pods/3c4d1044-b0fe-43d7-b229-824775c8479f/volumes" Oct 13 18:03:10 crc kubenswrapper[4720]: I1013 18:03:10.169319 4720 scope.go:117] "RemoveContainer" containerID="f16b061e305a205bc6cd67c30a1e6ff436af254fc7370be55f944123cc9cd230" Oct 13 18:03:10 crc kubenswrapper[4720]: E1013 18:03:10.170503 4720 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" Oct 13 18:03:21 crc kubenswrapper[4720]: I1013 18:03:21.168213 4720 scope.go:117] "RemoveContainer" containerID="f16b061e305a205bc6cd67c30a1e6ff436af254fc7370be55f944123cc9cd230" Oct 13 18:03:21 crc kubenswrapper[4720]: E1013 18:03:21.169064 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" Oct 13 18:03:32 crc kubenswrapper[4720]: I1013 18:03:32.168134 4720 scope.go:117] "RemoveContainer" containerID="f16b061e305a205bc6cd67c30a1e6ff436af254fc7370be55f944123cc9cd230" Oct 13 18:03:32 crc kubenswrapper[4720]: E1013 18:03:32.169013 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" Oct 13 18:03:44 crc kubenswrapper[4720]: I1013 18:03:44.168699 4720 scope.go:117] "RemoveContainer" containerID="f16b061e305a205bc6cd67c30a1e6ff436af254fc7370be55f944123cc9cd230" Oct 13 18:03:44 crc kubenswrapper[4720]: E1013 18:03:44.169514 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" Oct 13 18:03:58 crc kubenswrapper[4720]: I1013 18:03:58.168666 4720 scope.go:117] "RemoveContainer" containerID="f16b061e305a205bc6cd67c30a1e6ff436af254fc7370be55f944123cc9cd230" Oct 13 18:03:58 crc kubenswrapper[4720]: E1013 18:03:58.170506 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" Oct 13 18:04:09 crc kubenswrapper[4720]: I1013 18:04:09.168119 4720 scope.go:117] "RemoveContainer" containerID="f16b061e305a205bc6cd67c30a1e6ff436af254fc7370be55f944123cc9cd230" Oct 13 18:04:09 crc kubenswrapper[4720]: E1013 18:04:09.168959 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" Oct 13 18:04:24 crc kubenswrapper[4720]: I1013 18:04:24.168320 4720 scope.go:117] "RemoveContainer" containerID="f16b061e305a205bc6cd67c30a1e6ff436af254fc7370be55f944123cc9cd230" Oct 13 18:04:24 crc kubenswrapper[4720]: E1013 18:04:24.169133 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" Oct 13 18:04:38 crc kubenswrapper[4720]: I1013 18:04:38.168743 4720 scope.go:117] "RemoveContainer" containerID="f16b061e305a205bc6cd67c30a1e6ff436af254fc7370be55f944123cc9cd230" Oct 13 18:04:38 crc kubenswrapper[4720]: E1013 18:04:38.170224 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" Oct 13 18:04:49 crc kubenswrapper[4720]: I1013 18:04:49.169787 4720 scope.go:117] "RemoveContainer" containerID="f16b061e305a205bc6cd67c30a1e6ff436af254fc7370be55f944123cc9cd230" Oct 13 18:04:49 crc kubenswrapper[4720]: E1013 18:04:49.171078 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" Oct 13 18:05:00 crc kubenswrapper[4720]: I1013 18:05:00.167974 4720 scope.go:117] "RemoveContainer" containerID="f16b061e305a205bc6cd67c30a1e6ff436af254fc7370be55f944123cc9cd230" Oct 13 18:05:00 crc kubenswrapper[4720]: E1013 18:05:00.168643 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" Oct 13 18:05:14 crc kubenswrapper[4720]: I1013 18:05:14.168889 4720 scope.go:117] "RemoveContainer" containerID="f16b061e305a205bc6cd67c30a1e6ff436af254fc7370be55f944123cc9cd230" Oct 13 18:05:14 crc kubenswrapper[4720]: E1013 18:05:14.169790 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" 
podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" Oct 13 18:05:27 crc kubenswrapper[4720]: I1013 18:05:27.168525 4720 scope.go:117] "RemoveContainer" containerID="f16b061e305a205bc6cd67c30a1e6ff436af254fc7370be55f944123cc9cd230" Oct 13 18:05:28 crc kubenswrapper[4720]: I1013 18:05:28.413555 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" event={"ID":"ce442c80-fcde-4b79-b6f9-f8f25771dfd4","Type":"ContainerStarted","Data":"d60282f0b598e1436e2840341bf3d33836603e66bded554693a91d04186c59fa"} Oct 13 18:06:04 crc kubenswrapper[4720]: I1013 18:06:04.810492 4720 generic.go:334] "Generic (PLEG): container finished" podID="9810822f-63d1-4a31-bde3-6353a5ee9007" containerID="d4365eb9d82d880775d77205e5f8fbf4bbd727e467d0c2494950402e0278e679" exitCode=0 Oct 13 18:06:04 crc kubenswrapper[4720]: I1013 18:06:04.810560 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tgnks" event={"ID":"9810822f-63d1-4a31-bde3-6353a5ee9007","Type":"ContainerDied","Data":"d4365eb9d82d880775d77205e5f8fbf4bbd727e467d0c2494950402e0278e679"} Oct 13 18:06:06 crc kubenswrapper[4720]: I1013 18:06:06.340322 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tgnks" Oct 13 18:06:06 crc kubenswrapper[4720]: I1013 18:06:06.438554 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7fjm\" (UniqueName: \"kubernetes.io/projected/9810822f-63d1-4a31-bde3-6353a5ee9007-kube-api-access-d7fjm\") pod \"9810822f-63d1-4a31-bde3-6353a5ee9007\" (UID: \"9810822f-63d1-4a31-bde3-6353a5ee9007\") " Oct 13 18:06:06 crc kubenswrapper[4720]: I1013 18:06:06.438677 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/9810822f-63d1-4a31-bde3-6353a5ee9007-nova-migration-ssh-key-0\") pod \"9810822f-63d1-4a31-bde3-6353a5ee9007\" (UID: \"9810822f-63d1-4a31-bde3-6353a5ee9007\") " Oct 13 18:06:06 crc kubenswrapper[4720]: I1013 18:06:06.438717 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9810822f-63d1-4a31-bde3-6353a5ee9007-nova-combined-ca-bundle\") pod \"9810822f-63d1-4a31-bde3-6353a5ee9007\" (UID: \"9810822f-63d1-4a31-bde3-6353a5ee9007\") " Oct 13 18:06:06 crc kubenswrapper[4720]: I1013 18:06:06.438748 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9810822f-63d1-4a31-bde3-6353a5ee9007-inventory\") pod \"9810822f-63d1-4a31-bde3-6353a5ee9007\" (UID: \"9810822f-63d1-4a31-bde3-6353a5ee9007\") " Oct 13 18:06:06 crc kubenswrapper[4720]: I1013 18:06:06.438787 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/9810822f-63d1-4a31-bde3-6353a5ee9007-nova-cell1-compute-config-0\") pod \"9810822f-63d1-4a31-bde3-6353a5ee9007\" (UID: \"9810822f-63d1-4a31-bde3-6353a5ee9007\") " Oct 13 18:06:06 crc kubenswrapper[4720]: I1013 18:06:06.438824 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/9810822f-63d1-4a31-bde3-6353a5ee9007-nova-extra-config-0\") pod \"9810822f-63d1-4a31-bde3-6353a5ee9007\" (UID: \"9810822f-63d1-4a31-bde3-6353a5ee9007\") " Oct 13 
18:06:06 crc kubenswrapper[4720]: I1013 18:06:06.438842 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/9810822f-63d1-4a31-bde3-6353a5ee9007-nova-migration-ssh-key-1\") pod \"9810822f-63d1-4a31-bde3-6353a5ee9007\" (UID: \"9810822f-63d1-4a31-bde3-6353a5ee9007\") " Oct 13 18:06:06 crc kubenswrapper[4720]: I1013 18:06:06.438895 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9810822f-63d1-4a31-bde3-6353a5ee9007-ssh-key\") pod \"9810822f-63d1-4a31-bde3-6353a5ee9007\" (UID: \"9810822f-63d1-4a31-bde3-6353a5ee9007\") " Oct 13 18:06:06 crc kubenswrapper[4720]: I1013 18:06:06.438974 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/9810822f-63d1-4a31-bde3-6353a5ee9007-nova-cell1-compute-config-1\") pod \"9810822f-63d1-4a31-bde3-6353a5ee9007\" (UID: \"9810822f-63d1-4a31-bde3-6353a5ee9007\") " Oct 13 18:06:06 crc kubenswrapper[4720]: I1013 18:06:06.446912 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9810822f-63d1-4a31-bde3-6353a5ee9007-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "9810822f-63d1-4a31-bde3-6353a5ee9007" (UID: "9810822f-63d1-4a31-bde3-6353a5ee9007"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 18:06:06 crc kubenswrapper[4720]: I1013 18:06:06.447858 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9810822f-63d1-4a31-bde3-6353a5ee9007-kube-api-access-d7fjm" (OuterVolumeSpecName: "kube-api-access-d7fjm") pod "9810822f-63d1-4a31-bde3-6353a5ee9007" (UID: "9810822f-63d1-4a31-bde3-6353a5ee9007"). InnerVolumeSpecName "kube-api-access-d7fjm". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 18:06:06 crc kubenswrapper[4720]: I1013 18:06:06.473607 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9810822f-63d1-4a31-bde3-6353a5ee9007-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "9810822f-63d1-4a31-bde3-6353a5ee9007" (UID: "9810822f-63d1-4a31-bde3-6353a5ee9007"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 18:06:06 crc kubenswrapper[4720]: I1013 18:06:06.475673 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9810822f-63d1-4a31-bde3-6353a5ee9007-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "9810822f-63d1-4a31-bde3-6353a5ee9007" (UID: "9810822f-63d1-4a31-bde3-6353a5ee9007"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 18:06:06 crc kubenswrapper[4720]: I1013 18:06:06.488276 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9810822f-63d1-4a31-bde3-6353a5ee9007-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "9810822f-63d1-4a31-bde3-6353a5ee9007" (UID: "9810822f-63d1-4a31-bde3-6353a5ee9007"). InnerVolumeSpecName "nova-migration-ssh-key-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 18:06:06 crc kubenswrapper[4720]: I1013 18:06:06.488837 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9810822f-63d1-4a31-bde3-6353a5ee9007-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "9810822f-63d1-4a31-bde3-6353a5ee9007" (UID: "9810822f-63d1-4a31-bde3-6353a5ee9007"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 18:06:06 crc kubenswrapper[4720]: I1013 18:06:06.492030 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9810822f-63d1-4a31-bde3-6353a5ee9007-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "9810822f-63d1-4a31-bde3-6353a5ee9007" (UID: "9810822f-63d1-4a31-bde3-6353a5ee9007"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 18:06:06 crc kubenswrapper[4720]: I1013 18:06:06.500670 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9810822f-63d1-4a31-bde3-6353a5ee9007-inventory" (OuterVolumeSpecName: "inventory") pod "9810822f-63d1-4a31-bde3-6353a5ee9007" (UID: "9810822f-63d1-4a31-bde3-6353a5ee9007"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 18:06:06 crc kubenswrapper[4720]: I1013 18:06:06.517945 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9810822f-63d1-4a31-bde3-6353a5ee9007-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "9810822f-63d1-4a31-bde3-6353a5ee9007" (UID: "9810822f-63d1-4a31-bde3-6353a5ee9007"). InnerVolumeSpecName "nova-extra-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 18:06:06 crc kubenswrapper[4720]: I1013 18:06:06.540695 4720 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9810822f-63d1-4a31-bde3-6353a5ee9007-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 13 18:06:06 crc kubenswrapper[4720]: I1013 18:06:06.540720 4720 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/9810822f-63d1-4a31-bde3-6353a5ee9007-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Oct 13 18:06:06 crc kubenswrapper[4720]: I1013 18:06:06.540731 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d7fjm\" (UniqueName: \"kubernetes.io/projected/9810822f-63d1-4a31-bde3-6353a5ee9007-kube-api-access-d7fjm\") on node \"crc\" DevicePath \"\"" Oct 13 18:06:06 crc kubenswrapper[4720]: I1013 18:06:06.540741 4720 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/9810822f-63d1-4a31-bde3-6353a5ee9007-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Oct 13 18:06:06 crc kubenswrapper[4720]: I1013 18:06:06.540750 4720 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9810822f-63d1-4a31-bde3-6353a5ee9007-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 18:06:06 crc kubenswrapper[4720]: I1013 18:06:06.540757 4720 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9810822f-63d1-4a31-bde3-6353a5ee9007-inventory\") on node \"crc\" DevicePath \"\"" Oct 13 18:06:06 crc kubenswrapper[4720]: I1013 18:06:06.540765 4720 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/9810822f-63d1-4a31-bde3-6353a5ee9007-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Oct 13 18:06:06 crc kubenswrapper[4720]: I1013 18:06:06.540773 4720 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/9810822f-63d1-4a31-bde3-6353a5ee9007-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Oct 13 18:06:06 crc kubenswrapper[4720]: I1013 18:06:06.540781 4720 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/9810822f-63d1-4a31-bde3-6353a5ee9007-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Oct 13 18:06:06 crc kubenswrapper[4720]: I1013 18:06:06.832146 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tgnks" event={"ID":"9810822f-63d1-4a31-bde3-6353a5ee9007","Type":"ContainerDied","Data":"d000e25a4ce9c7654dcc6fa00adf15859db2b0b950d524651ae9f20285ad434b"} Oct 13 18:06:06 crc kubenswrapper[4720]: I1013 18:06:06.832183 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d000e25a4ce9c7654dcc6fa00adf15859db2b0b950d524651ae9f20285ad434b" Oct 13 18:06:06 crc kubenswrapper[4720]: I1013 18:06:06.832210 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tgnks" Oct 13 18:06:06 crc kubenswrapper[4720]: I1013 18:06:06.940143 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bsltd"] Oct 13 18:06:06 crc kubenswrapper[4720]: E1013 18:06:06.940712 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c4d1044-b0fe-43d7-b229-824775c8479f" containerName="extract-content" Oct 13 18:06:06 crc kubenswrapper[4720]: I1013 18:06:06.940725 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c4d1044-b0fe-43d7-b229-824775c8479f" containerName="extract-content" Oct 13 18:06:06 crc kubenswrapper[4720]: E1013 18:06:06.940743 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c4d1044-b0fe-43d7-b229-824775c8479f" containerName="registry-server" Oct 13 18:06:06 crc kubenswrapper[4720]: I1013 18:06:06.940749 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c4d1044-b0fe-43d7-b229-824775c8479f" containerName="registry-server" Oct 13 18:06:06 crc kubenswrapper[4720]: E1013 18:06:06.940770 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9810822f-63d1-4a31-bde3-6353a5ee9007" containerName="nova-edpm-deployment-openstack-edpm-ipam" Oct 13 18:06:06 crc kubenswrapper[4720]: I1013 18:06:06.940778 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="9810822f-63d1-4a31-bde3-6353a5ee9007" containerName="nova-edpm-deployment-openstack-edpm-ipam" Oct 13 18:06:06 crc kubenswrapper[4720]: E1013 18:06:06.940796 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48f72697-fee4-4a05-ae1f-0c8eac474248" containerName="extract-utilities" Oct 13 18:06:06 crc kubenswrapper[4720]: I1013 18:06:06.940802 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="48f72697-fee4-4a05-ae1f-0c8eac474248" containerName="extract-utilities" Oct 13 18:06:06 crc kubenswrapper[4720]: E1013 18:06:06.940813 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c4d1044-b0fe-43d7-b229-824775c8479f" containerName="extract-utilities" Oct 13 18:06:06 crc kubenswrapper[4720]: I1013 18:06:06.940818 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c4d1044-b0fe-43d7-b229-824775c8479f" containerName="extract-utilities" Oct 13 18:06:06 crc kubenswrapper[4720]: E1013 18:06:06.940826 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48f72697-fee4-4a05-ae1f-0c8eac474248" containerName="extract-content" Oct 13 18:06:06 crc kubenswrapper[4720]: I1013 18:06:06.940832 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="48f72697-fee4-4a05-ae1f-0c8eac474248" containerName="extract-content" Oct 13 18:06:06 crc kubenswrapper[4720]: E1013 18:06:06.940842 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48f72697-fee4-4a05-ae1f-0c8eac474248" containerName="registry-server" Oct 13 18:06:06 crc kubenswrapper[4720]: I1013 18:06:06.940847 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="48f72697-fee4-4a05-ae1f-0c8eac474248" containerName="registry-server" Oct 13 18:06:06 crc kubenswrapper[4720]: I1013 18:06:06.941015 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="9810822f-63d1-4a31-bde3-6353a5ee9007" containerName="nova-edpm-deployment-openstack-edpm-ipam" Oct 13 18:06:06 crc kubenswrapper[4720]: I1013 18:06:06.941026 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="48f72697-fee4-4a05-ae1f-0c8eac474248" containerName="registry-server" Oct 13 
18:06:06 crc kubenswrapper[4720]: I1013 18:06:06.941045 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c4d1044-b0fe-43d7-b229-824775c8479f" containerName="registry-server" Oct 13 18:06:06 crc kubenswrapper[4720]: I1013 18:06:06.941646 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bsltd" Oct 13 18:06:06 crc kubenswrapper[4720]: I1013 18:06:06.945567 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-l2fds" Oct 13 18:06:06 crc kubenswrapper[4720]: I1013 18:06:06.945743 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 13 18:06:06 crc kubenswrapper[4720]: I1013 18:06:06.945936 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Oct 13 18:06:06 crc kubenswrapper[4720]: I1013 18:06:06.946041 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 13 18:06:06 crc kubenswrapper[4720]: I1013 18:06:06.946327 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 13 18:06:06 crc kubenswrapper[4720]: I1013 18:06:06.961573 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bsltd"] Oct 13 18:06:07 crc kubenswrapper[4720]: I1013 18:06:07.047708 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jx78n\" (UniqueName: \"kubernetes.io/projected/6c0e5c67-6b6c-4b09-8d45-f37f83c017a7-kube-api-access-jx78n\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bsltd\" (UID: \"6c0e5c67-6b6c-4b09-8d45-f37f83c017a7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bsltd" Oct 13 18:06:07 crc kubenswrapper[4720]: I1013 18:06:07.047889 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/6c0e5c67-6b6c-4b09-8d45-f37f83c017a7-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bsltd\" (UID: \"6c0e5c67-6b6c-4b09-8d45-f37f83c017a7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bsltd" Oct 13 18:06:07 crc kubenswrapper[4720]: I1013 18:06:07.047948 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6c0e5c67-6b6c-4b09-8d45-f37f83c017a7-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bsltd\" (UID: \"6c0e5c67-6b6c-4b09-8d45-f37f83c017a7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bsltd" Oct 13 18:06:07 crc kubenswrapper[4720]: I1013 18:06:07.048162 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/6c0e5c67-6b6c-4b09-8d45-f37f83c017a7-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bsltd\" (UID: \"6c0e5c67-6b6c-4b09-8d45-f37f83c017a7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bsltd" Oct 13 18:06:07 crc kubenswrapper[4720]: I1013 18:06:07.048531 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c0e5c67-6b6c-4b09-8d45-f37f83c017a7-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bsltd\" (UID: \"6c0e5c67-6b6c-4b09-8d45-f37f83c017a7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bsltd" Oct 13 18:06:07 crc kubenswrapper[4720]: I1013 18:06:07.048667 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/6c0e5c67-6b6c-4b09-8d45-f37f83c017a7-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bsltd\" (UID: \"6c0e5c67-6b6c-4b09-8d45-f37f83c017a7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bsltd" Oct 13 18:06:07 crc kubenswrapper[4720]: I1013 18:06:07.048710 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6c0e5c67-6b6c-4b09-8d45-f37f83c017a7-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bsltd\" (UID: \"6c0e5c67-6b6c-4b09-8d45-f37f83c017a7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bsltd" Oct 13 18:06:07 crc kubenswrapper[4720]: I1013 18:06:07.149986 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/6c0e5c67-6b6c-4b09-8d45-f37f83c017a7-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bsltd\" (UID: \"6c0e5c67-6b6c-4b09-8d45-f37f83c017a7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bsltd" Oct 13 18:06:07 crc kubenswrapper[4720]: I1013 18:06:07.150031 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6c0e5c67-6b6c-4b09-8d45-f37f83c017a7-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bsltd\" (UID: \"6c0e5c67-6b6c-4b09-8d45-f37f83c017a7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bsltd" Oct 13 18:06:07 crc kubenswrapper[4720]: I1013 18:06:07.150099 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/6c0e5c67-6b6c-4b09-8d45-f37f83c017a7-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bsltd\" (UID: \"6c0e5c67-6b6c-4b09-8d45-f37f83c017a7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bsltd" Oct 13 18:06:07 crc kubenswrapper[4720]: I1013 18:06:07.150209 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c0e5c67-6b6c-4b09-8d45-f37f83c017a7-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bsltd\" (UID: \"6c0e5c67-6b6c-4b09-8d45-f37f83c017a7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bsltd" Oct 13 18:06:07 crc kubenswrapper[4720]: I1013 18:06:07.150274 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/6c0e5c67-6b6c-4b09-8d45-f37f83c017a7-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bsltd\" (UID: \"6c0e5c67-6b6c-4b09-8d45-f37f83c017a7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bsltd" Oct 13 18:06:07 crc 
kubenswrapper[4720]: I1013 18:06:07.150303 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6c0e5c67-6b6c-4b09-8d45-f37f83c017a7-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bsltd\" (UID: \"6c0e5c67-6b6c-4b09-8d45-f37f83c017a7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bsltd" Oct 13 18:06:07 crc kubenswrapper[4720]: I1013 18:06:07.150336 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jx78n\" (UniqueName: \"kubernetes.io/projected/6c0e5c67-6b6c-4b09-8d45-f37f83c017a7-kube-api-access-jx78n\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bsltd\" (UID: \"6c0e5c67-6b6c-4b09-8d45-f37f83c017a7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bsltd" Oct 13 18:06:07 crc kubenswrapper[4720]: I1013 18:06:07.157119 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/6c0e5c67-6b6c-4b09-8d45-f37f83c017a7-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bsltd\" (UID: \"6c0e5c67-6b6c-4b09-8d45-f37f83c017a7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bsltd" Oct 13 18:06:07 crc kubenswrapper[4720]: I1013 18:06:07.157271 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/6c0e5c67-6b6c-4b09-8d45-f37f83c017a7-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bsltd\" (UID: \"6c0e5c67-6b6c-4b09-8d45-f37f83c017a7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bsltd" Oct 13 18:06:07 crc kubenswrapper[4720]: I1013 18:06:07.157665 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6c0e5c67-6b6c-4b09-8d45-f37f83c017a7-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bsltd\" (UID: \"6c0e5c67-6b6c-4b09-8d45-f37f83c017a7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bsltd" Oct 13 18:06:07 crc kubenswrapper[4720]: I1013 18:06:07.158613 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6c0e5c67-6b6c-4b09-8d45-f37f83c017a7-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bsltd\" (UID: \"6c0e5c67-6b6c-4b09-8d45-f37f83c017a7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bsltd" Oct 13 18:06:07 crc kubenswrapper[4720]: I1013 18:06:07.159292 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/6c0e5c67-6b6c-4b09-8d45-f37f83c017a7-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bsltd\" (UID: \"6c0e5c67-6b6c-4b09-8d45-f37f83c017a7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bsltd" Oct 13 18:06:07 crc kubenswrapper[4720]: I1013 18:06:07.170217 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c0e5c67-6b6c-4b09-8d45-f37f83c017a7-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bsltd\" (UID: \"6c0e5c67-6b6c-4b09-8d45-f37f83c017a7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bsltd" Oct 13 18:06:07 crc kubenswrapper[4720]: I1013 
18:06:07.174388 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jx78n\" (UniqueName: \"kubernetes.io/projected/6c0e5c67-6b6c-4b09-8d45-f37f83c017a7-kube-api-access-jx78n\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-bsltd\" (UID: \"6c0e5c67-6b6c-4b09-8d45-f37f83c017a7\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bsltd" Oct 13 18:06:07 crc kubenswrapper[4720]: I1013 18:06:07.267986 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bsltd" Oct 13 18:06:07 crc kubenswrapper[4720]: I1013 18:06:07.834886 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bsltd"] Oct 13 18:06:08 crc kubenswrapper[4720]: I1013 18:06:08.854827 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bsltd" event={"ID":"6c0e5c67-6b6c-4b09-8d45-f37f83c017a7","Type":"ContainerStarted","Data":"ce04bc8a131cb264f7fc055781df47a42910f22a873304495a1d2be66de3d7e0"} Oct 13 18:06:09 crc kubenswrapper[4720]: I1013 18:06:09.868998 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bsltd" event={"ID":"6c0e5c67-6b6c-4b09-8d45-f37f83c017a7","Type":"ContainerStarted","Data":"feb1f5ef25f1350d6abe1f35962955332cf5782b42e88e45d39e73a1b720715e"} Oct 13 18:06:09 crc kubenswrapper[4720]: I1013 18:06:09.900406 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bsltd" podStartSLOduration=2.4050622 podStartE2EDuration="3.900389882s" podCreationTimestamp="2025-10-13 18:06:06 +0000 UTC" firstStartedPulling="2025-10-13 18:06:07.836706554 +0000 UTC m=+2513.293956726" lastFinishedPulling="2025-10-13 18:06:09.332034276 +0000 UTC m=+2514.789284408" observedRunningTime="2025-10-13 18:06:09.898032411 +0000 UTC m=+2515.355282553" watchObservedRunningTime="2025-10-13 18:06:09.900389882 +0000 UTC m=+2515.357640024" Oct 13 18:07:45 crc kubenswrapper[4720]: I1013 18:07:45.212784 4720 patch_prober.go:28] interesting pod/machine-config-daemon-htwnl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 18:07:45 crc kubenswrapper[4720]: I1013 18:07:45.213460 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 18:08:15 crc kubenswrapper[4720]: I1013 18:08:15.212628 4720 patch_prober.go:28] interesting pod/machine-config-daemon-htwnl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 18:08:15 crc kubenswrapper[4720]: I1013 18:08:15.213560 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": 
dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 18:08:45 crc kubenswrapper[4720]: I1013 18:08:45.212493 4720 patch_prober.go:28] interesting pod/machine-config-daemon-htwnl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 18:08:45 crc kubenswrapper[4720]: I1013 18:08:45.213000 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 18:08:45 crc kubenswrapper[4720]: I1013 18:08:45.213044 4720 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" Oct 13 18:08:45 crc kubenswrapper[4720]: I1013 18:08:45.213777 4720 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d60282f0b598e1436e2840341bf3d33836603e66bded554693a91d04186c59fa"} pod="openshift-machine-config-operator/machine-config-daemon-htwnl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 13 18:08:45 crc kubenswrapper[4720]: I1013 18:08:45.213834 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" containerName="machine-config-daemon" containerID="cri-o://d60282f0b598e1436e2840341bf3d33836603e66bded554693a91d04186c59fa" gracePeriod=600 Oct 13 18:08:45 crc kubenswrapper[4720]: I1013 18:08:45.481849 4720 generic.go:334] "Generic (PLEG): container finished" podID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" containerID="d60282f0b598e1436e2840341bf3d33836603e66bded554693a91d04186c59fa" exitCode=0 Oct 13 18:08:45 crc kubenswrapper[4720]: I1013 18:08:45.481942 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" event={"ID":"ce442c80-fcde-4b79-b6f9-f8f25771dfd4","Type":"ContainerDied","Data":"d60282f0b598e1436e2840341bf3d33836603e66bded554693a91d04186c59fa"} Oct 13 18:08:45 crc kubenswrapper[4720]: I1013 18:08:45.482284 4720 scope.go:117] "RemoveContainer" containerID="f16b061e305a205bc6cd67c30a1e6ff436af254fc7370be55f944123cc9cd230" Oct 13 18:08:46 crc kubenswrapper[4720]: I1013 18:08:46.498652 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" event={"ID":"ce442c80-fcde-4b79-b6f9-f8f25771dfd4","Type":"ContainerStarted","Data":"8955f01d60828c6a158946e7dd0ce8b2360fc11ddc5d461465ae7048137b38b7"} Oct 13 18:08:52 crc kubenswrapper[4720]: I1013 18:08:52.564383 4720 generic.go:334] "Generic (PLEG): container finished" podID="6c0e5c67-6b6c-4b09-8d45-f37f83c017a7" containerID="feb1f5ef25f1350d6abe1f35962955332cf5782b42e88e45d39e73a1b720715e" exitCode=0 Oct 13 18:08:52 crc kubenswrapper[4720]: I1013 18:08:52.564515 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bsltd" event={"ID":"6c0e5c67-6b6c-4b09-8d45-f37f83c017a7","Type":"ContainerDied","Data":"feb1f5ef25f1350d6abe1f35962955332cf5782b42e88e45d39e73a1b720715e"} Oct 13 18:08:54 crc 
kubenswrapper[4720]: I1013 18:08:54.094200 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bsltd" Oct 13 18:08:54 crc kubenswrapper[4720]: I1013 18:08:54.224552 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6c0e5c67-6b6c-4b09-8d45-f37f83c017a7-inventory\") pod \"6c0e5c67-6b6c-4b09-8d45-f37f83c017a7\" (UID: \"6c0e5c67-6b6c-4b09-8d45-f37f83c017a7\") " Oct 13 18:08:54 crc kubenswrapper[4720]: I1013 18:08:54.224858 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/6c0e5c67-6b6c-4b09-8d45-f37f83c017a7-ceilometer-compute-config-data-2\") pod \"6c0e5c67-6b6c-4b09-8d45-f37f83c017a7\" (UID: \"6c0e5c67-6b6c-4b09-8d45-f37f83c017a7\") " Oct 13 18:08:54 crc kubenswrapper[4720]: I1013 18:08:54.224927 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/6c0e5c67-6b6c-4b09-8d45-f37f83c017a7-ceilometer-compute-config-data-0\") pod \"6c0e5c67-6b6c-4b09-8d45-f37f83c017a7\" (UID: \"6c0e5c67-6b6c-4b09-8d45-f37f83c017a7\") " Oct 13 18:08:54 crc kubenswrapper[4720]: I1013 18:08:54.224953 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6c0e5c67-6b6c-4b09-8d45-f37f83c017a7-ssh-key\") pod \"6c0e5c67-6b6c-4b09-8d45-f37f83c017a7\" (UID: \"6c0e5c67-6b6c-4b09-8d45-f37f83c017a7\") " Oct 13 18:08:54 crc kubenswrapper[4720]: I1013 18:08:54.225016 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/6c0e5c67-6b6c-4b09-8d45-f37f83c017a7-ceilometer-compute-config-data-1\") pod \"6c0e5c67-6b6c-4b09-8d45-f37f83c017a7\" (UID: \"6c0e5c67-6b6c-4b09-8d45-f37f83c017a7\") " Oct 13 18:08:54 crc kubenswrapper[4720]: I1013 18:08:54.225107 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c0e5c67-6b6c-4b09-8d45-f37f83c017a7-telemetry-combined-ca-bundle\") pod \"6c0e5c67-6b6c-4b09-8d45-f37f83c017a7\" (UID: \"6c0e5c67-6b6c-4b09-8d45-f37f83c017a7\") " Oct 13 18:08:54 crc kubenswrapper[4720]: I1013 18:08:54.225238 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jx78n\" (UniqueName: \"kubernetes.io/projected/6c0e5c67-6b6c-4b09-8d45-f37f83c017a7-kube-api-access-jx78n\") pod \"6c0e5c67-6b6c-4b09-8d45-f37f83c017a7\" (UID: \"6c0e5c67-6b6c-4b09-8d45-f37f83c017a7\") " Oct 13 18:08:54 crc kubenswrapper[4720]: I1013 18:08:54.233560 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c0e5c67-6b6c-4b09-8d45-f37f83c017a7-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "6c0e5c67-6b6c-4b09-8d45-f37f83c017a7" (UID: "6c0e5c67-6b6c-4b09-8d45-f37f83c017a7"). InnerVolumeSpecName "telemetry-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 18:08:54 crc kubenswrapper[4720]: I1013 18:08:54.233587 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c0e5c67-6b6c-4b09-8d45-f37f83c017a7-kube-api-access-jx78n" (OuterVolumeSpecName: "kube-api-access-jx78n") pod "6c0e5c67-6b6c-4b09-8d45-f37f83c017a7" (UID: "6c0e5c67-6b6c-4b09-8d45-f37f83c017a7"). InnerVolumeSpecName "kube-api-access-jx78n". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 18:08:54 crc kubenswrapper[4720]: I1013 18:08:54.252339 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c0e5c67-6b6c-4b09-8d45-f37f83c017a7-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "6c0e5c67-6b6c-4b09-8d45-f37f83c017a7" (UID: "6c0e5c67-6b6c-4b09-8d45-f37f83c017a7"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 18:08:54 crc kubenswrapper[4720]: I1013 18:08:54.256829 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c0e5c67-6b6c-4b09-8d45-f37f83c017a7-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "6c0e5c67-6b6c-4b09-8d45-f37f83c017a7" (UID: "6c0e5c67-6b6c-4b09-8d45-f37f83c017a7"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 18:08:54 crc kubenswrapper[4720]: I1013 18:08:54.260318 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c0e5c67-6b6c-4b09-8d45-f37f83c017a7-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "6c0e5c67-6b6c-4b09-8d45-f37f83c017a7" (UID: "6c0e5c67-6b6c-4b09-8d45-f37f83c017a7"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 18:08:54 crc kubenswrapper[4720]: I1013 18:08:54.271792 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c0e5c67-6b6c-4b09-8d45-f37f83c017a7-inventory" (OuterVolumeSpecName: "inventory") pod "6c0e5c67-6b6c-4b09-8d45-f37f83c017a7" (UID: "6c0e5c67-6b6c-4b09-8d45-f37f83c017a7"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 18:08:54 crc kubenswrapper[4720]: I1013 18:08:54.281490 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c0e5c67-6b6c-4b09-8d45-f37f83c017a7-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "6c0e5c67-6b6c-4b09-8d45-f37f83c017a7" (UID: "6c0e5c67-6b6c-4b09-8d45-f37f83c017a7"). InnerVolumeSpecName "ceilometer-compute-config-data-2". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 18:08:54 crc kubenswrapper[4720]: I1013 18:08:54.328029 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jx78n\" (UniqueName: \"kubernetes.io/projected/6c0e5c67-6b6c-4b09-8d45-f37f83c017a7-kube-api-access-jx78n\") on node \"crc\" DevicePath \"\"" Oct 13 18:08:54 crc kubenswrapper[4720]: I1013 18:08:54.328072 4720 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6c0e5c67-6b6c-4b09-8d45-f37f83c017a7-inventory\") on node \"crc\" DevicePath \"\"" Oct 13 18:08:54 crc kubenswrapper[4720]: I1013 18:08:54.328086 4720 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/6c0e5c67-6b6c-4b09-8d45-f37f83c017a7-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Oct 13 18:08:54 crc kubenswrapper[4720]: I1013 18:08:54.328099 4720 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/6c0e5c67-6b6c-4b09-8d45-f37f83c017a7-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Oct 13 18:08:54 crc kubenswrapper[4720]: I1013 18:08:54.328112 4720 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6c0e5c67-6b6c-4b09-8d45-f37f83c017a7-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 13 18:08:54 crc kubenswrapper[4720]: I1013 18:08:54.328123 4720 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/6c0e5c67-6b6c-4b09-8d45-f37f83c017a7-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Oct 13 18:08:54 crc kubenswrapper[4720]: I1013 18:08:54.328135 4720 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c0e5c67-6b6c-4b09-8d45-f37f83c017a7-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 13 18:08:54 crc kubenswrapper[4720]: I1013 18:08:54.590346 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bsltd" event={"ID":"6c0e5c67-6b6c-4b09-8d45-f37f83c017a7","Type":"ContainerDied","Data":"ce04bc8a131cb264f7fc055781df47a42910f22a873304495a1d2be66de3d7e0"} Oct 13 18:08:54 crc kubenswrapper[4720]: I1013 18:08:54.590408 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce04bc8a131cb264f7fc055781df47a42910f22a873304495a1d2be66de3d7e0" Oct 13 18:08:54 crc kubenswrapper[4720]: I1013 18:08:54.590428 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-bsltd" Oct 13 18:09:20 crc kubenswrapper[4720]: I1013 18:09:20.865122 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-s5g79"] Oct 13 18:09:20 crc kubenswrapper[4720]: E1013 18:09:20.866381 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c0e5c67-6b6c-4b09-8d45-f37f83c017a7" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Oct 13 18:09:20 crc kubenswrapper[4720]: I1013 18:09:20.866424 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c0e5c67-6b6c-4b09-8d45-f37f83c017a7" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Oct 13 18:09:20 crc kubenswrapper[4720]: I1013 18:09:20.866735 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c0e5c67-6b6c-4b09-8d45-f37f83c017a7" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Oct 13 18:09:20 crc kubenswrapper[4720]: I1013 18:09:20.868941 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s5g79" Oct 13 18:09:20 crc kubenswrapper[4720]: I1013 18:09:20.904453 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s5g79"] Oct 13 18:09:20 crc kubenswrapper[4720]: I1013 18:09:20.993449 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd557d6f-c260-41ba-a1ef-1d9ad6485866-utilities\") pod \"redhat-marketplace-s5g79\" (UID: \"bd557d6f-c260-41ba-a1ef-1d9ad6485866\") " pod="openshift-marketplace/redhat-marketplace-s5g79" Oct 13 18:09:20 crc kubenswrapper[4720]: I1013 18:09:20.993672 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhkr8\" (UniqueName: \"kubernetes.io/projected/bd557d6f-c260-41ba-a1ef-1d9ad6485866-kube-api-access-fhkr8\") pod \"redhat-marketplace-s5g79\" (UID: \"bd557d6f-c260-41ba-a1ef-1d9ad6485866\") " pod="openshift-marketplace/redhat-marketplace-s5g79" Oct 13 18:09:20 crc kubenswrapper[4720]: I1013 18:09:20.993782 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd557d6f-c260-41ba-a1ef-1d9ad6485866-catalog-content\") pod \"redhat-marketplace-s5g79\" (UID: \"bd557d6f-c260-41ba-a1ef-1d9ad6485866\") " pod="openshift-marketplace/redhat-marketplace-s5g79" Oct 13 18:09:21 crc kubenswrapper[4720]: I1013 18:09:21.096459 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd557d6f-c260-41ba-a1ef-1d9ad6485866-utilities\") pod \"redhat-marketplace-s5g79\" (UID: \"bd557d6f-c260-41ba-a1ef-1d9ad6485866\") " pod="openshift-marketplace/redhat-marketplace-s5g79" Oct 13 18:09:21 crc kubenswrapper[4720]: I1013 18:09:21.096594 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhkr8\" (UniqueName: \"kubernetes.io/projected/bd557d6f-c260-41ba-a1ef-1d9ad6485866-kube-api-access-fhkr8\") pod \"redhat-marketplace-s5g79\" (UID: \"bd557d6f-c260-41ba-a1ef-1d9ad6485866\") " pod="openshift-marketplace/redhat-marketplace-s5g79" Oct 13 18:09:21 crc kubenswrapper[4720]: I1013 18:09:21.096651 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/bd557d6f-c260-41ba-a1ef-1d9ad6485866-catalog-content\") pod \"redhat-marketplace-s5g79\" (UID: \"bd557d6f-c260-41ba-a1ef-1d9ad6485866\") " pod="openshift-marketplace/redhat-marketplace-s5g79" Oct 13 18:09:21 crc kubenswrapper[4720]: I1013 18:09:21.097051 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd557d6f-c260-41ba-a1ef-1d9ad6485866-utilities\") pod \"redhat-marketplace-s5g79\" (UID: \"bd557d6f-c260-41ba-a1ef-1d9ad6485866\") " pod="openshift-marketplace/redhat-marketplace-s5g79" Oct 13 18:09:21 crc kubenswrapper[4720]: I1013 18:09:21.097274 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd557d6f-c260-41ba-a1ef-1d9ad6485866-catalog-content\") pod \"redhat-marketplace-s5g79\" (UID: \"bd557d6f-c260-41ba-a1ef-1d9ad6485866\") " pod="openshift-marketplace/redhat-marketplace-s5g79" Oct 13 18:09:21 crc kubenswrapper[4720]: I1013 18:09:21.123338 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhkr8\" (UniqueName: \"kubernetes.io/projected/bd557d6f-c260-41ba-a1ef-1d9ad6485866-kube-api-access-fhkr8\") pod \"redhat-marketplace-s5g79\" (UID: \"bd557d6f-c260-41ba-a1ef-1d9ad6485866\") " pod="openshift-marketplace/redhat-marketplace-s5g79" Oct 13 18:09:21 crc kubenswrapper[4720]: I1013 18:09:21.189285 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s5g79" Oct 13 18:09:21 crc kubenswrapper[4720]: I1013 18:09:21.624528 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s5g79"] Oct 13 18:09:21 crc kubenswrapper[4720]: I1013 18:09:21.876111 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s5g79" event={"ID":"bd557d6f-c260-41ba-a1ef-1d9ad6485866","Type":"ContainerStarted","Data":"0110756735fb4e2164f1cb13eecdaa2a4be0bd4dfbcd189d98bfbfe912411551"} Oct 13 18:09:22 crc kubenswrapper[4720]: I1013 18:09:22.895947 4720 generic.go:334] "Generic (PLEG): container finished" podID="bd557d6f-c260-41ba-a1ef-1d9ad6485866" containerID="d2200e93c23a17cd19d2dc9dbf773244a3b0e3358dbe44f793abf2a3c5c089b5" exitCode=0 Oct 13 18:09:22 crc kubenswrapper[4720]: I1013 18:09:22.896000 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s5g79" event={"ID":"bd557d6f-c260-41ba-a1ef-1d9ad6485866","Type":"ContainerDied","Data":"d2200e93c23a17cd19d2dc9dbf773244a3b0e3358dbe44f793abf2a3c5c089b5"} Oct 13 18:09:22 crc kubenswrapper[4720]: I1013 18:09:22.901178 4720 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 13 18:09:23 crc kubenswrapper[4720]: E1013 18:09:23.905279 4720 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.17:35620->38.102.83.17:42869: write tcp 38.102.83.17:35620->38.102.83.17:42869: write: broken pipe Oct 13 18:09:23 crc kubenswrapper[4720]: I1013 18:09:23.921487 4720 generic.go:334] "Generic (PLEG): container finished" podID="bd557d6f-c260-41ba-a1ef-1d9ad6485866" containerID="fb105ee781ffc2e8c650d9a57bd13fa04f71c19a985c56a7d6f4d49f005d1a06" exitCode=0 Oct 13 18:09:23 crc kubenswrapper[4720]: I1013 18:09:23.921542 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s5g79" 
event={"ID":"bd557d6f-c260-41ba-a1ef-1d9ad6485866","Type":"ContainerDied","Data":"fb105ee781ffc2e8c650d9a57bd13fa04f71c19a985c56a7d6f4d49f005d1a06"} Oct 13 18:09:24 crc kubenswrapper[4720]: I1013 18:09:24.933332 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s5g79" event={"ID":"bd557d6f-c260-41ba-a1ef-1d9ad6485866","Type":"ContainerStarted","Data":"bf2080b4a71a60affb5e97358210b15f0c2a681161cdfc434da1771a5727260b"} Oct 13 18:09:24 crc kubenswrapper[4720]: I1013 18:09:24.955105 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-s5g79" podStartSLOduration=3.513316442 podStartE2EDuration="4.95507005s" podCreationTimestamp="2025-10-13 18:09:20 +0000 UTC" firstStartedPulling="2025-10-13 18:09:22.900455557 +0000 UTC m=+2708.357705719" lastFinishedPulling="2025-10-13 18:09:24.342209185 +0000 UTC m=+2709.799459327" observedRunningTime="2025-10-13 18:09:24.951150829 +0000 UTC m=+2710.408400971" watchObservedRunningTime="2025-10-13 18:09:24.95507005 +0000 UTC m=+2710.412320232" Oct 13 18:09:25 crc kubenswrapper[4720]: I1013 18:09:25.846595 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5tbc6"] Oct 13 18:09:25 crc kubenswrapper[4720]: I1013 18:09:25.850278 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5tbc6" Oct 13 18:09:25 crc kubenswrapper[4720]: I1013 18:09:25.870667 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5tbc6"] Oct 13 18:09:25 crc kubenswrapper[4720]: I1013 18:09:25.995310 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24129184-4fa9-4b35-8fc9-f2b19ae96dcc-utilities\") pod \"community-operators-5tbc6\" (UID: \"24129184-4fa9-4b35-8fc9-f2b19ae96dcc\") " pod="openshift-marketplace/community-operators-5tbc6" Oct 13 18:09:25 crc kubenswrapper[4720]: I1013 18:09:25.995701 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2p9tc\" (UniqueName: \"kubernetes.io/projected/24129184-4fa9-4b35-8fc9-f2b19ae96dcc-kube-api-access-2p9tc\") pod \"community-operators-5tbc6\" (UID: \"24129184-4fa9-4b35-8fc9-f2b19ae96dcc\") " pod="openshift-marketplace/community-operators-5tbc6" Oct 13 18:09:25 crc kubenswrapper[4720]: I1013 18:09:25.995896 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24129184-4fa9-4b35-8fc9-f2b19ae96dcc-catalog-content\") pod \"community-operators-5tbc6\" (UID: \"24129184-4fa9-4b35-8fc9-f2b19ae96dcc\") " pod="openshift-marketplace/community-operators-5tbc6" Oct 13 18:09:26 crc kubenswrapper[4720]: I1013 18:09:26.097584 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2p9tc\" (UniqueName: \"kubernetes.io/projected/24129184-4fa9-4b35-8fc9-f2b19ae96dcc-kube-api-access-2p9tc\") pod \"community-operators-5tbc6\" (UID: \"24129184-4fa9-4b35-8fc9-f2b19ae96dcc\") " pod="openshift-marketplace/community-operators-5tbc6" Oct 13 18:09:26 crc kubenswrapper[4720]: I1013 18:09:26.097768 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/24129184-4fa9-4b35-8fc9-f2b19ae96dcc-catalog-content\") pod \"community-operators-5tbc6\" (UID: \"24129184-4fa9-4b35-8fc9-f2b19ae96dcc\") " pod="openshift-marketplace/community-operators-5tbc6" Oct 13 18:09:26 crc kubenswrapper[4720]: I1013 18:09:26.097812 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24129184-4fa9-4b35-8fc9-f2b19ae96dcc-utilities\") pod \"community-operators-5tbc6\" (UID: \"24129184-4fa9-4b35-8fc9-f2b19ae96dcc\") " pod="openshift-marketplace/community-operators-5tbc6" Oct 13 18:09:26 crc kubenswrapper[4720]: I1013 18:09:26.098403 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24129184-4fa9-4b35-8fc9-f2b19ae96dcc-utilities\") pod \"community-operators-5tbc6\" (UID: \"24129184-4fa9-4b35-8fc9-f2b19ae96dcc\") " pod="openshift-marketplace/community-operators-5tbc6" Oct 13 18:09:26 crc kubenswrapper[4720]: I1013 18:09:26.098403 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24129184-4fa9-4b35-8fc9-f2b19ae96dcc-catalog-content\") pod \"community-operators-5tbc6\" (UID: \"24129184-4fa9-4b35-8fc9-f2b19ae96dcc\") " pod="openshift-marketplace/community-operators-5tbc6" Oct 13 18:09:26 crc kubenswrapper[4720]: I1013 18:09:26.116778 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2p9tc\" (UniqueName: \"kubernetes.io/projected/24129184-4fa9-4b35-8fc9-f2b19ae96dcc-kube-api-access-2p9tc\") pod \"community-operators-5tbc6\" (UID: \"24129184-4fa9-4b35-8fc9-f2b19ae96dcc\") " pod="openshift-marketplace/community-operators-5tbc6" Oct 13 18:09:26 crc kubenswrapper[4720]: I1013 18:09:26.199532 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5tbc6" Oct 13 18:09:26 crc kubenswrapper[4720]: I1013 18:09:26.793242 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5tbc6"] Oct 13 18:09:26 crc kubenswrapper[4720]: W1013 18:09:26.796037 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24129184_4fa9_4b35_8fc9_f2b19ae96dcc.slice/crio-704ddee89b207a10fbd464da95764f5ab1a650642a94ccd9dd9ec28d002c2f10 WatchSource:0}: Error finding container 704ddee89b207a10fbd464da95764f5ab1a650642a94ccd9dd9ec28d002c2f10: Status 404 returned error can't find the container with id 704ddee89b207a10fbd464da95764f5ab1a650642a94ccd9dd9ec28d002c2f10 Oct 13 18:09:26 crc kubenswrapper[4720]: I1013 18:09:26.959039 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5tbc6" event={"ID":"24129184-4fa9-4b35-8fc9-f2b19ae96dcc","Type":"ContainerStarted","Data":"704ddee89b207a10fbd464da95764f5ab1a650642a94ccd9dd9ec28d002c2f10"} Oct 13 18:09:27 crc kubenswrapper[4720]: I1013 18:09:27.967818 4720 generic.go:334] "Generic (PLEG): container finished" podID="24129184-4fa9-4b35-8fc9-f2b19ae96dcc" containerID="8ac9112f708fd3452c746fef2272f30d0537b69b3873284c0fda3395ef0c1ed9" exitCode=0 Oct 13 18:09:27 crc kubenswrapper[4720]: I1013 18:09:27.967869 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5tbc6" event={"ID":"24129184-4fa9-4b35-8fc9-f2b19ae96dcc","Type":"ContainerDied","Data":"8ac9112f708fd3452c746fef2272f30d0537b69b3873284c0fda3395ef0c1ed9"} Oct 13 18:09:29 crc kubenswrapper[4720]: I1013 18:09:29.993567 4720 generic.go:334] "Generic (PLEG): container finished" podID="24129184-4fa9-4b35-8fc9-f2b19ae96dcc" containerID="58f09478cd7694cbefb38c49b48d28f8fd07d3cab7cb6ceb5128d17f916d5629" exitCode=0 Oct 13 18:09:29 crc kubenswrapper[4720]: I1013 18:09:29.993715 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5tbc6" event={"ID":"24129184-4fa9-4b35-8fc9-f2b19ae96dcc","Type":"ContainerDied","Data":"58f09478cd7694cbefb38c49b48d28f8fd07d3cab7cb6ceb5128d17f916d5629"} Oct 13 18:09:31 crc kubenswrapper[4720]: I1013 18:09:31.009025 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5tbc6" event={"ID":"24129184-4fa9-4b35-8fc9-f2b19ae96dcc","Type":"ContainerStarted","Data":"6c61fb83bcfa85ccf4a1ac2d1463aa4a63576acb337ab85db00bee0d5f76c15e"} Oct 13 18:09:31 crc kubenswrapper[4720]: I1013 18:09:31.036608 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5tbc6" podStartSLOduration=3.425963164 podStartE2EDuration="6.036587064s" podCreationTimestamp="2025-10-13 18:09:25 +0000 UTC" firstStartedPulling="2025-10-13 18:09:27.969820485 +0000 UTC m=+2713.427070617" lastFinishedPulling="2025-10-13 18:09:30.580444385 +0000 UTC m=+2716.037694517" observedRunningTime="2025-10-13 18:09:31.03024075 +0000 UTC m=+2716.487490892" watchObservedRunningTime="2025-10-13 18:09:31.036587064 +0000 UTC m=+2716.493837196" Oct 13 18:09:31 crc kubenswrapper[4720]: I1013 18:09:31.190753 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-s5g79" Oct 13 18:09:31 crc kubenswrapper[4720]: I1013 18:09:31.190783 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-s5g79" Oct 13 18:09:31 crc kubenswrapper[4720]: I1013 18:09:31.243420 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-s5g79" Oct 13 18:09:32 crc kubenswrapper[4720]: I1013 18:09:32.064956 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-s5g79" Oct 13 18:09:33 crc kubenswrapper[4720]: I1013 18:09:33.422766 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-s5g79"] Oct 13 18:09:34 crc kubenswrapper[4720]: I1013 18:09:34.037512 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-s5g79" podUID="bd557d6f-c260-41ba-a1ef-1d9ad6485866" containerName="registry-server" containerID="cri-o://bf2080b4a71a60affb5e97358210b15f0c2a681161cdfc434da1771a5727260b" gracePeriod=2 Oct 13 18:09:34 crc kubenswrapper[4720]: I1013 18:09:34.611819 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s5g79" Oct 13 18:09:34 crc kubenswrapper[4720]: I1013 18:09:34.695837 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhkr8\" (UniqueName: \"kubernetes.io/projected/bd557d6f-c260-41ba-a1ef-1d9ad6485866-kube-api-access-fhkr8\") pod \"bd557d6f-c260-41ba-a1ef-1d9ad6485866\" (UID: \"bd557d6f-c260-41ba-a1ef-1d9ad6485866\") " Oct 13 18:09:34 crc kubenswrapper[4720]: I1013 18:09:34.695909 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd557d6f-c260-41ba-a1ef-1d9ad6485866-utilities\") pod \"bd557d6f-c260-41ba-a1ef-1d9ad6485866\" (UID: \"bd557d6f-c260-41ba-a1ef-1d9ad6485866\") " Oct 13 18:09:34 crc kubenswrapper[4720]: I1013 18:09:34.696049 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd557d6f-c260-41ba-a1ef-1d9ad6485866-catalog-content\") pod \"bd557d6f-c260-41ba-a1ef-1d9ad6485866\" (UID: \"bd557d6f-c260-41ba-a1ef-1d9ad6485866\") " Oct 13 18:09:34 crc kubenswrapper[4720]: I1013 18:09:34.697235 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd557d6f-c260-41ba-a1ef-1d9ad6485866-utilities" (OuterVolumeSpecName: "utilities") pod "bd557d6f-c260-41ba-a1ef-1d9ad6485866" (UID: "bd557d6f-c260-41ba-a1ef-1d9ad6485866"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 18:09:34 crc kubenswrapper[4720]: I1013 18:09:34.713748 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd557d6f-c260-41ba-a1ef-1d9ad6485866-kube-api-access-fhkr8" (OuterVolumeSpecName: "kube-api-access-fhkr8") pod "bd557d6f-c260-41ba-a1ef-1d9ad6485866" (UID: "bd557d6f-c260-41ba-a1ef-1d9ad6485866"). InnerVolumeSpecName "kube-api-access-fhkr8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 18:09:34 crc kubenswrapper[4720]: I1013 18:09:34.726405 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd557d6f-c260-41ba-a1ef-1d9ad6485866-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bd557d6f-c260-41ba-a1ef-1d9ad6485866" (UID: "bd557d6f-c260-41ba-a1ef-1d9ad6485866"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 18:09:34 crc kubenswrapper[4720]: I1013 18:09:34.799064 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhkr8\" (UniqueName: \"kubernetes.io/projected/bd557d6f-c260-41ba-a1ef-1d9ad6485866-kube-api-access-fhkr8\") on node \"crc\" DevicePath \"\"" Oct 13 18:09:34 crc kubenswrapper[4720]: I1013 18:09:34.799442 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd557d6f-c260-41ba-a1ef-1d9ad6485866-utilities\") on node \"crc\" DevicePath \"\"" Oct 13 18:09:34 crc kubenswrapper[4720]: I1013 18:09:34.799542 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd557d6f-c260-41ba-a1ef-1d9ad6485866-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 13 18:09:35 crc kubenswrapper[4720]: I1013 18:09:35.053926 4720 generic.go:334] "Generic (PLEG): container finished" podID="bd557d6f-c260-41ba-a1ef-1d9ad6485866" containerID="bf2080b4a71a60affb5e97358210b15f0c2a681161cdfc434da1771a5727260b" exitCode=0 Oct 13 18:09:35 crc kubenswrapper[4720]: I1013 18:09:35.053979 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s5g79" event={"ID":"bd557d6f-c260-41ba-a1ef-1d9ad6485866","Type":"ContainerDied","Data":"bf2080b4a71a60affb5e97358210b15f0c2a681161cdfc434da1771a5727260b"} Oct 13 18:09:35 crc kubenswrapper[4720]: I1013 18:09:35.054012 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s5g79" event={"ID":"bd557d6f-c260-41ba-a1ef-1d9ad6485866","Type":"ContainerDied","Data":"0110756735fb4e2164f1cb13eecdaa2a4be0bd4dfbcd189d98bfbfe912411551"} Oct 13 18:09:35 crc kubenswrapper[4720]: I1013 18:09:35.054031 4720 scope.go:117] "RemoveContainer" containerID="bf2080b4a71a60affb5e97358210b15f0c2a681161cdfc434da1771a5727260b" Oct 13 18:09:35 crc kubenswrapper[4720]: I1013 18:09:35.054054 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s5g79" Oct 13 18:09:35 crc kubenswrapper[4720]: I1013 18:09:35.108681 4720 scope.go:117] "RemoveContainer" containerID="fb105ee781ffc2e8c650d9a57bd13fa04f71c19a985c56a7d6f4d49f005d1a06" Oct 13 18:09:35 crc kubenswrapper[4720]: I1013 18:09:35.119092 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-s5g79"] Oct 13 18:09:35 crc kubenswrapper[4720]: I1013 18:09:35.140834 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-s5g79"] Oct 13 18:09:35 crc kubenswrapper[4720]: I1013 18:09:35.146782 4720 scope.go:117] "RemoveContainer" containerID="d2200e93c23a17cd19d2dc9dbf773244a3b0e3358dbe44f793abf2a3c5c089b5" Oct 13 18:09:35 crc kubenswrapper[4720]: I1013 18:09:35.189304 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd557d6f-c260-41ba-a1ef-1d9ad6485866" path="/var/lib/kubelet/pods/bd557d6f-c260-41ba-a1ef-1d9ad6485866/volumes" Oct 13 18:09:35 crc kubenswrapper[4720]: I1013 18:09:35.216259 4720 scope.go:117] "RemoveContainer" containerID="bf2080b4a71a60affb5e97358210b15f0c2a681161cdfc434da1771a5727260b" Oct 13 18:09:35 crc kubenswrapper[4720]: E1013 18:09:35.216821 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf2080b4a71a60affb5e97358210b15f0c2a681161cdfc434da1771a5727260b\": container with ID starting with bf2080b4a71a60affb5e97358210b15f0c2a681161cdfc434da1771a5727260b not found: ID does not exist" containerID="bf2080b4a71a60affb5e97358210b15f0c2a681161cdfc434da1771a5727260b" Oct 13 18:09:35 crc kubenswrapper[4720]: I1013 18:09:35.217010 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf2080b4a71a60affb5e97358210b15f0c2a681161cdfc434da1771a5727260b"} err="failed to get container status \"bf2080b4a71a60affb5e97358210b15f0c2a681161cdfc434da1771a5727260b\": rpc error: code = NotFound desc = could not find container \"bf2080b4a71a60affb5e97358210b15f0c2a681161cdfc434da1771a5727260b\": container with ID starting with bf2080b4a71a60affb5e97358210b15f0c2a681161cdfc434da1771a5727260b not found: ID does not exist" Oct 13 18:09:35 crc kubenswrapper[4720]: I1013 18:09:35.217176 4720 scope.go:117] "RemoveContainer" containerID="fb105ee781ffc2e8c650d9a57bd13fa04f71c19a985c56a7d6f4d49f005d1a06" Oct 13 18:09:35 crc kubenswrapper[4720]: E1013 18:09:35.217877 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb105ee781ffc2e8c650d9a57bd13fa04f71c19a985c56a7d6f4d49f005d1a06\": container with ID starting with fb105ee781ffc2e8c650d9a57bd13fa04f71c19a985c56a7d6f4d49f005d1a06 not found: ID does not exist" containerID="fb105ee781ffc2e8c650d9a57bd13fa04f71c19a985c56a7d6f4d49f005d1a06" Oct 13 18:09:35 crc kubenswrapper[4720]: I1013 18:09:35.217945 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb105ee781ffc2e8c650d9a57bd13fa04f71c19a985c56a7d6f4d49f005d1a06"} err="failed to get container status \"fb105ee781ffc2e8c650d9a57bd13fa04f71c19a985c56a7d6f4d49f005d1a06\": rpc error: code = NotFound desc = could not find container \"fb105ee781ffc2e8c650d9a57bd13fa04f71c19a985c56a7d6f4d49f005d1a06\": container with ID starting with fb105ee781ffc2e8c650d9a57bd13fa04f71c19a985c56a7d6f4d49f005d1a06 not found: ID does not exist" Oct 13 18:09:35 crc kubenswrapper[4720]: I1013 
18:09:35.217988 4720 scope.go:117] "RemoveContainer" containerID="d2200e93c23a17cd19d2dc9dbf773244a3b0e3358dbe44f793abf2a3c5c089b5" Oct 13 18:09:35 crc kubenswrapper[4720]: E1013 18:09:35.218775 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2200e93c23a17cd19d2dc9dbf773244a3b0e3358dbe44f793abf2a3c5c089b5\": container with ID starting with d2200e93c23a17cd19d2dc9dbf773244a3b0e3358dbe44f793abf2a3c5c089b5 not found: ID does not exist" containerID="d2200e93c23a17cd19d2dc9dbf773244a3b0e3358dbe44f793abf2a3c5c089b5" Oct 13 18:09:35 crc kubenswrapper[4720]: I1013 18:09:35.218824 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2200e93c23a17cd19d2dc9dbf773244a3b0e3358dbe44f793abf2a3c5c089b5"} err="failed to get container status \"d2200e93c23a17cd19d2dc9dbf773244a3b0e3358dbe44f793abf2a3c5c089b5\": rpc error: code = NotFound desc = could not find container \"d2200e93c23a17cd19d2dc9dbf773244a3b0e3358dbe44f793abf2a3c5c089b5\": container with ID starting with d2200e93c23a17cd19d2dc9dbf773244a3b0e3358dbe44f793abf2a3c5c089b5 not found: ID does not exist" Oct 13 18:09:36 crc kubenswrapper[4720]: I1013 18:09:36.200227 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5tbc6" Oct 13 18:09:36 crc kubenswrapper[4720]: I1013 18:09:36.200875 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5tbc6" Oct 13 18:09:36 crc kubenswrapper[4720]: I1013 18:09:36.277629 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5tbc6" Oct 13 18:09:37 crc kubenswrapper[4720]: I1013 18:09:37.139895 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5tbc6" Oct 13 18:09:37 crc kubenswrapper[4720]: I1013 18:09:37.830140 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5tbc6"] Oct 13 18:09:39 crc kubenswrapper[4720]: I1013 18:09:39.108427 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5tbc6" podUID="24129184-4fa9-4b35-8fc9-f2b19ae96dcc" containerName="registry-server" containerID="cri-o://6c61fb83bcfa85ccf4a1ac2d1463aa4a63576acb337ab85db00bee0d5f76c15e" gracePeriod=2 Oct 13 18:09:39 crc kubenswrapper[4720]: I1013 18:09:39.559144 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5tbc6" Oct 13 18:09:39 crc kubenswrapper[4720]: I1013 18:09:39.716388 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24129184-4fa9-4b35-8fc9-f2b19ae96dcc-catalog-content\") pod \"24129184-4fa9-4b35-8fc9-f2b19ae96dcc\" (UID: \"24129184-4fa9-4b35-8fc9-f2b19ae96dcc\") " Oct 13 18:09:39 crc kubenswrapper[4720]: I1013 18:09:39.716556 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2p9tc\" (UniqueName: \"kubernetes.io/projected/24129184-4fa9-4b35-8fc9-f2b19ae96dcc-kube-api-access-2p9tc\") pod \"24129184-4fa9-4b35-8fc9-f2b19ae96dcc\" (UID: \"24129184-4fa9-4b35-8fc9-f2b19ae96dcc\") " Oct 13 18:09:39 crc kubenswrapper[4720]: I1013 18:09:39.716608 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24129184-4fa9-4b35-8fc9-f2b19ae96dcc-utilities\") pod \"24129184-4fa9-4b35-8fc9-f2b19ae96dcc\" (UID: \"24129184-4fa9-4b35-8fc9-f2b19ae96dcc\") " Oct 13 18:09:39 crc kubenswrapper[4720]: I1013 18:09:39.717884 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24129184-4fa9-4b35-8fc9-f2b19ae96dcc-utilities" (OuterVolumeSpecName: "utilities") pod "24129184-4fa9-4b35-8fc9-f2b19ae96dcc" (UID: "24129184-4fa9-4b35-8fc9-f2b19ae96dcc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 18:09:39 crc kubenswrapper[4720]: I1013 18:09:39.724366 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24129184-4fa9-4b35-8fc9-f2b19ae96dcc-kube-api-access-2p9tc" (OuterVolumeSpecName: "kube-api-access-2p9tc") pod "24129184-4fa9-4b35-8fc9-f2b19ae96dcc" (UID: "24129184-4fa9-4b35-8fc9-f2b19ae96dcc"). InnerVolumeSpecName "kube-api-access-2p9tc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 18:09:39 crc kubenswrapper[4720]: I1013 18:09:39.820062 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2p9tc\" (UniqueName: \"kubernetes.io/projected/24129184-4fa9-4b35-8fc9-f2b19ae96dcc-kube-api-access-2p9tc\") on node \"crc\" DevicePath \"\"" Oct 13 18:09:39 crc kubenswrapper[4720]: I1013 18:09:39.820127 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24129184-4fa9-4b35-8fc9-f2b19ae96dcc-utilities\") on node \"crc\" DevicePath \"\"" Oct 13 18:09:39 crc kubenswrapper[4720]: I1013 18:09:39.889311 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24129184-4fa9-4b35-8fc9-f2b19ae96dcc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "24129184-4fa9-4b35-8fc9-f2b19ae96dcc" (UID: "24129184-4fa9-4b35-8fc9-f2b19ae96dcc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 18:09:39 crc kubenswrapper[4720]: I1013 18:09:39.922608 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24129184-4fa9-4b35-8fc9-f2b19ae96dcc-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 13 18:09:40 crc kubenswrapper[4720]: I1013 18:09:40.058616 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Oct 13 18:09:40 crc kubenswrapper[4720]: E1013 18:09:40.058997 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd557d6f-c260-41ba-a1ef-1d9ad6485866" containerName="registry-server" Oct 13 18:09:40 crc kubenswrapper[4720]: I1013 18:09:40.059021 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd557d6f-c260-41ba-a1ef-1d9ad6485866" containerName="registry-server" Oct 13 18:09:40 crc kubenswrapper[4720]: E1013 18:09:40.059031 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd557d6f-c260-41ba-a1ef-1d9ad6485866" containerName="extract-content" Oct 13 18:09:40 crc kubenswrapper[4720]: I1013 18:09:40.059040 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd557d6f-c260-41ba-a1ef-1d9ad6485866" containerName="extract-content" Oct 13 18:09:40 crc kubenswrapper[4720]: E1013 18:09:40.059063 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24129184-4fa9-4b35-8fc9-f2b19ae96dcc" containerName="extract-utilities" Oct 13 18:09:40 crc kubenswrapper[4720]: I1013 18:09:40.059073 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="24129184-4fa9-4b35-8fc9-f2b19ae96dcc" containerName="extract-utilities" Oct 13 18:09:40 crc kubenswrapper[4720]: E1013 18:09:40.059087 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24129184-4fa9-4b35-8fc9-f2b19ae96dcc" containerName="registry-server" Oct 13 18:09:40 crc kubenswrapper[4720]: I1013 18:09:40.059095 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="24129184-4fa9-4b35-8fc9-f2b19ae96dcc" containerName="registry-server" Oct 13 18:09:40 crc kubenswrapper[4720]: E1013 18:09:40.059132 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd557d6f-c260-41ba-a1ef-1d9ad6485866" containerName="extract-utilities" Oct 13 18:09:40 crc kubenswrapper[4720]: I1013 18:09:40.059140 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd557d6f-c260-41ba-a1ef-1d9ad6485866" containerName="extract-utilities" Oct 13 18:09:40 crc kubenswrapper[4720]: E1013 18:09:40.059152 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24129184-4fa9-4b35-8fc9-f2b19ae96dcc" containerName="extract-content" Oct 13 18:09:40 crc kubenswrapper[4720]: I1013 18:09:40.059160 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="24129184-4fa9-4b35-8fc9-f2b19ae96dcc" containerName="extract-content" Oct 13 18:09:40 crc kubenswrapper[4720]: I1013 18:09:40.059389 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd557d6f-c260-41ba-a1ef-1d9ad6485866" containerName="registry-server" Oct 13 18:09:40 crc kubenswrapper[4720]: I1013 18:09:40.059415 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="24129184-4fa9-4b35-8fc9-f2b19ae96dcc" containerName="registry-server" Oct 13 18:09:40 crc kubenswrapper[4720]: I1013 18:09:40.060050 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Oct 13 18:09:40 crc kubenswrapper[4720]: I1013 18:09:40.064392 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Oct 13 18:09:40 crc kubenswrapper[4720]: I1013 18:09:40.064410 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Oct 13 18:09:40 crc kubenswrapper[4720]: I1013 18:09:40.065695 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Oct 13 18:09:40 crc kubenswrapper[4720]: I1013 18:09:40.069125 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-jr545" Oct 13 18:09:40 crc kubenswrapper[4720]: I1013 18:09:40.078807 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Oct 13 18:09:40 crc kubenswrapper[4720]: I1013 18:09:40.132288 4720 generic.go:334] "Generic (PLEG): container finished" podID="24129184-4fa9-4b35-8fc9-f2b19ae96dcc" containerID="6c61fb83bcfa85ccf4a1ac2d1463aa4a63576acb337ab85db00bee0d5f76c15e" exitCode=0 Oct 13 18:09:40 crc kubenswrapper[4720]: I1013 18:09:40.132345 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5tbc6" event={"ID":"24129184-4fa9-4b35-8fc9-f2b19ae96dcc","Type":"ContainerDied","Data":"6c61fb83bcfa85ccf4a1ac2d1463aa4a63576acb337ab85db00bee0d5f76c15e"} Oct 13 18:09:40 crc kubenswrapper[4720]: I1013 18:09:40.132382 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5tbc6" event={"ID":"24129184-4fa9-4b35-8fc9-f2b19ae96dcc","Type":"ContainerDied","Data":"704ddee89b207a10fbd464da95764f5ab1a650642a94ccd9dd9ec28d002c2f10"} Oct 13 18:09:40 crc kubenswrapper[4720]: I1013 18:09:40.132417 4720 scope.go:117] "RemoveContainer" containerID="6c61fb83bcfa85ccf4a1ac2d1463aa4a63576acb337ab85db00bee0d5f76c15e" Oct 13 18:09:40 crc kubenswrapper[4720]: I1013 18:09:40.132639 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5tbc6" Oct 13 18:09:40 crc kubenswrapper[4720]: I1013 18:09:40.136802 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"tempest-tests-tempest\" (UID: \"ece01f62-fd6d-4c42-9c9a-3bc25feed3cb\") " pod="openstack/tempest-tests-tempest" Oct 13 18:09:40 crc kubenswrapper[4720]: I1013 18:09:40.136897 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ece01f62-fd6d-4c42-9c9a-3bc25feed3cb-config-data\") pod \"tempest-tests-tempest\" (UID: \"ece01f62-fd6d-4c42-9c9a-3bc25feed3cb\") " pod="openstack/tempest-tests-tempest" Oct 13 18:09:40 crc kubenswrapper[4720]: I1013 18:09:40.136996 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ece01f62-fd6d-4c42-9c9a-3bc25feed3cb-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"ece01f62-fd6d-4c42-9c9a-3bc25feed3cb\") " pod="openstack/tempest-tests-tempest" Oct 13 18:09:40 crc kubenswrapper[4720]: I1013 18:09:40.137381 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ece01f62-fd6d-4c42-9c9a-3bc25feed3cb-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"ece01f62-fd6d-4c42-9c9a-3bc25feed3cb\") " pod="openstack/tempest-tests-tempest" Oct 13 18:09:40 crc kubenswrapper[4720]: I1013 18:09:40.137456 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/ece01f62-fd6d-4c42-9c9a-3bc25feed3cb-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"ece01f62-fd6d-4c42-9c9a-3bc25feed3cb\") " pod="openstack/tempest-tests-tempest" Oct 13 18:09:40 crc kubenswrapper[4720]: I1013 18:09:40.137924 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ece01f62-fd6d-4c42-9c9a-3bc25feed3cb-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"ece01f62-fd6d-4c42-9c9a-3bc25feed3cb\") " pod="openstack/tempest-tests-tempest" Oct 13 18:09:40 crc kubenswrapper[4720]: I1013 18:09:40.138017 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/ece01f62-fd6d-4c42-9c9a-3bc25feed3cb-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"ece01f62-fd6d-4c42-9c9a-3bc25feed3cb\") " pod="openstack/tempest-tests-tempest" Oct 13 18:09:40 crc kubenswrapper[4720]: I1013 18:09:40.138216 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/ece01f62-fd6d-4c42-9c9a-3bc25feed3cb-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"ece01f62-fd6d-4c42-9c9a-3bc25feed3cb\") " pod="openstack/tempest-tests-tempest" Oct 13 18:09:40 crc kubenswrapper[4720]: I1013 18:09:40.138311 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhfvz\" (UniqueName: \"kubernetes.io/projected/ece01f62-fd6d-4c42-9c9a-3bc25feed3cb-kube-api-access-jhfvz\") pod 
\"tempest-tests-tempest\" (UID: \"ece01f62-fd6d-4c42-9c9a-3bc25feed3cb\") " pod="openstack/tempest-tests-tempest" Oct 13 18:09:40 crc kubenswrapper[4720]: I1013 18:09:40.154584 4720 scope.go:117] "RemoveContainer" containerID="58f09478cd7694cbefb38c49b48d28f8fd07d3cab7cb6ceb5128d17f916d5629" Oct 13 18:09:40 crc kubenswrapper[4720]: I1013 18:09:40.185880 4720 scope.go:117] "RemoveContainer" containerID="8ac9112f708fd3452c746fef2272f30d0537b69b3873284c0fda3395ef0c1ed9" Oct 13 18:09:40 crc kubenswrapper[4720]: I1013 18:09:40.186984 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5tbc6"] Oct 13 18:09:40 crc kubenswrapper[4720]: I1013 18:09:40.193891 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5tbc6"] Oct 13 18:09:40 crc kubenswrapper[4720]: I1013 18:09:40.225069 4720 scope.go:117] "RemoveContainer" containerID="6c61fb83bcfa85ccf4a1ac2d1463aa4a63576acb337ab85db00bee0d5f76c15e" Oct 13 18:09:40 crc kubenswrapper[4720]: E1013 18:09:40.225917 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c61fb83bcfa85ccf4a1ac2d1463aa4a63576acb337ab85db00bee0d5f76c15e\": container with ID starting with 6c61fb83bcfa85ccf4a1ac2d1463aa4a63576acb337ab85db00bee0d5f76c15e not found: ID does not exist" containerID="6c61fb83bcfa85ccf4a1ac2d1463aa4a63576acb337ab85db00bee0d5f76c15e" Oct 13 18:09:40 crc kubenswrapper[4720]: I1013 18:09:40.225961 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c61fb83bcfa85ccf4a1ac2d1463aa4a63576acb337ab85db00bee0d5f76c15e"} err="failed to get container status \"6c61fb83bcfa85ccf4a1ac2d1463aa4a63576acb337ab85db00bee0d5f76c15e\": rpc error: code = NotFound desc = could not find container \"6c61fb83bcfa85ccf4a1ac2d1463aa4a63576acb337ab85db00bee0d5f76c15e\": container with ID starting with 6c61fb83bcfa85ccf4a1ac2d1463aa4a63576acb337ab85db00bee0d5f76c15e not found: ID does not exist" Oct 13 18:09:40 crc kubenswrapper[4720]: I1013 18:09:40.225987 4720 scope.go:117] "RemoveContainer" containerID="58f09478cd7694cbefb38c49b48d28f8fd07d3cab7cb6ceb5128d17f916d5629" Oct 13 18:09:40 crc kubenswrapper[4720]: E1013 18:09:40.226477 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58f09478cd7694cbefb38c49b48d28f8fd07d3cab7cb6ceb5128d17f916d5629\": container with ID starting with 58f09478cd7694cbefb38c49b48d28f8fd07d3cab7cb6ceb5128d17f916d5629 not found: ID does not exist" containerID="58f09478cd7694cbefb38c49b48d28f8fd07d3cab7cb6ceb5128d17f916d5629" Oct 13 18:09:40 crc kubenswrapper[4720]: I1013 18:09:40.226506 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58f09478cd7694cbefb38c49b48d28f8fd07d3cab7cb6ceb5128d17f916d5629"} err="failed to get container status \"58f09478cd7694cbefb38c49b48d28f8fd07d3cab7cb6ceb5128d17f916d5629\": rpc error: code = NotFound desc = could not find container \"58f09478cd7694cbefb38c49b48d28f8fd07d3cab7cb6ceb5128d17f916d5629\": container with ID starting with 58f09478cd7694cbefb38c49b48d28f8fd07d3cab7cb6ceb5128d17f916d5629 not found: ID does not exist" Oct 13 18:09:40 crc kubenswrapper[4720]: I1013 18:09:40.226523 4720 scope.go:117] "RemoveContainer" containerID="8ac9112f708fd3452c746fef2272f30d0537b69b3873284c0fda3395ef0c1ed9" Oct 13 18:09:40 crc kubenswrapper[4720]: E1013 18:09:40.226890 4720 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ac9112f708fd3452c746fef2272f30d0537b69b3873284c0fda3395ef0c1ed9\": container with ID starting with 8ac9112f708fd3452c746fef2272f30d0537b69b3873284c0fda3395ef0c1ed9 not found: ID does not exist" containerID="8ac9112f708fd3452c746fef2272f30d0537b69b3873284c0fda3395ef0c1ed9" Oct 13 18:09:40 crc kubenswrapper[4720]: I1013 18:09:40.226977 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ac9112f708fd3452c746fef2272f30d0537b69b3873284c0fda3395ef0c1ed9"} err="failed to get container status \"8ac9112f708fd3452c746fef2272f30d0537b69b3873284c0fda3395ef0c1ed9\": rpc error: code = NotFound desc = could not find container \"8ac9112f708fd3452c746fef2272f30d0537b69b3873284c0fda3395ef0c1ed9\": container with ID starting with 8ac9112f708fd3452c746fef2272f30d0537b69b3873284c0fda3395ef0c1ed9 not found: ID does not exist" Oct 13 18:09:40 crc kubenswrapper[4720]: I1013 18:09:40.240372 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"tempest-tests-tempest\" (UID: \"ece01f62-fd6d-4c42-9c9a-3bc25feed3cb\") " pod="openstack/tempest-tests-tempest" Oct 13 18:09:40 crc kubenswrapper[4720]: I1013 18:09:40.240446 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ece01f62-fd6d-4c42-9c9a-3bc25feed3cb-config-data\") pod \"tempest-tests-tempest\" (UID: \"ece01f62-fd6d-4c42-9c9a-3bc25feed3cb\") " pod="openstack/tempest-tests-tempest" Oct 13 18:09:40 crc kubenswrapper[4720]: I1013 18:09:40.240488 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ece01f62-fd6d-4c42-9c9a-3bc25feed3cb-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"ece01f62-fd6d-4c42-9c9a-3bc25feed3cb\") " pod="openstack/tempest-tests-tempest" Oct 13 18:09:40 crc kubenswrapper[4720]: I1013 18:09:40.240577 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ece01f62-fd6d-4c42-9c9a-3bc25feed3cb-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"ece01f62-fd6d-4c42-9c9a-3bc25feed3cb\") " pod="openstack/tempest-tests-tempest" Oct 13 18:09:40 crc kubenswrapper[4720]: I1013 18:09:40.240614 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/ece01f62-fd6d-4c42-9c9a-3bc25feed3cb-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"ece01f62-fd6d-4c42-9c9a-3bc25feed3cb\") " pod="openstack/tempest-tests-tempest" Oct 13 18:09:40 crc kubenswrapper[4720]: I1013 18:09:40.240818 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ece01f62-fd6d-4c42-9c9a-3bc25feed3cb-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"ece01f62-fd6d-4c42-9c9a-3bc25feed3cb\") " pod="openstack/tempest-tests-tempest" Oct 13 18:09:40 crc kubenswrapper[4720]: I1013 18:09:40.240870 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/ece01f62-fd6d-4c42-9c9a-3bc25feed3cb-test-operator-ephemeral-workdir\") pod 
\"tempest-tests-tempest\" (UID: \"ece01f62-fd6d-4c42-9c9a-3bc25feed3cb\") " pod="openstack/tempest-tests-tempest" Oct 13 18:09:40 crc kubenswrapper[4720]: I1013 18:09:40.240990 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/ece01f62-fd6d-4c42-9c9a-3bc25feed3cb-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"ece01f62-fd6d-4c42-9c9a-3bc25feed3cb\") " pod="openstack/tempest-tests-tempest" Oct 13 18:09:40 crc kubenswrapper[4720]: I1013 18:09:40.241049 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhfvz\" (UniqueName: \"kubernetes.io/projected/ece01f62-fd6d-4c42-9c9a-3bc25feed3cb-kube-api-access-jhfvz\") pod \"tempest-tests-tempest\" (UID: \"ece01f62-fd6d-4c42-9c9a-3bc25feed3cb\") " pod="openstack/tempest-tests-tempest" Oct 13 18:09:40 crc kubenswrapper[4720]: I1013 18:09:40.241660 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/ece01f62-fd6d-4c42-9c9a-3bc25feed3cb-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"ece01f62-fd6d-4c42-9c9a-3bc25feed3cb\") " pod="openstack/tempest-tests-tempest" Oct 13 18:09:40 crc kubenswrapper[4720]: I1013 18:09:40.241737 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/ece01f62-fd6d-4c42-9c9a-3bc25feed3cb-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"ece01f62-fd6d-4c42-9c9a-3bc25feed3cb\") " pod="openstack/tempest-tests-tempest" Oct 13 18:09:40 crc kubenswrapper[4720]: I1013 18:09:40.241783 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ece01f62-fd6d-4c42-9c9a-3bc25feed3cb-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"ece01f62-fd6d-4c42-9c9a-3bc25feed3cb\") " pod="openstack/tempest-tests-tempest" Oct 13 18:09:40 crc kubenswrapper[4720]: I1013 18:09:40.242059 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ece01f62-fd6d-4c42-9c9a-3bc25feed3cb-config-data\") pod \"tempest-tests-tempest\" (UID: \"ece01f62-fd6d-4c42-9c9a-3bc25feed3cb\") " pod="openstack/tempest-tests-tempest" Oct 13 18:09:40 crc kubenswrapper[4720]: I1013 18:09:40.242489 4720 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"tempest-tests-tempest\" (UID: \"ece01f62-fd6d-4c42-9c9a-3bc25feed3cb\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/tempest-tests-tempest" Oct 13 18:09:40 crc kubenswrapper[4720]: I1013 18:09:40.247593 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ece01f62-fd6d-4c42-9c9a-3bc25feed3cb-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"ece01f62-fd6d-4c42-9c9a-3bc25feed3cb\") " pod="openstack/tempest-tests-tempest" Oct 13 18:09:40 crc kubenswrapper[4720]: I1013 18:09:40.248479 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ece01f62-fd6d-4c42-9c9a-3bc25feed3cb-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"ece01f62-fd6d-4c42-9c9a-3bc25feed3cb\") " pod="openstack/tempest-tests-tempest" Oct 13 18:09:40 crc 
kubenswrapper[4720]: I1013 18:09:40.249021 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/ece01f62-fd6d-4c42-9c9a-3bc25feed3cb-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"ece01f62-fd6d-4c42-9c9a-3bc25feed3cb\") " pod="openstack/tempest-tests-tempest" Oct 13 18:09:40 crc kubenswrapper[4720]: I1013 18:09:40.269293 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhfvz\" (UniqueName: \"kubernetes.io/projected/ece01f62-fd6d-4c42-9c9a-3bc25feed3cb-kube-api-access-jhfvz\") pod \"tempest-tests-tempest\" (UID: \"ece01f62-fd6d-4c42-9c9a-3bc25feed3cb\") " pod="openstack/tempest-tests-tempest" Oct 13 18:09:40 crc kubenswrapper[4720]: I1013 18:09:40.283491 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"tempest-tests-tempest\" (UID: \"ece01f62-fd6d-4c42-9c9a-3bc25feed3cb\") " pod="openstack/tempest-tests-tempest" Oct 13 18:09:40 crc kubenswrapper[4720]: I1013 18:09:40.383841 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Oct 13 18:09:40 crc kubenswrapper[4720]: I1013 18:09:40.900716 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Oct 13 18:09:41 crc kubenswrapper[4720]: I1013 18:09:41.144785 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"ece01f62-fd6d-4c42-9c9a-3bc25feed3cb","Type":"ContainerStarted","Data":"132075cdd19a221aa1736d168c3ff9eb35efd95473170c6892978258e1216b55"} Oct 13 18:09:41 crc kubenswrapper[4720]: I1013 18:09:41.185342 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24129184-4fa9-4b35-8fc9-f2b19ae96dcc" path="/var/lib/kubelet/pods/24129184-4fa9-4b35-8fc9-f2b19ae96dcc/volumes" Oct 13 18:10:09 crc kubenswrapper[4720]: E1013 18:10:09.470988 4720 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Oct 13 18:10:09 crc kubenswrapper[4720]: E1013 18:10:09.473613 4720 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jhfvz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(ece01f62-fd6d-4c42-9c9a-3bc25feed3cb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 13 18:10:09 crc kubenswrapper[4720]: E1013 18:10:09.475331 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" 
podUID="ece01f62-fd6d-4c42-9c9a-3bc25feed3cb" Oct 13 18:10:10 crc kubenswrapper[4720]: E1013 18:10:10.434423 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="ece01f62-fd6d-4c42-9c9a-3bc25feed3cb" Oct 13 18:10:24 crc kubenswrapper[4720]: I1013 18:10:24.717296 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Oct 13 18:10:26 crc kubenswrapper[4720]: I1013 18:10:26.621556 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"ece01f62-fd6d-4c42-9c9a-3bc25feed3cb","Type":"ContainerStarted","Data":"df57570d0bcf4bd175220efc3a65904a5ad2d5d37896900f423d316850240a6a"} Oct 13 18:10:26 crc kubenswrapper[4720]: I1013 18:10:26.643789 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=3.849693142 podStartE2EDuration="47.643768649s" podCreationTimestamp="2025-10-13 18:09:39 +0000 UTC" firstStartedPulling="2025-10-13 18:09:40.919500534 +0000 UTC m=+2726.376750686" lastFinishedPulling="2025-10-13 18:10:24.713576051 +0000 UTC m=+2770.170826193" observedRunningTime="2025-10-13 18:10:26.642055515 +0000 UTC m=+2772.099305657" watchObservedRunningTime="2025-10-13 18:10:26.643768649 +0000 UTC m=+2772.101018791" Oct 13 18:10:45 crc kubenswrapper[4720]: I1013 18:10:45.212631 4720 patch_prober.go:28] interesting pod/machine-config-daemon-htwnl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 18:10:45 crc kubenswrapper[4720]: I1013 18:10:45.213265 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 18:11:15 crc kubenswrapper[4720]: I1013 18:11:15.212860 4720 patch_prober.go:28] interesting pod/machine-config-daemon-htwnl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 18:11:15 crc kubenswrapper[4720]: I1013 18:11:15.213646 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 18:11:45 crc kubenswrapper[4720]: I1013 18:11:45.212385 4720 patch_prober.go:28] interesting pod/machine-config-daemon-htwnl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 18:11:45 crc kubenswrapper[4720]: I1013 18:11:45.212833 4720 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 18:11:45 crc kubenswrapper[4720]: I1013 18:11:45.212874 4720 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" Oct 13 18:11:45 crc kubenswrapper[4720]: I1013 18:11:45.213327 4720 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8955f01d60828c6a158946e7dd0ce8b2360fc11ddc5d461465ae7048137b38b7"} pod="openshift-machine-config-operator/machine-config-daemon-htwnl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 13 18:11:45 crc kubenswrapper[4720]: I1013 18:11:45.213379 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" containerName="machine-config-daemon" containerID="cri-o://8955f01d60828c6a158946e7dd0ce8b2360fc11ddc5d461465ae7048137b38b7" gracePeriod=600 Oct 13 18:11:45 crc kubenswrapper[4720]: E1013 18:11:45.338897 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" Oct 13 18:11:45 crc kubenswrapper[4720]: I1013 18:11:45.476113 4720 generic.go:334] "Generic (PLEG): container finished" podID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" containerID="8955f01d60828c6a158946e7dd0ce8b2360fc11ddc5d461465ae7048137b38b7" exitCode=0 Oct 13 18:11:45 crc kubenswrapper[4720]: I1013 18:11:45.476177 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" event={"ID":"ce442c80-fcde-4b79-b6f9-f8f25771dfd4","Type":"ContainerDied","Data":"8955f01d60828c6a158946e7dd0ce8b2360fc11ddc5d461465ae7048137b38b7"} Oct 13 18:11:45 crc kubenswrapper[4720]: I1013 18:11:45.477723 4720 scope.go:117] "RemoveContainer" containerID="8955f01d60828c6a158946e7dd0ce8b2360fc11ddc5d461465ae7048137b38b7" Oct 13 18:11:45 crc kubenswrapper[4720]: E1013 18:11:45.478310 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" Oct 13 18:11:45 crc kubenswrapper[4720]: I1013 18:11:45.478354 4720 scope.go:117] "RemoveContainer" containerID="d60282f0b598e1436e2840341bf3d33836603e66bded554693a91d04186c59fa" Oct 13 18:11:52 crc kubenswrapper[4720]: I1013 18:11:52.963913 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-694bd85589-jdgbb" podUID="d51a0725-9566-428f-a34b-3b0345774d1f" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" 
Oct 13 18:12:01 crc kubenswrapper[4720]: I1013 18:12:01.170051 4720 scope.go:117] "RemoveContainer" containerID="8955f01d60828c6a158946e7dd0ce8b2360fc11ddc5d461465ae7048137b38b7" Oct 13 18:12:01 crc kubenswrapper[4720]: E1013 18:12:01.171863 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" Oct 13 18:12:16 crc kubenswrapper[4720]: I1013 18:12:16.169244 4720 scope.go:117] "RemoveContainer" containerID="8955f01d60828c6a158946e7dd0ce8b2360fc11ddc5d461465ae7048137b38b7" Oct 13 18:12:16 crc kubenswrapper[4720]: E1013 18:12:16.170363 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" Oct 13 18:12:29 crc kubenswrapper[4720]: I1013 18:12:29.169431 4720 scope.go:117] "RemoveContainer" containerID="8955f01d60828c6a158946e7dd0ce8b2360fc11ddc5d461465ae7048137b38b7" Oct 13 18:12:29 crc kubenswrapper[4720]: E1013 18:12:29.170754 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" Oct 13 18:12:44 crc kubenswrapper[4720]: I1013 18:12:44.168835 4720 scope.go:117] "RemoveContainer" containerID="8955f01d60828c6a158946e7dd0ce8b2360fc11ddc5d461465ae7048137b38b7" Oct 13 18:12:44 crc kubenswrapper[4720]: E1013 18:12:44.169812 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" Oct 13 18:12:55 crc kubenswrapper[4720]: I1013 18:12:55.182428 4720 scope.go:117] "RemoveContainer" containerID="8955f01d60828c6a158946e7dd0ce8b2360fc11ddc5d461465ae7048137b38b7" Oct 13 18:12:55 crc kubenswrapper[4720]: E1013 18:12:55.183338 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" Oct 13 18:12:59 crc kubenswrapper[4720]: I1013 18:12:59.251053 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dqchz"] Oct 13 18:12:59 crc 
kubenswrapper[4720]: I1013 18:12:59.254356 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dqchz" Oct 13 18:12:59 crc kubenswrapper[4720]: I1013 18:12:59.270738 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dqchz"] Oct 13 18:12:59 crc kubenswrapper[4720]: I1013 18:12:59.379146 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fc5adb4-d8d0-455b-a8ae-a0bf945a48e2-utilities\") pod \"certified-operators-dqchz\" (UID: \"2fc5adb4-d8d0-455b-a8ae-a0bf945a48e2\") " pod="openshift-marketplace/certified-operators-dqchz" Oct 13 18:12:59 crc kubenswrapper[4720]: I1013 18:12:59.379486 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fc5adb4-d8d0-455b-a8ae-a0bf945a48e2-catalog-content\") pod \"certified-operators-dqchz\" (UID: \"2fc5adb4-d8d0-455b-a8ae-a0bf945a48e2\") " pod="openshift-marketplace/certified-operators-dqchz" Oct 13 18:12:59 crc kubenswrapper[4720]: I1013 18:12:59.379720 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5sdk\" (UniqueName: \"kubernetes.io/projected/2fc5adb4-d8d0-455b-a8ae-a0bf945a48e2-kube-api-access-c5sdk\") pod \"certified-operators-dqchz\" (UID: \"2fc5adb4-d8d0-455b-a8ae-a0bf945a48e2\") " pod="openshift-marketplace/certified-operators-dqchz" Oct 13 18:12:59 crc kubenswrapper[4720]: I1013 18:12:59.481732 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5sdk\" (UniqueName: \"kubernetes.io/projected/2fc5adb4-d8d0-455b-a8ae-a0bf945a48e2-kube-api-access-c5sdk\") pod \"certified-operators-dqchz\" (UID: \"2fc5adb4-d8d0-455b-a8ae-a0bf945a48e2\") " pod="openshift-marketplace/certified-operators-dqchz" Oct 13 18:12:59 crc kubenswrapper[4720]: I1013 18:12:59.481885 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fc5adb4-d8d0-455b-a8ae-a0bf945a48e2-utilities\") pod \"certified-operators-dqchz\" (UID: \"2fc5adb4-d8d0-455b-a8ae-a0bf945a48e2\") " pod="openshift-marketplace/certified-operators-dqchz" Oct 13 18:12:59 crc kubenswrapper[4720]: I1013 18:12:59.482039 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fc5adb4-d8d0-455b-a8ae-a0bf945a48e2-catalog-content\") pod \"certified-operators-dqchz\" (UID: \"2fc5adb4-d8d0-455b-a8ae-a0bf945a48e2\") " pod="openshift-marketplace/certified-operators-dqchz" Oct 13 18:12:59 crc kubenswrapper[4720]: I1013 18:12:59.482719 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fc5adb4-d8d0-455b-a8ae-a0bf945a48e2-catalog-content\") pod \"certified-operators-dqchz\" (UID: \"2fc5adb4-d8d0-455b-a8ae-a0bf945a48e2\") " pod="openshift-marketplace/certified-operators-dqchz" Oct 13 18:12:59 crc kubenswrapper[4720]: I1013 18:12:59.483074 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fc5adb4-d8d0-455b-a8ae-a0bf945a48e2-utilities\") pod \"certified-operators-dqchz\" (UID: \"2fc5adb4-d8d0-455b-a8ae-a0bf945a48e2\") " pod="openshift-marketplace/certified-operators-dqchz" Oct 
13 18:12:59 crc kubenswrapper[4720]: I1013 18:12:59.507435 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5sdk\" (UniqueName: \"kubernetes.io/projected/2fc5adb4-d8d0-455b-a8ae-a0bf945a48e2-kube-api-access-c5sdk\") pod \"certified-operators-dqchz\" (UID: \"2fc5adb4-d8d0-455b-a8ae-a0bf945a48e2\") " pod="openshift-marketplace/certified-operators-dqchz" Oct 13 18:12:59 crc kubenswrapper[4720]: I1013 18:12:59.602393 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dqchz" Oct 13 18:13:00 crc kubenswrapper[4720]: I1013 18:13:00.065528 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dqchz"] Oct 13 18:13:00 crc kubenswrapper[4720]: I1013 18:13:00.314566 4720 generic.go:334] "Generic (PLEG): container finished" podID="2fc5adb4-d8d0-455b-a8ae-a0bf945a48e2" containerID="dd642ad69d3b58b50d2820ca803ad33f0b2cf93830d91ca9f0b262c2f44bfc35" exitCode=0 Oct 13 18:13:00 crc kubenswrapper[4720]: I1013 18:13:00.314612 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dqchz" event={"ID":"2fc5adb4-d8d0-455b-a8ae-a0bf945a48e2","Type":"ContainerDied","Data":"dd642ad69d3b58b50d2820ca803ad33f0b2cf93830d91ca9f0b262c2f44bfc35"} Oct 13 18:13:00 crc kubenswrapper[4720]: I1013 18:13:00.314659 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dqchz" event={"ID":"2fc5adb4-d8d0-455b-a8ae-a0bf945a48e2","Type":"ContainerStarted","Data":"832bd677ecee43992693d923075b9d60d770ad687481fb31080cbb6b9ee16145"} Oct 13 18:13:01 crc kubenswrapper[4720]: I1013 18:13:01.326454 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dqchz" event={"ID":"2fc5adb4-d8d0-455b-a8ae-a0bf945a48e2","Type":"ContainerStarted","Data":"d7d2c579abe72b65b78944d7d6f2862572e90bbd8e2fcd0b783da79b14570cc4"} Oct 13 18:13:03 crc kubenswrapper[4720]: I1013 18:13:03.358112 4720 generic.go:334] "Generic (PLEG): container finished" podID="2fc5adb4-d8d0-455b-a8ae-a0bf945a48e2" containerID="d7d2c579abe72b65b78944d7d6f2862572e90bbd8e2fcd0b783da79b14570cc4" exitCode=0 Oct 13 18:13:03 crc kubenswrapper[4720]: I1013 18:13:03.358161 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dqchz" event={"ID":"2fc5adb4-d8d0-455b-a8ae-a0bf945a48e2","Type":"ContainerDied","Data":"d7d2c579abe72b65b78944d7d6f2862572e90bbd8e2fcd0b783da79b14570cc4"} Oct 13 18:13:04 crc kubenswrapper[4720]: I1013 18:13:04.371502 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dqchz" event={"ID":"2fc5adb4-d8d0-455b-a8ae-a0bf945a48e2","Type":"ContainerStarted","Data":"a47716bbde1ba83b99934b5b8e5be01dab0da3edd4a8a5454839bb8e65342fa4"} Oct 13 18:13:04 crc kubenswrapper[4720]: I1013 18:13:04.392268 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dqchz" podStartSLOduration=1.989739583 podStartE2EDuration="5.392179459s" podCreationTimestamp="2025-10-13 18:12:59 +0000 UTC" firstStartedPulling="2025-10-13 18:13:00.316365168 +0000 UTC m=+2925.773615300" lastFinishedPulling="2025-10-13 18:13:03.718805044 +0000 UTC m=+2929.176055176" observedRunningTime="2025-10-13 18:13:04.388854213 +0000 UTC m=+2929.846104355" watchObservedRunningTime="2025-10-13 18:13:04.392179459 +0000 UTC m=+2929.849429631" Oct 13 18:13:09 crc 
kubenswrapper[4720]: I1013 18:13:09.602973 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dqchz" Oct 13 18:13:09 crc kubenswrapper[4720]: I1013 18:13:09.604080 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dqchz" Oct 13 18:13:09 crc kubenswrapper[4720]: I1013 18:13:09.670706 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dqchz" Oct 13 18:13:10 crc kubenswrapper[4720]: I1013 18:13:10.167971 4720 scope.go:117] "RemoveContainer" containerID="8955f01d60828c6a158946e7dd0ce8b2360fc11ddc5d461465ae7048137b38b7" Oct 13 18:13:10 crc kubenswrapper[4720]: E1013 18:13:10.168954 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" Oct 13 18:13:10 crc kubenswrapper[4720]: I1013 18:13:10.465158 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dqchz" Oct 13 18:13:10 crc kubenswrapper[4720]: I1013 18:13:10.510233 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dqchz"] Oct 13 18:13:12 crc kubenswrapper[4720]: I1013 18:13:12.446622 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dqchz" podUID="2fc5adb4-d8d0-455b-a8ae-a0bf945a48e2" containerName="registry-server" containerID="cri-o://a47716bbde1ba83b99934b5b8e5be01dab0da3edd4a8a5454839bb8e65342fa4" gracePeriod=2 Oct 13 18:13:13 crc kubenswrapper[4720]: I1013 18:13:13.025060 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dqchz" Oct 13 18:13:13 crc kubenswrapper[4720]: I1013 18:13:13.079743 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c5sdk\" (UniqueName: \"kubernetes.io/projected/2fc5adb4-d8d0-455b-a8ae-a0bf945a48e2-kube-api-access-c5sdk\") pod \"2fc5adb4-d8d0-455b-a8ae-a0bf945a48e2\" (UID: \"2fc5adb4-d8d0-455b-a8ae-a0bf945a48e2\") " Oct 13 18:13:13 crc kubenswrapper[4720]: I1013 18:13:13.079845 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fc5adb4-d8d0-455b-a8ae-a0bf945a48e2-catalog-content\") pod \"2fc5adb4-d8d0-455b-a8ae-a0bf945a48e2\" (UID: \"2fc5adb4-d8d0-455b-a8ae-a0bf945a48e2\") " Oct 13 18:13:13 crc kubenswrapper[4720]: I1013 18:13:13.079956 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fc5adb4-d8d0-455b-a8ae-a0bf945a48e2-utilities\") pod \"2fc5adb4-d8d0-455b-a8ae-a0bf945a48e2\" (UID: \"2fc5adb4-d8d0-455b-a8ae-a0bf945a48e2\") " Oct 13 18:13:13 crc kubenswrapper[4720]: I1013 18:13:13.081074 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2fc5adb4-d8d0-455b-a8ae-a0bf945a48e2-utilities" (OuterVolumeSpecName: "utilities") pod "2fc5adb4-d8d0-455b-a8ae-a0bf945a48e2" (UID: "2fc5adb4-d8d0-455b-a8ae-a0bf945a48e2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 18:13:13 crc kubenswrapper[4720]: I1013 18:13:13.087634 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fc5adb4-d8d0-455b-a8ae-a0bf945a48e2-kube-api-access-c5sdk" (OuterVolumeSpecName: "kube-api-access-c5sdk") pod "2fc5adb4-d8d0-455b-a8ae-a0bf945a48e2" (UID: "2fc5adb4-d8d0-455b-a8ae-a0bf945a48e2"). InnerVolumeSpecName "kube-api-access-c5sdk". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 18:13:13 crc kubenswrapper[4720]: I1013 18:13:13.134697 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2fc5adb4-d8d0-455b-a8ae-a0bf945a48e2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2fc5adb4-d8d0-455b-a8ae-a0bf945a48e2" (UID: "2fc5adb4-d8d0-455b-a8ae-a0bf945a48e2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 18:13:13 crc kubenswrapper[4720]: I1013 18:13:13.181575 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fc5adb4-d8d0-455b-a8ae-a0bf945a48e2-utilities\") on node \"crc\" DevicePath \"\"" Oct 13 18:13:13 crc kubenswrapper[4720]: I1013 18:13:13.181601 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c5sdk\" (UniqueName: \"kubernetes.io/projected/2fc5adb4-d8d0-455b-a8ae-a0bf945a48e2-kube-api-access-c5sdk\") on node \"crc\" DevicePath \"\"" Oct 13 18:13:13 crc kubenswrapper[4720]: I1013 18:13:13.181732 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fc5adb4-d8d0-455b-a8ae-a0bf945a48e2-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 13 18:13:13 crc kubenswrapper[4720]: I1013 18:13:13.458713 4720 generic.go:334] "Generic (PLEG): container finished" podID="2fc5adb4-d8d0-455b-a8ae-a0bf945a48e2" containerID="a47716bbde1ba83b99934b5b8e5be01dab0da3edd4a8a5454839bb8e65342fa4" exitCode=0 Oct 13 18:13:13 crc kubenswrapper[4720]: I1013 18:13:13.458777 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dqchz" event={"ID":"2fc5adb4-d8d0-455b-a8ae-a0bf945a48e2","Type":"ContainerDied","Data":"a47716bbde1ba83b99934b5b8e5be01dab0da3edd4a8a5454839bb8e65342fa4"} Oct 13 18:13:13 crc kubenswrapper[4720]: I1013 18:13:13.458792 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dqchz" Oct 13 18:13:13 crc kubenswrapper[4720]: I1013 18:13:13.458844 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dqchz" event={"ID":"2fc5adb4-d8d0-455b-a8ae-a0bf945a48e2","Type":"ContainerDied","Data":"832bd677ecee43992693d923075b9d60d770ad687481fb31080cbb6b9ee16145"} Oct 13 18:13:13 crc kubenswrapper[4720]: I1013 18:13:13.458873 4720 scope.go:117] "RemoveContainer" containerID="a47716bbde1ba83b99934b5b8e5be01dab0da3edd4a8a5454839bb8e65342fa4" Oct 13 18:13:13 crc kubenswrapper[4720]: I1013 18:13:13.496906 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dqchz"] Oct 13 18:13:13 crc kubenswrapper[4720]: I1013 18:13:13.499930 4720 scope.go:117] "RemoveContainer" containerID="d7d2c579abe72b65b78944d7d6f2862572e90bbd8e2fcd0b783da79b14570cc4" Oct 13 18:13:13 crc kubenswrapper[4720]: I1013 18:13:13.518084 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dqchz"] Oct 13 18:13:13 crc kubenswrapper[4720]: I1013 18:13:13.529584 4720 scope.go:117] "RemoveContainer" containerID="dd642ad69d3b58b50d2820ca803ad33f0b2cf93830d91ca9f0b262c2f44bfc35" Oct 13 18:13:13 crc kubenswrapper[4720]: I1013 18:13:13.583407 4720 scope.go:117] "RemoveContainer" containerID="a47716bbde1ba83b99934b5b8e5be01dab0da3edd4a8a5454839bb8e65342fa4" Oct 13 18:13:13 crc kubenswrapper[4720]: E1013 18:13:13.583766 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a47716bbde1ba83b99934b5b8e5be01dab0da3edd4a8a5454839bb8e65342fa4\": container with ID starting with a47716bbde1ba83b99934b5b8e5be01dab0da3edd4a8a5454839bb8e65342fa4 not found: ID does not exist" containerID="a47716bbde1ba83b99934b5b8e5be01dab0da3edd4a8a5454839bb8e65342fa4" Oct 13 18:13:13 crc kubenswrapper[4720]: I1013 18:13:13.583803 
4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a47716bbde1ba83b99934b5b8e5be01dab0da3edd4a8a5454839bb8e65342fa4"} err="failed to get container status \"a47716bbde1ba83b99934b5b8e5be01dab0da3edd4a8a5454839bb8e65342fa4\": rpc error: code = NotFound desc = could not find container \"a47716bbde1ba83b99934b5b8e5be01dab0da3edd4a8a5454839bb8e65342fa4\": container with ID starting with a47716bbde1ba83b99934b5b8e5be01dab0da3edd4a8a5454839bb8e65342fa4 not found: ID does not exist" Oct 13 18:13:13 crc kubenswrapper[4720]: I1013 18:13:13.583855 4720 scope.go:117] "RemoveContainer" containerID="d7d2c579abe72b65b78944d7d6f2862572e90bbd8e2fcd0b783da79b14570cc4" Oct 13 18:13:13 crc kubenswrapper[4720]: E1013 18:13:13.584154 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7d2c579abe72b65b78944d7d6f2862572e90bbd8e2fcd0b783da79b14570cc4\": container with ID starting with d7d2c579abe72b65b78944d7d6f2862572e90bbd8e2fcd0b783da79b14570cc4 not found: ID does not exist" containerID="d7d2c579abe72b65b78944d7d6f2862572e90bbd8e2fcd0b783da79b14570cc4" Oct 13 18:13:13 crc kubenswrapper[4720]: I1013 18:13:13.584207 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7d2c579abe72b65b78944d7d6f2862572e90bbd8e2fcd0b783da79b14570cc4"} err="failed to get container status \"d7d2c579abe72b65b78944d7d6f2862572e90bbd8e2fcd0b783da79b14570cc4\": rpc error: code = NotFound desc = could not find container \"d7d2c579abe72b65b78944d7d6f2862572e90bbd8e2fcd0b783da79b14570cc4\": container with ID starting with d7d2c579abe72b65b78944d7d6f2862572e90bbd8e2fcd0b783da79b14570cc4 not found: ID does not exist" Oct 13 18:13:13 crc kubenswrapper[4720]: I1013 18:13:13.584234 4720 scope.go:117] "RemoveContainer" containerID="dd642ad69d3b58b50d2820ca803ad33f0b2cf93830d91ca9f0b262c2f44bfc35" Oct 13 18:13:13 crc kubenswrapper[4720]: E1013 18:13:13.584529 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd642ad69d3b58b50d2820ca803ad33f0b2cf93830d91ca9f0b262c2f44bfc35\": container with ID starting with dd642ad69d3b58b50d2820ca803ad33f0b2cf93830d91ca9f0b262c2f44bfc35 not found: ID does not exist" containerID="dd642ad69d3b58b50d2820ca803ad33f0b2cf93830d91ca9f0b262c2f44bfc35" Oct 13 18:13:13 crc kubenswrapper[4720]: I1013 18:13:13.584569 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd642ad69d3b58b50d2820ca803ad33f0b2cf93830d91ca9f0b262c2f44bfc35"} err="failed to get container status \"dd642ad69d3b58b50d2820ca803ad33f0b2cf93830d91ca9f0b262c2f44bfc35\": rpc error: code = NotFound desc = could not find container \"dd642ad69d3b58b50d2820ca803ad33f0b2cf93830d91ca9f0b262c2f44bfc35\": container with ID starting with dd642ad69d3b58b50d2820ca803ad33f0b2cf93830d91ca9f0b262c2f44bfc35 not found: ID does not exist" Oct 13 18:13:15 crc kubenswrapper[4720]: I1013 18:13:15.179041 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2fc5adb4-d8d0-455b-a8ae-a0bf945a48e2" path="/var/lib/kubelet/pods/2fc5adb4-d8d0-455b-a8ae-a0bf945a48e2/volumes" Oct 13 18:13:22 crc kubenswrapper[4720]: I1013 18:13:22.168383 4720 scope.go:117] "RemoveContainer" containerID="8955f01d60828c6a158946e7dd0ce8b2360fc11ddc5d461465ae7048137b38b7" Oct 13 18:13:22 crc kubenswrapper[4720]: E1013 18:13:22.169098 4720 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" Oct 13 18:13:34 crc kubenswrapper[4720]: I1013 18:13:34.168427 4720 scope.go:117] "RemoveContainer" containerID="8955f01d60828c6a158946e7dd0ce8b2360fc11ddc5d461465ae7048137b38b7" Oct 13 18:13:34 crc kubenswrapper[4720]: E1013 18:13:34.169275 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" Oct 13 18:13:47 crc kubenswrapper[4720]: I1013 18:13:47.169024 4720 scope.go:117] "RemoveContainer" containerID="8955f01d60828c6a158946e7dd0ce8b2360fc11ddc5d461465ae7048137b38b7" Oct 13 18:13:47 crc kubenswrapper[4720]: E1013 18:13:47.169944 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" Oct 13 18:14:02 crc kubenswrapper[4720]: I1013 18:14:02.170124 4720 scope.go:117] "RemoveContainer" containerID="8955f01d60828c6a158946e7dd0ce8b2360fc11ddc5d461465ae7048137b38b7" Oct 13 18:14:02 crc kubenswrapper[4720]: E1013 18:14:02.171061 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" Oct 13 18:14:06 crc kubenswrapper[4720]: I1013 18:14:06.149561 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ggdrl"] Oct 13 18:14:06 crc kubenswrapper[4720]: E1013 18:14:06.150529 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fc5adb4-d8d0-455b-a8ae-a0bf945a48e2" containerName="extract-content" Oct 13 18:14:06 crc kubenswrapper[4720]: I1013 18:14:06.150545 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fc5adb4-d8d0-455b-a8ae-a0bf945a48e2" containerName="extract-content" Oct 13 18:14:06 crc kubenswrapper[4720]: E1013 18:14:06.150579 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fc5adb4-d8d0-455b-a8ae-a0bf945a48e2" containerName="registry-server" Oct 13 18:14:06 crc kubenswrapper[4720]: I1013 18:14:06.150588 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fc5adb4-d8d0-455b-a8ae-a0bf945a48e2" containerName="registry-server" Oct 13 18:14:06 crc kubenswrapper[4720]: E1013 18:14:06.150607 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fc5adb4-d8d0-455b-a8ae-a0bf945a48e2" 
containerName="extract-utilities" Oct 13 18:14:06 crc kubenswrapper[4720]: I1013 18:14:06.150615 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fc5adb4-d8d0-455b-a8ae-a0bf945a48e2" containerName="extract-utilities" Oct 13 18:14:06 crc kubenswrapper[4720]: I1013 18:14:06.150864 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fc5adb4-d8d0-455b-a8ae-a0bf945a48e2" containerName="registry-server" Oct 13 18:14:06 crc kubenswrapper[4720]: I1013 18:14:06.152548 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ggdrl" Oct 13 18:14:06 crc kubenswrapper[4720]: I1013 18:14:06.158154 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ggdrl"] Oct 13 18:14:06 crc kubenswrapper[4720]: I1013 18:14:06.320931 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca2713c1-8a07-45bc-be61-c76c07d3f6b9-catalog-content\") pod \"redhat-operators-ggdrl\" (UID: \"ca2713c1-8a07-45bc-be61-c76c07d3f6b9\") " pod="openshift-marketplace/redhat-operators-ggdrl" Oct 13 18:14:06 crc kubenswrapper[4720]: I1013 18:14:06.321251 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca2713c1-8a07-45bc-be61-c76c07d3f6b9-utilities\") pod \"redhat-operators-ggdrl\" (UID: \"ca2713c1-8a07-45bc-be61-c76c07d3f6b9\") " pod="openshift-marketplace/redhat-operators-ggdrl" Oct 13 18:14:06 crc kubenswrapper[4720]: I1013 18:14:06.321499 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7svz\" (UniqueName: \"kubernetes.io/projected/ca2713c1-8a07-45bc-be61-c76c07d3f6b9-kube-api-access-x7svz\") pod \"redhat-operators-ggdrl\" (UID: \"ca2713c1-8a07-45bc-be61-c76c07d3f6b9\") " pod="openshift-marketplace/redhat-operators-ggdrl" Oct 13 18:14:06 crc kubenswrapper[4720]: I1013 18:14:06.422965 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7svz\" (UniqueName: \"kubernetes.io/projected/ca2713c1-8a07-45bc-be61-c76c07d3f6b9-kube-api-access-x7svz\") pod \"redhat-operators-ggdrl\" (UID: \"ca2713c1-8a07-45bc-be61-c76c07d3f6b9\") " pod="openshift-marketplace/redhat-operators-ggdrl" Oct 13 18:14:06 crc kubenswrapper[4720]: I1013 18:14:06.423417 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca2713c1-8a07-45bc-be61-c76c07d3f6b9-catalog-content\") pod \"redhat-operators-ggdrl\" (UID: \"ca2713c1-8a07-45bc-be61-c76c07d3f6b9\") " pod="openshift-marketplace/redhat-operators-ggdrl" Oct 13 18:14:06 crc kubenswrapper[4720]: I1013 18:14:06.423631 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca2713c1-8a07-45bc-be61-c76c07d3f6b9-utilities\") pod \"redhat-operators-ggdrl\" (UID: \"ca2713c1-8a07-45bc-be61-c76c07d3f6b9\") " pod="openshift-marketplace/redhat-operators-ggdrl" Oct 13 18:14:06 crc kubenswrapper[4720]: I1013 18:14:06.424509 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca2713c1-8a07-45bc-be61-c76c07d3f6b9-utilities\") pod \"redhat-operators-ggdrl\" (UID: \"ca2713c1-8a07-45bc-be61-c76c07d3f6b9\") " 
pod="openshift-marketplace/redhat-operators-ggdrl" Oct 13 18:14:06 crc kubenswrapper[4720]: I1013 18:14:06.424582 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca2713c1-8a07-45bc-be61-c76c07d3f6b9-catalog-content\") pod \"redhat-operators-ggdrl\" (UID: \"ca2713c1-8a07-45bc-be61-c76c07d3f6b9\") " pod="openshift-marketplace/redhat-operators-ggdrl" Oct 13 18:14:06 crc kubenswrapper[4720]: I1013 18:14:06.451168 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7svz\" (UniqueName: \"kubernetes.io/projected/ca2713c1-8a07-45bc-be61-c76c07d3f6b9-kube-api-access-x7svz\") pod \"redhat-operators-ggdrl\" (UID: \"ca2713c1-8a07-45bc-be61-c76c07d3f6b9\") " pod="openshift-marketplace/redhat-operators-ggdrl" Oct 13 18:14:06 crc kubenswrapper[4720]: I1013 18:14:06.486351 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ggdrl" Oct 13 18:14:06 crc kubenswrapper[4720]: I1013 18:14:06.967478 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ggdrl"] Oct 13 18:14:06 crc kubenswrapper[4720]: W1013 18:14:06.977981 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podca2713c1_8a07_45bc_be61_c76c07d3f6b9.slice/crio-1df0a959de8e85eb985e8828b63abb263734c3467d20eb871490d772b81a870b WatchSource:0}: Error finding container 1df0a959de8e85eb985e8828b63abb263734c3467d20eb871490d772b81a870b: Status 404 returned error can't find the container with id 1df0a959de8e85eb985e8828b63abb263734c3467d20eb871490d772b81a870b Oct 13 18:14:07 crc kubenswrapper[4720]: I1013 18:14:07.020362 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ggdrl" event={"ID":"ca2713c1-8a07-45bc-be61-c76c07d3f6b9","Type":"ContainerStarted","Data":"1df0a959de8e85eb985e8828b63abb263734c3467d20eb871490d772b81a870b"} Oct 13 18:14:08 crc kubenswrapper[4720]: I1013 18:14:08.037446 4720 generic.go:334] "Generic (PLEG): container finished" podID="ca2713c1-8a07-45bc-be61-c76c07d3f6b9" containerID="a35938b20b2ded96a82694fe1fdf4f7e5f81974561d9d73ea34df2f7b0bc7269" exitCode=0 Oct 13 18:14:08 crc kubenswrapper[4720]: I1013 18:14:08.037532 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ggdrl" event={"ID":"ca2713c1-8a07-45bc-be61-c76c07d3f6b9","Type":"ContainerDied","Data":"a35938b20b2ded96a82694fe1fdf4f7e5f81974561d9d73ea34df2f7b0bc7269"} Oct 13 18:14:10 crc kubenswrapper[4720]: I1013 18:14:10.068079 4720 generic.go:334] "Generic (PLEG): container finished" podID="ca2713c1-8a07-45bc-be61-c76c07d3f6b9" containerID="b276c4433dd838873ed1bf920dc4d5d378f83f571010ff987ee78462b87c4855" exitCode=0 Oct 13 18:14:10 crc kubenswrapper[4720]: I1013 18:14:10.068183 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ggdrl" event={"ID":"ca2713c1-8a07-45bc-be61-c76c07d3f6b9","Type":"ContainerDied","Data":"b276c4433dd838873ed1bf920dc4d5d378f83f571010ff987ee78462b87c4855"} Oct 13 18:14:11 crc kubenswrapper[4720]: I1013 18:14:11.084959 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ggdrl" event={"ID":"ca2713c1-8a07-45bc-be61-c76c07d3f6b9","Type":"ContainerStarted","Data":"9927f39f0304e3276cb88c9d8ec644fc89ee9fea6ed29a80d7f236ec65bec4f5"} Oct 13 18:14:11 crc 
kubenswrapper[4720]: I1013 18:14:11.135370 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ggdrl" podStartSLOduration=2.717363887 podStartE2EDuration="5.135343951s" podCreationTimestamp="2025-10-13 18:14:06 +0000 UTC" firstStartedPulling="2025-10-13 18:14:08.043310615 +0000 UTC m=+2993.500560787" lastFinishedPulling="2025-10-13 18:14:10.461290689 +0000 UTC m=+2995.918540851" observedRunningTime="2025-10-13 18:14:11.103170356 +0000 UTC m=+2996.560420508" watchObservedRunningTime="2025-10-13 18:14:11.135343951 +0000 UTC m=+2996.592594093" Oct 13 18:14:14 crc kubenswrapper[4720]: I1013 18:14:14.168353 4720 scope.go:117] "RemoveContainer" containerID="8955f01d60828c6a158946e7dd0ce8b2360fc11ddc5d461465ae7048137b38b7" Oct 13 18:14:14 crc kubenswrapper[4720]: E1013 18:14:14.169303 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" Oct 13 18:14:16 crc kubenswrapper[4720]: I1013 18:14:16.486815 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ggdrl" Oct 13 18:14:16 crc kubenswrapper[4720]: I1013 18:14:16.487862 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ggdrl" Oct 13 18:14:16 crc kubenswrapper[4720]: I1013 18:14:16.564137 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ggdrl" Oct 13 18:14:17 crc kubenswrapper[4720]: I1013 18:14:17.233167 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ggdrl" Oct 13 18:14:17 crc kubenswrapper[4720]: I1013 18:14:17.285563 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ggdrl"] Oct 13 18:14:19 crc kubenswrapper[4720]: I1013 18:14:19.180053 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ggdrl" podUID="ca2713c1-8a07-45bc-be61-c76c07d3f6b9" containerName="registry-server" containerID="cri-o://9927f39f0304e3276cb88c9d8ec644fc89ee9fea6ed29a80d7f236ec65bec4f5" gracePeriod=2 Oct 13 18:14:19 crc kubenswrapper[4720]: I1013 18:14:19.723932 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ggdrl" Oct 13 18:14:19 crc kubenswrapper[4720]: I1013 18:14:19.839021 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca2713c1-8a07-45bc-be61-c76c07d3f6b9-utilities\") pod \"ca2713c1-8a07-45bc-be61-c76c07d3f6b9\" (UID: \"ca2713c1-8a07-45bc-be61-c76c07d3f6b9\") " Oct 13 18:14:19 crc kubenswrapper[4720]: I1013 18:14:19.839611 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca2713c1-8a07-45bc-be61-c76c07d3f6b9-catalog-content\") pod \"ca2713c1-8a07-45bc-be61-c76c07d3f6b9\" (UID: \"ca2713c1-8a07-45bc-be61-c76c07d3f6b9\") " Oct 13 18:14:19 crc kubenswrapper[4720]: I1013 18:14:19.839824 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7svz\" (UniqueName: \"kubernetes.io/projected/ca2713c1-8a07-45bc-be61-c76c07d3f6b9-kube-api-access-x7svz\") pod \"ca2713c1-8a07-45bc-be61-c76c07d3f6b9\" (UID: \"ca2713c1-8a07-45bc-be61-c76c07d3f6b9\") " Oct 13 18:14:19 crc kubenswrapper[4720]: I1013 18:14:19.840238 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca2713c1-8a07-45bc-be61-c76c07d3f6b9-utilities" (OuterVolumeSpecName: "utilities") pod "ca2713c1-8a07-45bc-be61-c76c07d3f6b9" (UID: "ca2713c1-8a07-45bc-be61-c76c07d3f6b9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 18:14:19 crc kubenswrapper[4720]: I1013 18:14:19.840902 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca2713c1-8a07-45bc-be61-c76c07d3f6b9-utilities\") on node \"crc\" DevicePath \"\"" Oct 13 18:14:19 crc kubenswrapper[4720]: I1013 18:14:19.845794 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca2713c1-8a07-45bc-be61-c76c07d3f6b9-kube-api-access-x7svz" (OuterVolumeSpecName: "kube-api-access-x7svz") pod "ca2713c1-8a07-45bc-be61-c76c07d3f6b9" (UID: "ca2713c1-8a07-45bc-be61-c76c07d3f6b9"). InnerVolumeSpecName "kube-api-access-x7svz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 18:14:19 crc kubenswrapper[4720]: I1013 18:14:19.923995 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca2713c1-8a07-45bc-be61-c76c07d3f6b9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ca2713c1-8a07-45bc-be61-c76c07d3f6b9" (UID: "ca2713c1-8a07-45bc-be61-c76c07d3f6b9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 18:14:19 crc kubenswrapper[4720]: I1013 18:14:19.942720 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca2713c1-8a07-45bc-be61-c76c07d3f6b9-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 13 18:14:19 crc kubenswrapper[4720]: I1013 18:14:19.942770 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7svz\" (UniqueName: \"kubernetes.io/projected/ca2713c1-8a07-45bc-be61-c76c07d3f6b9-kube-api-access-x7svz\") on node \"crc\" DevicePath \"\"" Oct 13 18:14:20 crc kubenswrapper[4720]: I1013 18:14:20.198675 4720 generic.go:334] "Generic (PLEG): container finished" podID="ca2713c1-8a07-45bc-be61-c76c07d3f6b9" containerID="9927f39f0304e3276cb88c9d8ec644fc89ee9fea6ed29a80d7f236ec65bec4f5" exitCode=0 Oct 13 18:14:20 crc kubenswrapper[4720]: I1013 18:14:20.198746 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ggdrl" event={"ID":"ca2713c1-8a07-45bc-be61-c76c07d3f6b9","Type":"ContainerDied","Data":"9927f39f0304e3276cb88c9d8ec644fc89ee9fea6ed29a80d7f236ec65bec4f5"} Oct 13 18:14:20 crc kubenswrapper[4720]: I1013 18:14:20.198779 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ggdrl" Oct 13 18:14:20 crc kubenswrapper[4720]: I1013 18:14:20.198800 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ggdrl" event={"ID":"ca2713c1-8a07-45bc-be61-c76c07d3f6b9","Type":"ContainerDied","Data":"1df0a959de8e85eb985e8828b63abb263734c3467d20eb871490d772b81a870b"} Oct 13 18:14:20 crc kubenswrapper[4720]: I1013 18:14:20.198830 4720 scope.go:117] "RemoveContainer" containerID="9927f39f0304e3276cb88c9d8ec644fc89ee9fea6ed29a80d7f236ec65bec4f5" Oct 13 18:14:20 crc kubenswrapper[4720]: I1013 18:14:20.232052 4720 scope.go:117] "RemoveContainer" containerID="b276c4433dd838873ed1bf920dc4d5d378f83f571010ff987ee78462b87c4855" Oct 13 18:14:20 crc kubenswrapper[4720]: I1013 18:14:20.283152 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ggdrl"] Oct 13 18:14:20 crc kubenswrapper[4720]: I1013 18:14:20.285975 4720 scope.go:117] "RemoveContainer" containerID="a35938b20b2ded96a82694fe1fdf4f7e5f81974561d9d73ea34df2f7b0bc7269" Oct 13 18:14:20 crc kubenswrapper[4720]: I1013 18:14:20.290253 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ggdrl"] Oct 13 18:14:20 crc kubenswrapper[4720]: I1013 18:14:20.309758 4720 scope.go:117] "RemoveContainer" containerID="9927f39f0304e3276cb88c9d8ec644fc89ee9fea6ed29a80d7f236ec65bec4f5" Oct 13 18:14:20 crc kubenswrapper[4720]: E1013 18:14:20.310204 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9927f39f0304e3276cb88c9d8ec644fc89ee9fea6ed29a80d7f236ec65bec4f5\": container with ID starting with 9927f39f0304e3276cb88c9d8ec644fc89ee9fea6ed29a80d7f236ec65bec4f5 not found: ID does not exist" containerID="9927f39f0304e3276cb88c9d8ec644fc89ee9fea6ed29a80d7f236ec65bec4f5" Oct 13 18:14:20 crc kubenswrapper[4720]: I1013 18:14:20.310240 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9927f39f0304e3276cb88c9d8ec644fc89ee9fea6ed29a80d7f236ec65bec4f5"} err="failed to get container status \"9927f39f0304e3276cb88c9d8ec644fc89ee9fea6ed29a80d7f236ec65bec4f5\": 
rpc error: code = NotFound desc = could not find container \"9927f39f0304e3276cb88c9d8ec644fc89ee9fea6ed29a80d7f236ec65bec4f5\": container with ID starting with 9927f39f0304e3276cb88c9d8ec644fc89ee9fea6ed29a80d7f236ec65bec4f5 not found: ID does not exist" Oct 13 18:14:20 crc kubenswrapper[4720]: I1013 18:14:20.310310 4720 scope.go:117] "RemoveContainer" containerID="b276c4433dd838873ed1bf920dc4d5d378f83f571010ff987ee78462b87c4855" Oct 13 18:14:20 crc kubenswrapper[4720]: E1013 18:14:20.310681 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b276c4433dd838873ed1bf920dc4d5d378f83f571010ff987ee78462b87c4855\": container with ID starting with b276c4433dd838873ed1bf920dc4d5d378f83f571010ff987ee78462b87c4855 not found: ID does not exist" containerID="b276c4433dd838873ed1bf920dc4d5d378f83f571010ff987ee78462b87c4855" Oct 13 18:14:20 crc kubenswrapper[4720]: I1013 18:14:20.310798 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b276c4433dd838873ed1bf920dc4d5d378f83f571010ff987ee78462b87c4855"} err="failed to get container status \"b276c4433dd838873ed1bf920dc4d5d378f83f571010ff987ee78462b87c4855\": rpc error: code = NotFound desc = could not find container \"b276c4433dd838873ed1bf920dc4d5d378f83f571010ff987ee78462b87c4855\": container with ID starting with b276c4433dd838873ed1bf920dc4d5d378f83f571010ff987ee78462b87c4855 not found: ID does not exist" Oct 13 18:14:20 crc kubenswrapper[4720]: I1013 18:14:20.310898 4720 scope.go:117] "RemoveContainer" containerID="a35938b20b2ded96a82694fe1fdf4f7e5f81974561d9d73ea34df2f7b0bc7269" Oct 13 18:14:20 crc kubenswrapper[4720]: E1013 18:14:20.311342 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a35938b20b2ded96a82694fe1fdf4f7e5f81974561d9d73ea34df2f7b0bc7269\": container with ID starting with a35938b20b2ded96a82694fe1fdf4f7e5f81974561d9d73ea34df2f7b0bc7269 not found: ID does not exist" containerID="a35938b20b2ded96a82694fe1fdf4f7e5f81974561d9d73ea34df2f7b0bc7269" Oct 13 18:14:20 crc kubenswrapper[4720]: I1013 18:14:20.311367 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a35938b20b2ded96a82694fe1fdf4f7e5f81974561d9d73ea34df2f7b0bc7269"} err="failed to get container status \"a35938b20b2ded96a82694fe1fdf4f7e5f81974561d9d73ea34df2f7b0bc7269\": rpc error: code = NotFound desc = could not find container \"a35938b20b2ded96a82694fe1fdf4f7e5f81974561d9d73ea34df2f7b0bc7269\": container with ID starting with a35938b20b2ded96a82694fe1fdf4f7e5f81974561d9d73ea34df2f7b0bc7269 not found: ID does not exist" Oct 13 18:14:21 crc kubenswrapper[4720]: I1013 18:14:21.187154 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca2713c1-8a07-45bc-be61-c76c07d3f6b9" path="/var/lib/kubelet/pods/ca2713c1-8a07-45bc-be61-c76c07d3f6b9/volumes" Oct 13 18:14:29 crc kubenswrapper[4720]: I1013 18:14:29.168149 4720 scope.go:117] "RemoveContainer" containerID="8955f01d60828c6a158946e7dd0ce8b2360fc11ddc5d461465ae7048137b38b7" Oct 13 18:14:29 crc kubenswrapper[4720]: E1013 18:14:29.168918 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" Oct 13 18:14:41 crc kubenswrapper[4720]: I1013 18:14:41.168278 4720 scope.go:117] "RemoveContainer" containerID="8955f01d60828c6a158946e7dd0ce8b2360fc11ddc5d461465ae7048137b38b7" Oct 13 18:14:41 crc kubenswrapper[4720]: E1013 18:14:41.169136 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" Oct 13 18:14:52 crc kubenswrapper[4720]: I1013 18:14:52.169124 4720 scope.go:117] "RemoveContainer" containerID="8955f01d60828c6a158946e7dd0ce8b2360fc11ddc5d461465ae7048137b38b7" Oct 13 18:14:52 crc kubenswrapper[4720]: E1013 18:14:52.170172 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" Oct 13 18:15:00 crc kubenswrapper[4720]: I1013 18:15:00.178512 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29339655-lrtgw"] Oct 13 18:15:00 crc kubenswrapper[4720]: E1013 18:15:00.179528 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca2713c1-8a07-45bc-be61-c76c07d3f6b9" containerName="registry-server" Oct 13 18:15:00 crc kubenswrapper[4720]: I1013 18:15:00.179547 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca2713c1-8a07-45bc-be61-c76c07d3f6b9" containerName="registry-server" Oct 13 18:15:00 crc kubenswrapper[4720]: E1013 18:15:00.179561 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca2713c1-8a07-45bc-be61-c76c07d3f6b9" containerName="extract-utilities" Oct 13 18:15:00 crc kubenswrapper[4720]: I1013 18:15:00.179568 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca2713c1-8a07-45bc-be61-c76c07d3f6b9" containerName="extract-utilities" Oct 13 18:15:00 crc kubenswrapper[4720]: E1013 18:15:00.179579 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca2713c1-8a07-45bc-be61-c76c07d3f6b9" containerName="extract-content" Oct 13 18:15:00 crc kubenswrapper[4720]: I1013 18:15:00.179587 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca2713c1-8a07-45bc-be61-c76c07d3f6b9" containerName="extract-content" Oct 13 18:15:00 crc kubenswrapper[4720]: I1013 18:15:00.179867 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca2713c1-8a07-45bc-be61-c76c07d3f6b9" containerName="registry-server" Oct 13 18:15:00 crc kubenswrapper[4720]: I1013 18:15:00.180673 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29339655-lrtgw" Oct 13 18:15:00 crc kubenswrapper[4720]: I1013 18:15:00.183024 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 13 18:15:00 crc kubenswrapper[4720]: I1013 18:15:00.187636 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29339655-lrtgw"] Oct 13 18:15:00 crc kubenswrapper[4720]: I1013 18:15:00.189797 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 13 18:15:00 crc kubenswrapper[4720]: I1013 18:15:00.334542 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlpfz\" (UniqueName: \"kubernetes.io/projected/432384e3-388a-46fc-be1b-29e415afca79-kube-api-access-wlpfz\") pod \"collect-profiles-29339655-lrtgw\" (UID: \"432384e3-388a-46fc-be1b-29e415afca79\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339655-lrtgw" Oct 13 18:15:00 crc kubenswrapper[4720]: I1013 18:15:00.334970 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/432384e3-388a-46fc-be1b-29e415afca79-config-volume\") pod \"collect-profiles-29339655-lrtgw\" (UID: \"432384e3-388a-46fc-be1b-29e415afca79\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339655-lrtgw" Oct 13 18:15:00 crc kubenswrapper[4720]: I1013 18:15:00.335035 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/432384e3-388a-46fc-be1b-29e415afca79-secret-volume\") pod \"collect-profiles-29339655-lrtgw\" (UID: \"432384e3-388a-46fc-be1b-29e415afca79\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339655-lrtgw" Oct 13 18:15:00 crc kubenswrapper[4720]: I1013 18:15:00.436785 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/432384e3-388a-46fc-be1b-29e415afca79-config-volume\") pod \"collect-profiles-29339655-lrtgw\" (UID: \"432384e3-388a-46fc-be1b-29e415afca79\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339655-lrtgw" Oct 13 18:15:00 crc kubenswrapper[4720]: I1013 18:15:00.436841 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/432384e3-388a-46fc-be1b-29e415afca79-secret-volume\") pod \"collect-profiles-29339655-lrtgw\" (UID: \"432384e3-388a-46fc-be1b-29e415afca79\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339655-lrtgw" Oct 13 18:15:00 crc kubenswrapper[4720]: I1013 18:15:00.436924 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlpfz\" (UniqueName: \"kubernetes.io/projected/432384e3-388a-46fc-be1b-29e415afca79-kube-api-access-wlpfz\") pod \"collect-profiles-29339655-lrtgw\" (UID: \"432384e3-388a-46fc-be1b-29e415afca79\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339655-lrtgw" Oct 13 18:15:00 crc kubenswrapper[4720]: I1013 18:15:00.438163 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/432384e3-388a-46fc-be1b-29e415afca79-config-volume\") pod 
\"collect-profiles-29339655-lrtgw\" (UID: \"432384e3-388a-46fc-be1b-29e415afca79\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339655-lrtgw" Oct 13 18:15:00 crc kubenswrapper[4720]: I1013 18:15:00.446675 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/432384e3-388a-46fc-be1b-29e415afca79-secret-volume\") pod \"collect-profiles-29339655-lrtgw\" (UID: \"432384e3-388a-46fc-be1b-29e415afca79\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339655-lrtgw" Oct 13 18:15:00 crc kubenswrapper[4720]: I1013 18:15:00.455066 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlpfz\" (UniqueName: \"kubernetes.io/projected/432384e3-388a-46fc-be1b-29e415afca79-kube-api-access-wlpfz\") pod \"collect-profiles-29339655-lrtgw\" (UID: \"432384e3-388a-46fc-be1b-29e415afca79\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339655-lrtgw" Oct 13 18:15:00 crc kubenswrapper[4720]: I1013 18:15:00.502345 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29339655-lrtgw" Oct 13 18:15:01 crc kubenswrapper[4720]: I1013 18:15:01.028299 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29339655-lrtgw"] Oct 13 18:15:01 crc kubenswrapper[4720]: I1013 18:15:01.637781 4720 generic.go:334] "Generic (PLEG): container finished" podID="432384e3-388a-46fc-be1b-29e415afca79" containerID="dacca7268e1d8398ec7d8410bdfb8b821be6da26e528694a6f13f58e45626b99" exitCode=0 Oct 13 18:15:01 crc kubenswrapper[4720]: I1013 18:15:01.637909 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29339655-lrtgw" event={"ID":"432384e3-388a-46fc-be1b-29e415afca79","Type":"ContainerDied","Data":"dacca7268e1d8398ec7d8410bdfb8b821be6da26e528694a6f13f58e45626b99"} Oct 13 18:15:01 crc kubenswrapper[4720]: I1013 18:15:01.639323 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29339655-lrtgw" event={"ID":"432384e3-388a-46fc-be1b-29e415afca79","Type":"ContainerStarted","Data":"c5f2912396871eedff7e450f1740991116f9e6cb5c50119e4d044877e0d210ab"} Oct 13 18:15:03 crc kubenswrapper[4720]: I1013 18:15:03.018392 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29339655-lrtgw" Oct 13 18:15:03 crc kubenswrapper[4720]: I1013 18:15:03.203103 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/432384e3-388a-46fc-be1b-29e415afca79-secret-volume\") pod \"432384e3-388a-46fc-be1b-29e415afca79\" (UID: \"432384e3-388a-46fc-be1b-29e415afca79\") " Oct 13 18:15:03 crc kubenswrapper[4720]: I1013 18:15:03.203506 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/432384e3-388a-46fc-be1b-29e415afca79-config-volume\") pod \"432384e3-388a-46fc-be1b-29e415afca79\" (UID: \"432384e3-388a-46fc-be1b-29e415afca79\") " Oct 13 18:15:03 crc kubenswrapper[4720]: I1013 18:15:03.203680 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlpfz\" (UniqueName: \"kubernetes.io/projected/432384e3-388a-46fc-be1b-29e415afca79-kube-api-access-wlpfz\") pod \"432384e3-388a-46fc-be1b-29e415afca79\" (UID: \"432384e3-388a-46fc-be1b-29e415afca79\") " Oct 13 18:15:03 crc kubenswrapper[4720]: I1013 18:15:03.204104 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/432384e3-388a-46fc-be1b-29e415afca79-config-volume" (OuterVolumeSpecName: "config-volume") pod "432384e3-388a-46fc-be1b-29e415afca79" (UID: "432384e3-388a-46fc-be1b-29e415afca79"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 18:15:03 crc kubenswrapper[4720]: I1013 18:15:03.204626 4720 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/432384e3-388a-46fc-be1b-29e415afca79-config-volume\") on node \"crc\" DevicePath \"\"" Oct 13 18:15:03 crc kubenswrapper[4720]: I1013 18:15:03.225457 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/432384e3-388a-46fc-be1b-29e415afca79-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "432384e3-388a-46fc-be1b-29e415afca79" (UID: "432384e3-388a-46fc-be1b-29e415afca79"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 18:15:03 crc kubenswrapper[4720]: I1013 18:15:03.226303 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/432384e3-388a-46fc-be1b-29e415afca79-kube-api-access-wlpfz" (OuterVolumeSpecName: "kube-api-access-wlpfz") pod "432384e3-388a-46fc-be1b-29e415afca79" (UID: "432384e3-388a-46fc-be1b-29e415afca79"). InnerVolumeSpecName "kube-api-access-wlpfz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 18:15:03 crc kubenswrapper[4720]: I1013 18:15:03.307627 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wlpfz\" (UniqueName: \"kubernetes.io/projected/432384e3-388a-46fc-be1b-29e415afca79-kube-api-access-wlpfz\") on node \"crc\" DevicePath \"\"" Oct 13 18:15:03 crc kubenswrapper[4720]: I1013 18:15:03.307663 4720 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/432384e3-388a-46fc-be1b-29e415afca79-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 13 18:15:03 crc kubenswrapper[4720]: I1013 18:15:03.656946 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29339655-lrtgw" event={"ID":"432384e3-388a-46fc-be1b-29e415afca79","Type":"ContainerDied","Data":"c5f2912396871eedff7e450f1740991116f9e6cb5c50119e4d044877e0d210ab"} Oct 13 18:15:03 crc kubenswrapper[4720]: I1013 18:15:03.657004 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5f2912396871eedff7e450f1740991116f9e6cb5c50119e4d044877e0d210ab" Oct 13 18:15:03 crc kubenswrapper[4720]: I1013 18:15:03.657038 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29339655-lrtgw" Oct 13 18:15:04 crc kubenswrapper[4720]: I1013 18:15:04.097123 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29339610-42824"] Oct 13 18:15:04 crc kubenswrapper[4720]: I1013 18:15:04.111828 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29339610-42824"] Oct 13 18:15:05 crc kubenswrapper[4720]: I1013 18:15:05.191887 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0c11d5a-58cf-4f1e-ada6-0d154e322e44" path="/var/lib/kubelet/pods/e0c11d5a-58cf-4f1e-ada6-0d154e322e44/volumes" Oct 13 18:15:07 crc kubenswrapper[4720]: I1013 18:15:07.168618 4720 scope.go:117] "RemoveContainer" containerID="8955f01d60828c6a158946e7dd0ce8b2360fc11ddc5d461465ae7048137b38b7" Oct 13 18:15:07 crc kubenswrapper[4720]: E1013 18:15:07.169394 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" Oct 13 18:15:19 crc kubenswrapper[4720]: I1013 18:15:19.171416 4720 scope.go:117] "RemoveContainer" containerID="8955f01d60828c6a158946e7dd0ce8b2360fc11ddc5d461465ae7048137b38b7" Oct 13 18:15:19 crc kubenswrapper[4720]: E1013 18:15:19.172752 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" Oct 13 18:15:20 crc kubenswrapper[4720]: I1013 18:15:20.556033 4720 scope.go:117] "RemoveContainer" containerID="4a874c56c5fb6521e14e4ce7a7db8ea0b2de796fac747532e158fb5d0afc63dd" Oct 13 
18:15:32 crc kubenswrapper[4720]: I1013 18:15:32.168477 4720 scope.go:117] "RemoveContainer" containerID="8955f01d60828c6a158946e7dd0ce8b2360fc11ddc5d461465ae7048137b38b7" Oct 13 18:15:32 crc kubenswrapper[4720]: E1013 18:15:32.169514 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" Oct 13 18:15:47 crc kubenswrapper[4720]: I1013 18:15:47.169861 4720 scope.go:117] "RemoveContainer" containerID="8955f01d60828c6a158946e7dd0ce8b2360fc11ddc5d461465ae7048137b38b7" Oct 13 18:15:47 crc kubenswrapper[4720]: E1013 18:15:47.171661 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" Oct 13 18:16:01 crc kubenswrapper[4720]: I1013 18:16:01.168749 4720 scope.go:117] "RemoveContainer" containerID="8955f01d60828c6a158946e7dd0ce8b2360fc11ddc5d461465ae7048137b38b7" Oct 13 18:16:01 crc kubenswrapper[4720]: E1013 18:16:01.169507 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" Oct 13 18:16:16 crc kubenswrapper[4720]: I1013 18:16:16.168033 4720 scope.go:117] "RemoveContainer" containerID="8955f01d60828c6a158946e7dd0ce8b2360fc11ddc5d461465ae7048137b38b7" Oct 13 18:16:16 crc kubenswrapper[4720]: E1013 18:16:16.168994 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" Oct 13 18:16:27 crc kubenswrapper[4720]: I1013 18:16:27.169212 4720 scope.go:117] "RemoveContainer" containerID="8955f01d60828c6a158946e7dd0ce8b2360fc11ddc5d461465ae7048137b38b7" Oct 13 18:16:27 crc kubenswrapper[4720]: E1013 18:16:27.171066 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" Oct 13 18:16:38 crc kubenswrapper[4720]: I1013 18:16:38.168975 4720 scope.go:117] "RemoveContainer" containerID="8955f01d60828c6a158946e7dd0ce8b2360fc11ddc5d461465ae7048137b38b7" Oct 13 18:16:38 crc 
kubenswrapper[4720]: E1013 18:16:38.171467 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" Oct 13 18:16:50 crc kubenswrapper[4720]: I1013 18:16:50.168538 4720 scope.go:117] "RemoveContainer" containerID="8955f01d60828c6a158946e7dd0ce8b2360fc11ddc5d461465ae7048137b38b7" Oct 13 18:16:50 crc kubenswrapper[4720]: I1013 18:16:50.824832 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" event={"ID":"ce442c80-fcde-4b79-b6f9-f8f25771dfd4","Type":"ContainerStarted","Data":"764c1c0ec6aa60325a7c805ec7141bde73c6fdd9d9c0a1e82b8d4193f1fa8d1d"} Oct 13 18:19:15 crc kubenswrapper[4720]: I1013 18:19:15.213226 4720 patch_prober.go:28] interesting pod/machine-config-daemon-htwnl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 18:19:15 crc kubenswrapper[4720]: I1013 18:19:15.213874 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 18:19:45 crc kubenswrapper[4720]: I1013 18:19:45.212694 4720 patch_prober.go:28] interesting pod/machine-config-daemon-htwnl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 18:19:45 crc kubenswrapper[4720]: I1013 18:19:45.213465 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 18:20:02 crc kubenswrapper[4720]: I1013 18:20:02.348870 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mddst"] Oct 13 18:20:02 crc kubenswrapper[4720]: E1013 18:20:02.350315 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="432384e3-388a-46fc-be1b-29e415afca79" containerName="collect-profiles" Oct 13 18:20:02 crc kubenswrapper[4720]: I1013 18:20:02.350338 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="432384e3-388a-46fc-be1b-29e415afca79" containerName="collect-profiles" Oct 13 18:20:02 crc kubenswrapper[4720]: I1013 18:20:02.350718 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="432384e3-388a-46fc-be1b-29e415afca79" containerName="collect-profiles" Oct 13 18:20:02 crc kubenswrapper[4720]: I1013 18:20:02.353096 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mddst" Oct 13 18:20:02 crc kubenswrapper[4720]: I1013 18:20:02.380493 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mddst"] Oct 13 18:20:02 crc kubenswrapper[4720]: I1013 18:20:02.404648 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e03e8607-b33e-44a0-93ae-9b77c8725441-catalog-content\") pod \"redhat-marketplace-mddst\" (UID: \"e03e8607-b33e-44a0-93ae-9b77c8725441\") " pod="openshift-marketplace/redhat-marketplace-mddst" Oct 13 18:20:02 crc kubenswrapper[4720]: I1013 18:20:02.404741 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6fxq\" (UniqueName: \"kubernetes.io/projected/e03e8607-b33e-44a0-93ae-9b77c8725441-kube-api-access-b6fxq\") pod \"redhat-marketplace-mddst\" (UID: \"e03e8607-b33e-44a0-93ae-9b77c8725441\") " pod="openshift-marketplace/redhat-marketplace-mddst" Oct 13 18:20:02 crc kubenswrapper[4720]: I1013 18:20:02.404816 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e03e8607-b33e-44a0-93ae-9b77c8725441-utilities\") pod \"redhat-marketplace-mddst\" (UID: \"e03e8607-b33e-44a0-93ae-9b77c8725441\") " pod="openshift-marketplace/redhat-marketplace-mddst" Oct 13 18:20:02 crc kubenswrapper[4720]: I1013 18:20:02.507395 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e03e8607-b33e-44a0-93ae-9b77c8725441-catalog-content\") pod \"redhat-marketplace-mddst\" (UID: \"e03e8607-b33e-44a0-93ae-9b77c8725441\") " pod="openshift-marketplace/redhat-marketplace-mddst" Oct 13 18:20:02 crc kubenswrapper[4720]: I1013 18:20:02.507463 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6fxq\" (UniqueName: \"kubernetes.io/projected/e03e8607-b33e-44a0-93ae-9b77c8725441-kube-api-access-b6fxq\") pod \"redhat-marketplace-mddst\" (UID: \"e03e8607-b33e-44a0-93ae-9b77c8725441\") " pod="openshift-marketplace/redhat-marketplace-mddst" Oct 13 18:20:02 crc kubenswrapper[4720]: I1013 18:20:02.507549 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e03e8607-b33e-44a0-93ae-9b77c8725441-utilities\") pod \"redhat-marketplace-mddst\" (UID: \"e03e8607-b33e-44a0-93ae-9b77c8725441\") " pod="openshift-marketplace/redhat-marketplace-mddst" Oct 13 18:20:02 crc kubenswrapper[4720]: I1013 18:20:02.507963 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e03e8607-b33e-44a0-93ae-9b77c8725441-catalog-content\") pod \"redhat-marketplace-mddst\" (UID: \"e03e8607-b33e-44a0-93ae-9b77c8725441\") " pod="openshift-marketplace/redhat-marketplace-mddst" Oct 13 18:20:02 crc kubenswrapper[4720]: I1013 18:20:02.507975 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e03e8607-b33e-44a0-93ae-9b77c8725441-utilities\") pod \"redhat-marketplace-mddst\" (UID: \"e03e8607-b33e-44a0-93ae-9b77c8725441\") " pod="openshift-marketplace/redhat-marketplace-mddst" Oct 13 18:20:02 crc kubenswrapper[4720]: I1013 18:20:02.530439 4720 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-b6fxq\" (UniqueName: \"kubernetes.io/projected/e03e8607-b33e-44a0-93ae-9b77c8725441-kube-api-access-b6fxq\") pod \"redhat-marketplace-mddst\" (UID: \"e03e8607-b33e-44a0-93ae-9b77c8725441\") " pod="openshift-marketplace/redhat-marketplace-mddst" Oct 13 18:20:02 crc kubenswrapper[4720]: I1013 18:20:02.689739 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mddst" Oct 13 18:20:03 crc kubenswrapper[4720]: I1013 18:20:03.245539 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mddst"] Oct 13 18:20:03 crc kubenswrapper[4720]: I1013 18:20:03.980499 4720 generic.go:334] "Generic (PLEG): container finished" podID="e03e8607-b33e-44a0-93ae-9b77c8725441" containerID="7a67422966d0040c377e7578c74ca7e09d7815dfabf8162b37905d9a2e6b776a" exitCode=0 Oct 13 18:20:03 crc kubenswrapper[4720]: I1013 18:20:03.980556 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mddst" event={"ID":"e03e8607-b33e-44a0-93ae-9b77c8725441","Type":"ContainerDied","Data":"7a67422966d0040c377e7578c74ca7e09d7815dfabf8162b37905d9a2e6b776a"} Oct 13 18:20:03 crc kubenswrapper[4720]: I1013 18:20:03.980606 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mddst" event={"ID":"e03e8607-b33e-44a0-93ae-9b77c8725441","Type":"ContainerStarted","Data":"4a9139fc30a7c8953b84ca69f994adb0aa991ef0f587beaf88b8e52182021a24"} Oct 13 18:20:03 crc kubenswrapper[4720]: I1013 18:20:03.984085 4720 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 13 18:20:05 crc kubenswrapper[4720]: I1013 18:20:05.010393 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mddst" event={"ID":"e03e8607-b33e-44a0-93ae-9b77c8725441","Type":"ContainerStarted","Data":"8238b372a498edf10234bbfe533c0167e1beba9c7e8c27810b7ab3c2c46d6f9b"} Oct 13 18:20:06 crc kubenswrapper[4720]: I1013 18:20:06.037215 4720 generic.go:334] "Generic (PLEG): container finished" podID="e03e8607-b33e-44a0-93ae-9b77c8725441" containerID="8238b372a498edf10234bbfe533c0167e1beba9c7e8c27810b7ab3c2c46d6f9b" exitCode=0 Oct 13 18:20:06 crc kubenswrapper[4720]: I1013 18:20:06.037267 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mddst" event={"ID":"e03e8607-b33e-44a0-93ae-9b77c8725441","Type":"ContainerDied","Data":"8238b372a498edf10234bbfe533c0167e1beba9c7e8c27810b7ab3c2c46d6f9b"} Oct 13 18:20:07 crc kubenswrapper[4720]: I1013 18:20:07.052044 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mddst" event={"ID":"e03e8607-b33e-44a0-93ae-9b77c8725441","Type":"ContainerStarted","Data":"3c4e1251e301b5aa3ebfe0ef2652f186367226d582390d20606500a00cdb361d"} Oct 13 18:20:07 crc kubenswrapper[4720]: I1013 18:20:07.094503 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mddst" podStartSLOduration=2.554063973 podStartE2EDuration="5.094485772s" podCreationTimestamp="2025-10-13 18:20:02 +0000 UTC" firstStartedPulling="2025-10-13 18:20:03.983855731 +0000 UTC m=+3349.441105863" lastFinishedPulling="2025-10-13 18:20:06.52427752 +0000 UTC m=+3351.981527662" observedRunningTime="2025-10-13 18:20:07.081794613 +0000 UTC m=+3352.539044745" watchObservedRunningTime="2025-10-13 18:20:07.094485772 +0000 UTC 
m=+3352.551735904"
Oct 13 18:20:12 crc kubenswrapper[4720]: I1013 18:20:12.690485 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mddst"
Oct 13 18:20:12 crc kubenswrapper[4720]: I1013 18:20:12.690984 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mddst"
Oct 13 18:20:12 crc kubenswrapper[4720]: I1013 18:20:12.774263 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mddst"
Oct 13 18:20:13 crc kubenswrapper[4720]: I1013 18:20:13.203042 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mddst"
Oct 13 18:20:13 crc kubenswrapper[4720]: I1013 18:20:13.265877 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mddst"]
Oct 13 18:20:15 crc kubenswrapper[4720]: I1013 18:20:15.159439 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mddst" podUID="e03e8607-b33e-44a0-93ae-9b77c8725441" containerName="registry-server" containerID="cri-o://3c4e1251e301b5aa3ebfe0ef2652f186367226d582390d20606500a00cdb361d" gracePeriod=2
Oct 13 18:20:15 crc kubenswrapper[4720]: I1013 18:20:15.212844 4720 patch_prober.go:28] interesting pod/machine-config-daemon-htwnl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 13 18:20:15 crc kubenswrapper[4720]: I1013 18:20:15.212900 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 13 18:20:15 crc kubenswrapper[4720]: I1013 18:20:15.212939 4720 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-htwnl"
Oct 13 18:20:15 crc kubenswrapper[4720]: I1013 18:20:15.213580 4720 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"764c1c0ec6aa60325a7c805ec7141bde73c6fdd9d9c0a1e82b8d4193f1fa8d1d"} pod="openshift-machine-config-operator/machine-config-daemon-htwnl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Oct 13 18:20:15 crc kubenswrapper[4720]: I1013 18:20:15.213647 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" containerName="machine-config-daemon" containerID="cri-o://764c1c0ec6aa60325a7c805ec7141bde73c6fdd9d9c0a1e82b8d4193f1fa8d1d" gracePeriod=600
Oct 13 18:20:15 crc kubenswrapper[4720]: I1013 18:20:15.746354 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mddst"
Oct 13 18:20:15 crc kubenswrapper[4720]: I1013 18:20:15.871811 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e03e8607-b33e-44a0-93ae-9b77c8725441-utilities\") pod \"e03e8607-b33e-44a0-93ae-9b77c8725441\" (UID: \"e03e8607-b33e-44a0-93ae-9b77c8725441\") "
Oct 13 18:20:15 crc kubenswrapper[4720]: I1013 18:20:15.871950 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6fxq\" (UniqueName: \"kubernetes.io/projected/e03e8607-b33e-44a0-93ae-9b77c8725441-kube-api-access-b6fxq\") pod \"e03e8607-b33e-44a0-93ae-9b77c8725441\" (UID: \"e03e8607-b33e-44a0-93ae-9b77c8725441\") "
Oct 13 18:20:15 crc kubenswrapper[4720]: I1013 18:20:15.872006 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e03e8607-b33e-44a0-93ae-9b77c8725441-catalog-content\") pod \"e03e8607-b33e-44a0-93ae-9b77c8725441\" (UID: \"e03e8607-b33e-44a0-93ae-9b77c8725441\") "
Oct 13 18:20:15 crc kubenswrapper[4720]: I1013 18:20:15.874896 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e03e8607-b33e-44a0-93ae-9b77c8725441-utilities" (OuterVolumeSpecName: "utilities") pod "e03e8607-b33e-44a0-93ae-9b77c8725441" (UID: "e03e8607-b33e-44a0-93ae-9b77c8725441"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 13 18:20:15 crc kubenswrapper[4720]: I1013 18:20:15.880950 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e03e8607-b33e-44a0-93ae-9b77c8725441-kube-api-access-b6fxq" (OuterVolumeSpecName: "kube-api-access-b6fxq") pod "e03e8607-b33e-44a0-93ae-9b77c8725441" (UID: "e03e8607-b33e-44a0-93ae-9b77c8725441"). InnerVolumeSpecName "kube-api-access-b6fxq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 13 18:20:15 crc kubenswrapper[4720]: I1013 18:20:15.901590 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e03e8607-b33e-44a0-93ae-9b77c8725441-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e03e8607-b33e-44a0-93ae-9b77c8725441" (UID: "e03e8607-b33e-44a0-93ae-9b77c8725441"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 13 18:20:15 crc kubenswrapper[4720]: I1013 18:20:15.973918 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e03e8607-b33e-44a0-93ae-9b77c8725441-utilities\") on node \"crc\" DevicePath \"\""
Oct 13 18:20:15 crc kubenswrapper[4720]: I1013 18:20:15.974295 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6fxq\" (UniqueName: \"kubernetes.io/projected/e03e8607-b33e-44a0-93ae-9b77c8725441-kube-api-access-b6fxq\") on node \"crc\" DevicePath \"\""
Oct 13 18:20:15 crc kubenswrapper[4720]: I1013 18:20:15.974309 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e03e8607-b33e-44a0-93ae-9b77c8725441-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 13 18:20:16 crc kubenswrapper[4720]: I1013 18:20:16.175609 4720 generic.go:334] "Generic (PLEG): container finished" podID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" containerID="764c1c0ec6aa60325a7c805ec7141bde73c6fdd9d9c0a1e82b8d4193f1fa8d1d" exitCode=0
Oct 13 18:20:16 crc kubenswrapper[4720]: I1013 18:20:16.175730 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" event={"ID":"ce442c80-fcde-4b79-b6f9-f8f25771dfd4","Type":"ContainerDied","Data":"764c1c0ec6aa60325a7c805ec7141bde73c6fdd9d9c0a1e82b8d4193f1fa8d1d"}
Oct 13 18:20:16 crc kubenswrapper[4720]: I1013 18:20:16.175773 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" event={"ID":"ce442c80-fcde-4b79-b6f9-f8f25771dfd4","Type":"ContainerStarted","Data":"79ca5c733b5eaab4a6d8e4dab434c0461ff00748dd7441ce45a3602ece00a48e"}
Oct 13 18:20:16 crc kubenswrapper[4720]: I1013 18:20:16.175801 4720 scope.go:117] "RemoveContainer" containerID="8955f01d60828c6a158946e7dd0ce8b2360fc11ddc5d461465ae7048137b38b7"
Oct 13 18:20:16 crc kubenswrapper[4720]: I1013 18:20:16.180680 4720 generic.go:334] "Generic (PLEG): container finished" podID="e03e8607-b33e-44a0-93ae-9b77c8725441" containerID="3c4e1251e301b5aa3ebfe0ef2652f186367226d582390d20606500a00cdb361d" exitCode=0
Oct 13 18:20:16 crc kubenswrapper[4720]: I1013 18:20:16.180723 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mddst" event={"ID":"e03e8607-b33e-44a0-93ae-9b77c8725441","Type":"ContainerDied","Data":"3c4e1251e301b5aa3ebfe0ef2652f186367226d582390d20606500a00cdb361d"}
Oct 13 18:20:16 crc kubenswrapper[4720]: I1013 18:20:16.180771 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mddst" event={"ID":"e03e8607-b33e-44a0-93ae-9b77c8725441","Type":"ContainerDied","Data":"4a9139fc30a7c8953b84ca69f994adb0aa991ef0f587beaf88b8e52182021a24"}
Oct 13 18:20:16 crc kubenswrapper[4720]: I1013 18:20:16.180829 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mddst"
Oct 13 18:20:16 crc kubenswrapper[4720]: I1013 18:20:16.230569 4720 scope.go:117] "RemoveContainer" containerID="3c4e1251e301b5aa3ebfe0ef2652f186367226d582390d20606500a00cdb361d"
Oct 13 18:20:16 crc kubenswrapper[4720]: I1013 18:20:16.242818 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mddst"]
Oct 13 18:20:16 crc kubenswrapper[4720]: I1013 18:20:16.254021 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mddst"]
Oct 13 18:20:16 crc kubenswrapper[4720]: I1013 18:20:16.257656 4720 scope.go:117] "RemoveContainer" containerID="8238b372a498edf10234bbfe533c0167e1beba9c7e8c27810b7ab3c2c46d6f9b"
Oct 13 18:20:16 crc kubenswrapper[4720]: I1013 18:20:16.282155 4720 scope.go:117] "RemoveContainer" containerID="7a67422966d0040c377e7578c74ca7e09d7815dfabf8162b37905d9a2e6b776a"
Oct 13 18:20:16 crc kubenswrapper[4720]: I1013 18:20:16.373869 4720 scope.go:117] "RemoveContainer" containerID="3c4e1251e301b5aa3ebfe0ef2652f186367226d582390d20606500a00cdb361d"
Oct 13 18:20:16 crc kubenswrapper[4720]: E1013 18:20:16.374419 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c4e1251e301b5aa3ebfe0ef2652f186367226d582390d20606500a00cdb361d\": container with ID starting with 3c4e1251e301b5aa3ebfe0ef2652f186367226d582390d20606500a00cdb361d not found: ID does not exist" containerID="3c4e1251e301b5aa3ebfe0ef2652f186367226d582390d20606500a00cdb361d"
Oct 13 18:20:16 crc kubenswrapper[4720]: I1013 18:20:16.374467 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c4e1251e301b5aa3ebfe0ef2652f186367226d582390d20606500a00cdb361d"} err="failed to get container status \"3c4e1251e301b5aa3ebfe0ef2652f186367226d582390d20606500a00cdb361d\": rpc error: code = NotFound desc = could not find container \"3c4e1251e301b5aa3ebfe0ef2652f186367226d582390d20606500a00cdb361d\": container with ID starting with 3c4e1251e301b5aa3ebfe0ef2652f186367226d582390d20606500a00cdb361d not found: ID does not exist"
Oct 13 18:20:16 crc kubenswrapper[4720]: I1013 18:20:16.374499 4720 scope.go:117] "RemoveContainer" containerID="8238b372a498edf10234bbfe533c0167e1beba9c7e8c27810b7ab3c2c46d6f9b"
Oct 13 18:20:16 crc kubenswrapper[4720]: E1013 18:20:16.374927 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8238b372a498edf10234bbfe533c0167e1beba9c7e8c27810b7ab3c2c46d6f9b\": container with ID starting with 8238b372a498edf10234bbfe533c0167e1beba9c7e8c27810b7ab3c2c46d6f9b not found: ID does not exist" containerID="8238b372a498edf10234bbfe533c0167e1beba9c7e8c27810b7ab3c2c46d6f9b"
Oct 13 18:20:16 crc kubenswrapper[4720]: I1013 18:20:16.374968 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8238b372a498edf10234bbfe533c0167e1beba9c7e8c27810b7ab3c2c46d6f9b"} err="failed to get container status \"8238b372a498edf10234bbfe533c0167e1beba9c7e8c27810b7ab3c2c46d6f9b\": rpc error: code = NotFound desc = could not find container \"8238b372a498edf10234bbfe533c0167e1beba9c7e8c27810b7ab3c2c46d6f9b\": container with ID starting with 8238b372a498edf10234bbfe533c0167e1beba9c7e8c27810b7ab3c2c46d6f9b not found: ID does not exist"
Oct 13 18:20:16 crc kubenswrapper[4720]: I1013 18:20:16.374994 4720 scope.go:117] "RemoveContainer" containerID="7a67422966d0040c377e7578c74ca7e09d7815dfabf8162b37905d9a2e6b776a"
Oct 13 18:20:16 crc kubenswrapper[4720]: E1013 18:20:16.375456 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a67422966d0040c377e7578c74ca7e09d7815dfabf8162b37905d9a2e6b776a\": container with ID starting with 7a67422966d0040c377e7578c74ca7e09d7815dfabf8162b37905d9a2e6b776a not found: ID does not exist" containerID="7a67422966d0040c377e7578c74ca7e09d7815dfabf8162b37905d9a2e6b776a"
Oct 13 18:20:16 crc kubenswrapper[4720]: I1013 18:20:16.375495 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a67422966d0040c377e7578c74ca7e09d7815dfabf8162b37905d9a2e6b776a"} err="failed to get container status \"7a67422966d0040c377e7578c74ca7e09d7815dfabf8162b37905d9a2e6b776a\": rpc error: code = NotFound desc = could not find container \"7a67422966d0040c377e7578c74ca7e09d7815dfabf8162b37905d9a2e6b776a\": container with ID starting with 7a67422966d0040c377e7578c74ca7e09d7815dfabf8162b37905d9a2e6b776a not found: ID does not exist"
Oct 13 18:20:17 crc kubenswrapper[4720]: I1013 18:20:17.178648 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e03e8607-b33e-44a0-93ae-9b77c8725441" path="/var/lib/kubelet/pods/e03e8607-b33e-44a0-93ae-9b77c8725441/volumes"
Oct 13 18:20:32 crc kubenswrapper[4720]: I1013 18:20:32.840977 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lzgqn"]
Oct 13 18:20:32 crc kubenswrapper[4720]: E1013 18:20:32.842337 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e03e8607-b33e-44a0-93ae-9b77c8725441" containerName="extract-content"
Oct 13 18:20:32 crc kubenswrapper[4720]: I1013 18:20:32.842360 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="e03e8607-b33e-44a0-93ae-9b77c8725441" containerName="extract-content"
Oct 13 18:20:32 crc kubenswrapper[4720]: E1013 18:20:32.842407 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e03e8607-b33e-44a0-93ae-9b77c8725441" containerName="extract-utilities"
Oct 13 18:20:32 crc kubenswrapper[4720]: I1013 18:20:32.842436 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="e03e8607-b33e-44a0-93ae-9b77c8725441" containerName="extract-utilities"
Oct 13 18:20:32 crc kubenswrapper[4720]: E1013 18:20:32.842479 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e03e8607-b33e-44a0-93ae-9b77c8725441" containerName="registry-server"
Oct 13 18:20:32 crc kubenswrapper[4720]: I1013 18:20:32.842499 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="e03e8607-b33e-44a0-93ae-9b77c8725441" containerName="registry-server"
Oct 13 18:20:32 crc kubenswrapper[4720]: I1013 18:20:32.842897 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="e03e8607-b33e-44a0-93ae-9b77c8725441" containerName="registry-server"
Oct 13 18:20:32 crc kubenswrapper[4720]: I1013 18:20:32.845520 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lzgqn"
Oct 13 18:20:32 crc kubenswrapper[4720]: I1013 18:20:32.852707 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lzgqn"]
Oct 13 18:20:32 crc kubenswrapper[4720]: I1013 18:20:32.908745 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfj2h\" (UniqueName: \"kubernetes.io/projected/3ee5462b-9be3-49b6-ae7c-9bcf4b0a352d-kube-api-access-jfj2h\") pod \"community-operators-lzgqn\" (UID: \"3ee5462b-9be3-49b6-ae7c-9bcf4b0a352d\") " pod="openshift-marketplace/community-operators-lzgqn"
Oct 13 18:20:32 crc kubenswrapper[4720]: I1013 18:20:32.908844 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ee5462b-9be3-49b6-ae7c-9bcf4b0a352d-utilities\") pod \"community-operators-lzgqn\" (UID: \"3ee5462b-9be3-49b6-ae7c-9bcf4b0a352d\") " pod="openshift-marketplace/community-operators-lzgqn"
Oct 13 18:20:32 crc kubenswrapper[4720]: I1013 18:20:32.908977 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ee5462b-9be3-49b6-ae7c-9bcf4b0a352d-catalog-content\") pod \"community-operators-lzgqn\" (UID: \"3ee5462b-9be3-49b6-ae7c-9bcf4b0a352d\") " pod="openshift-marketplace/community-operators-lzgqn"
Oct 13 18:20:33 crc kubenswrapper[4720]: I1013 18:20:33.011042 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ee5462b-9be3-49b6-ae7c-9bcf4b0a352d-catalog-content\") pod \"community-operators-lzgqn\" (UID: \"3ee5462b-9be3-49b6-ae7c-9bcf4b0a352d\") " pod="openshift-marketplace/community-operators-lzgqn"
Oct 13 18:20:33 crc kubenswrapper[4720]: I1013 18:20:33.011537 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfj2h\" (UniqueName: \"kubernetes.io/projected/3ee5462b-9be3-49b6-ae7c-9bcf4b0a352d-kube-api-access-jfj2h\") pod \"community-operators-lzgqn\" (UID: \"3ee5462b-9be3-49b6-ae7c-9bcf4b0a352d\") " pod="openshift-marketplace/community-operators-lzgqn"
Oct 13 18:20:33 crc kubenswrapper[4720]: I1013 18:20:33.011575 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ee5462b-9be3-49b6-ae7c-9bcf4b0a352d-utilities\") pod \"community-operators-lzgqn\" (UID: \"3ee5462b-9be3-49b6-ae7c-9bcf4b0a352d\") " pod="openshift-marketplace/community-operators-lzgqn"
Oct 13 18:20:33 crc kubenswrapper[4720]: I1013 18:20:33.012097 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ee5462b-9be3-49b6-ae7c-9bcf4b0a352d-utilities\") pod \"community-operators-lzgqn\" (UID: \"3ee5462b-9be3-49b6-ae7c-9bcf4b0a352d\") " pod="openshift-marketplace/community-operators-lzgqn"
Oct 13 18:20:33 crc kubenswrapper[4720]: I1013 18:20:33.012100 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ee5462b-9be3-49b6-ae7c-9bcf4b0a352d-catalog-content\") pod \"community-operators-lzgqn\" (UID: \"3ee5462b-9be3-49b6-ae7c-9bcf4b0a352d\") " pod="openshift-marketplace/community-operators-lzgqn"
Oct 13 18:20:33 crc kubenswrapper[4720]: I1013 18:20:33.038877 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfj2h\" (UniqueName: \"kubernetes.io/projected/3ee5462b-9be3-49b6-ae7c-9bcf4b0a352d-kube-api-access-jfj2h\") pod \"community-operators-lzgqn\" (UID: \"3ee5462b-9be3-49b6-ae7c-9bcf4b0a352d\") " pod="openshift-marketplace/community-operators-lzgqn"
Oct 13 18:20:33 crc kubenswrapper[4720]: I1013 18:20:33.173958 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lzgqn"
Oct 13 18:20:33 crc kubenswrapper[4720]: I1013 18:20:33.722875 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lzgqn"]
Oct 13 18:20:34 crc kubenswrapper[4720]: I1013 18:20:34.404466 4720 generic.go:334] "Generic (PLEG): container finished" podID="3ee5462b-9be3-49b6-ae7c-9bcf4b0a352d" containerID="92171cd2a3aa20d6233a7e30e53df761aef1134bc9c4e5698ad2814db51626ed" exitCode=0
Oct 13 18:20:34 crc kubenswrapper[4720]: I1013 18:20:34.404539 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lzgqn" event={"ID":"3ee5462b-9be3-49b6-ae7c-9bcf4b0a352d","Type":"ContainerDied","Data":"92171cd2a3aa20d6233a7e30e53df761aef1134bc9c4e5698ad2814db51626ed"}
Oct 13 18:20:34 crc kubenswrapper[4720]: I1013 18:20:34.404729 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lzgqn" event={"ID":"3ee5462b-9be3-49b6-ae7c-9bcf4b0a352d","Type":"ContainerStarted","Data":"8d62ceb36bf87f29a3a595e051e2935bbccf50dcbe88869978d4386f8065f892"}
Oct 13 18:20:36 crc kubenswrapper[4720]: I1013 18:20:36.432333 4720 generic.go:334] "Generic (PLEG): container finished" podID="3ee5462b-9be3-49b6-ae7c-9bcf4b0a352d" containerID="613f578fe5238f0b7015762fb642c89ed8ff0fd6ea326b9deaed6e3bf9fa3504" exitCode=0
Oct 13 18:20:36 crc kubenswrapper[4720]: I1013 18:20:36.432388 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lzgqn" event={"ID":"3ee5462b-9be3-49b6-ae7c-9bcf4b0a352d","Type":"ContainerDied","Data":"613f578fe5238f0b7015762fb642c89ed8ff0fd6ea326b9deaed6e3bf9fa3504"}
Oct 13 18:20:37 crc kubenswrapper[4720]: I1013 18:20:37.447134 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lzgqn" event={"ID":"3ee5462b-9be3-49b6-ae7c-9bcf4b0a352d","Type":"ContainerStarted","Data":"4aeaf9655e3a285565149662892855bd0a8144fe5fac8e0378456f0d9dc492d9"}
Oct 13 18:20:37 crc kubenswrapper[4720]: I1013 18:20:37.467296 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lzgqn" podStartSLOduration=3.013477124 podStartE2EDuration="5.467277614s" podCreationTimestamp="2025-10-13 18:20:32 +0000 UTC" firstStartedPulling="2025-10-13 18:20:34.406804821 +0000 UTC m=+3379.864054953" lastFinishedPulling="2025-10-13 18:20:36.860605311 +0000 UTC m=+3382.317855443" observedRunningTime="2025-10-13 18:20:37.463175558 +0000 UTC m=+3382.920425780" watchObservedRunningTime="2025-10-13 18:20:37.467277614 +0000 UTC m=+3382.924527746"
Oct 13 18:20:43 crc kubenswrapper[4720]: I1013 18:20:43.189396 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-lzgqn"
Oct 13 18:20:43 crc kubenswrapper[4720]: I1013 18:20:43.190009 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lzgqn"
Oct 13 18:20:43 crc kubenswrapper[4720]: I1013 18:20:43.255023 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lzgqn"
Oct 13 18:20:43 crc kubenswrapper[4720]: I1013 18:20:43.575284 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lzgqn"
Oct 13 18:20:44 crc kubenswrapper[4720]: I1013 18:20:44.199011 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lzgqn"]
Oct 13 18:20:45 crc kubenswrapper[4720]: I1013 18:20:45.528596 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-lzgqn" podUID="3ee5462b-9be3-49b6-ae7c-9bcf4b0a352d" containerName="registry-server" containerID="cri-o://4aeaf9655e3a285565149662892855bd0a8144fe5fac8e0378456f0d9dc492d9" gracePeriod=2
Oct 13 18:20:46 crc kubenswrapper[4720]: I1013 18:20:46.097640 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lzgqn"
Oct 13 18:20:46 crc kubenswrapper[4720]: I1013 18:20:46.269853 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jfj2h\" (UniqueName: \"kubernetes.io/projected/3ee5462b-9be3-49b6-ae7c-9bcf4b0a352d-kube-api-access-jfj2h\") pod \"3ee5462b-9be3-49b6-ae7c-9bcf4b0a352d\" (UID: \"3ee5462b-9be3-49b6-ae7c-9bcf4b0a352d\") "
Oct 13 18:20:46 crc kubenswrapper[4720]: I1013 18:20:46.269929 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ee5462b-9be3-49b6-ae7c-9bcf4b0a352d-utilities\") pod \"3ee5462b-9be3-49b6-ae7c-9bcf4b0a352d\" (UID: \"3ee5462b-9be3-49b6-ae7c-9bcf4b0a352d\") "
Oct 13 18:20:46 crc kubenswrapper[4720]: I1013 18:20:46.270010 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ee5462b-9be3-49b6-ae7c-9bcf4b0a352d-catalog-content\") pod \"3ee5462b-9be3-49b6-ae7c-9bcf4b0a352d\" (UID: \"3ee5462b-9be3-49b6-ae7c-9bcf4b0a352d\") "
Oct 13 18:20:46 crc kubenswrapper[4720]: I1013 18:20:46.283553 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ee5462b-9be3-49b6-ae7c-9bcf4b0a352d-utilities" (OuterVolumeSpecName: "utilities") pod "3ee5462b-9be3-49b6-ae7c-9bcf4b0a352d" (UID: "3ee5462b-9be3-49b6-ae7c-9bcf4b0a352d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 13 18:20:46 crc kubenswrapper[4720]: I1013 18:20:46.287520 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ee5462b-9be3-49b6-ae7c-9bcf4b0a352d-kube-api-access-jfj2h" (OuterVolumeSpecName: "kube-api-access-jfj2h") pod "3ee5462b-9be3-49b6-ae7c-9bcf4b0a352d" (UID: "3ee5462b-9be3-49b6-ae7c-9bcf4b0a352d"). InnerVolumeSpecName "kube-api-access-jfj2h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 13 18:20:46 crc kubenswrapper[4720]: I1013 18:20:46.338415 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ee5462b-9be3-49b6-ae7c-9bcf4b0a352d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3ee5462b-9be3-49b6-ae7c-9bcf4b0a352d" (UID: "3ee5462b-9be3-49b6-ae7c-9bcf4b0a352d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 13 18:20:46 crc kubenswrapper[4720]: I1013 18:20:46.372478 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jfj2h\" (UniqueName: \"kubernetes.io/projected/3ee5462b-9be3-49b6-ae7c-9bcf4b0a352d-kube-api-access-jfj2h\") on node \"crc\" DevicePath \"\""
Oct 13 18:20:46 crc kubenswrapper[4720]: I1013 18:20:46.372504 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ee5462b-9be3-49b6-ae7c-9bcf4b0a352d-utilities\") on node \"crc\" DevicePath \"\""
Oct 13 18:20:46 crc kubenswrapper[4720]: I1013 18:20:46.372515 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ee5462b-9be3-49b6-ae7c-9bcf4b0a352d-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 13 18:20:46 crc kubenswrapper[4720]: I1013 18:20:46.539697 4720 generic.go:334] "Generic (PLEG): container finished" podID="3ee5462b-9be3-49b6-ae7c-9bcf4b0a352d" containerID="4aeaf9655e3a285565149662892855bd0a8144fe5fac8e0378456f0d9dc492d9" exitCode=0
Oct 13 18:20:46 crc kubenswrapper[4720]: I1013 18:20:46.539741 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lzgqn" event={"ID":"3ee5462b-9be3-49b6-ae7c-9bcf4b0a352d","Type":"ContainerDied","Data":"4aeaf9655e3a285565149662892855bd0a8144fe5fac8e0378456f0d9dc492d9"}
Oct 13 18:20:46 crc kubenswrapper[4720]: I1013 18:20:46.539772 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lzgqn" event={"ID":"3ee5462b-9be3-49b6-ae7c-9bcf4b0a352d","Type":"ContainerDied","Data":"8d62ceb36bf87f29a3a595e051e2935bbccf50dcbe88869978d4386f8065f892"}
Oct 13 18:20:46 crc kubenswrapper[4720]: I1013 18:20:46.539793 4720 scope.go:117] "RemoveContainer" containerID="4aeaf9655e3a285565149662892855bd0a8144fe5fac8e0378456f0d9dc492d9"
Oct 13 18:20:46 crc kubenswrapper[4720]: I1013 18:20:46.539799 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lzgqn"
Oct 13 18:20:46 crc kubenswrapper[4720]: I1013 18:20:46.564148 4720 scope.go:117] "RemoveContainer" containerID="613f578fe5238f0b7015762fb642c89ed8ff0fd6ea326b9deaed6e3bf9fa3504"
Oct 13 18:20:46 crc kubenswrapper[4720]: I1013 18:20:46.580440 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lzgqn"]
Oct 13 18:20:46 crc kubenswrapper[4720]: I1013 18:20:46.589616 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-lzgqn"]
Oct 13 18:20:46 crc kubenswrapper[4720]: I1013 18:20:46.603561 4720 scope.go:117] "RemoveContainer" containerID="92171cd2a3aa20d6233a7e30e53df761aef1134bc9c4e5698ad2814db51626ed"
Oct 13 18:20:46 crc kubenswrapper[4720]: I1013 18:20:46.655902 4720 scope.go:117] "RemoveContainer" containerID="4aeaf9655e3a285565149662892855bd0a8144fe5fac8e0378456f0d9dc492d9"
Oct 13 18:20:46 crc kubenswrapper[4720]: E1013 18:20:46.656434 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4aeaf9655e3a285565149662892855bd0a8144fe5fac8e0378456f0d9dc492d9\": container with ID starting with 4aeaf9655e3a285565149662892855bd0a8144fe5fac8e0378456f0d9dc492d9 not found: ID does not exist" containerID="4aeaf9655e3a285565149662892855bd0a8144fe5fac8e0378456f0d9dc492d9"
Oct 13 18:20:46 crc kubenswrapper[4720]: I1013 18:20:46.656483 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4aeaf9655e3a285565149662892855bd0a8144fe5fac8e0378456f0d9dc492d9"} err="failed to get container status \"4aeaf9655e3a285565149662892855bd0a8144fe5fac8e0378456f0d9dc492d9\": rpc error: code = NotFound desc = could not find container \"4aeaf9655e3a285565149662892855bd0a8144fe5fac8e0378456f0d9dc492d9\": container with ID starting with 4aeaf9655e3a285565149662892855bd0a8144fe5fac8e0378456f0d9dc492d9 not found: ID does not exist"
Oct 13 18:20:46 crc kubenswrapper[4720]: I1013 18:20:46.656512 4720 scope.go:117] "RemoveContainer" containerID="613f578fe5238f0b7015762fb642c89ed8ff0fd6ea326b9deaed6e3bf9fa3504"
Oct 13 18:20:46 crc kubenswrapper[4720]: E1013 18:20:46.656997 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"613f578fe5238f0b7015762fb642c89ed8ff0fd6ea326b9deaed6e3bf9fa3504\": container with ID starting with 613f578fe5238f0b7015762fb642c89ed8ff0fd6ea326b9deaed6e3bf9fa3504 not found: ID does not exist" containerID="613f578fe5238f0b7015762fb642c89ed8ff0fd6ea326b9deaed6e3bf9fa3504"
Oct 13 18:20:46 crc kubenswrapper[4720]: I1013 18:20:46.657029 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"613f578fe5238f0b7015762fb642c89ed8ff0fd6ea326b9deaed6e3bf9fa3504"} err="failed to get container status \"613f578fe5238f0b7015762fb642c89ed8ff0fd6ea326b9deaed6e3bf9fa3504\": rpc error: code = NotFound desc = could not find container \"613f578fe5238f0b7015762fb642c89ed8ff0fd6ea326b9deaed6e3bf9fa3504\": container with ID starting with 613f578fe5238f0b7015762fb642c89ed8ff0fd6ea326b9deaed6e3bf9fa3504 not found: ID does not exist"
Oct 13 18:20:46 crc kubenswrapper[4720]: I1013 18:20:46.657050 4720 scope.go:117] "RemoveContainer" containerID="92171cd2a3aa20d6233a7e30e53df761aef1134bc9c4e5698ad2814db51626ed"
Oct 13 18:20:46 crc kubenswrapper[4720]: E1013 18:20:46.657384 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92171cd2a3aa20d6233a7e30e53df761aef1134bc9c4e5698ad2814db51626ed\": container with ID starting with 92171cd2a3aa20d6233a7e30e53df761aef1134bc9c4e5698ad2814db51626ed not found: ID does not exist" containerID="92171cd2a3aa20d6233a7e30e53df761aef1134bc9c4e5698ad2814db51626ed"
Oct 13 18:20:46 crc kubenswrapper[4720]: I1013 18:20:46.657410 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92171cd2a3aa20d6233a7e30e53df761aef1134bc9c4e5698ad2814db51626ed"} err="failed to get container status \"92171cd2a3aa20d6233a7e30e53df761aef1134bc9c4e5698ad2814db51626ed\": rpc error: code = NotFound desc = could not find container \"92171cd2a3aa20d6233a7e30e53df761aef1134bc9c4e5698ad2814db51626ed\": container with ID starting with 92171cd2a3aa20d6233a7e30e53df761aef1134bc9c4e5698ad2814db51626ed not found: ID does not exist"
Oct 13 18:20:47 crc kubenswrapper[4720]: I1013 18:20:47.180011 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ee5462b-9be3-49b6-ae7c-9bcf4b0a352d" path="/var/lib/kubelet/pods/3ee5462b-9be3-49b6-ae7c-9bcf4b0a352d/volumes"
Oct 13 18:22:15 crc kubenswrapper[4720]: I1013 18:22:15.213088 4720 patch_prober.go:28] interesting pod/machine-config-daemon-htwnl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 13 18:22:15 crc kubenswrapper[4720]: I1013 18:22:15.213790 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 13 18:22:29 crc kubenswrapper[4720]: I1013 18:22:29.715174 4720 generic.go:334] "Generic (PLEG): container finished" podID="ece01f62-fd6d-4c42-9c9a-3bc25feed3cb" containerID="df57570d0bcf4bd175220efc3a65904a5ad2d5d37896900f423d316850240a6a" exitCode=0
Oct 13 18:22:29 crc kubenswrapper[4720]: I1013 18:22:29.715285 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"ece01f62-fd6d-4c42-9c9a-3bc25feed3cb","Type":"ContainerDied","Data":"df57570d0bcf4bd175220efc3a65904a5ad2d5d37896900f423d316850240a6a"}
Oct 13 18:22:31 crc kubenswrapper[4720]: I1013 18:22:31.206222 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Oct 13 18:22:31 crc kubenswrapper[4720]: I1013 18:22:31.350600 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/ece01f62-fd6d-4c42-9c9a-3bc25feed3cb-test-operator-ephemeral-temporary\") pod \"ece01f62-fd6d-4c42-9c9a-3bc25feed3cb\" (UID: \"ece01f62-fd6d-4c42-9c9a-3bc25feed3cb\") "
Oct 13 18:22:31 crc kubenswrapper[4720]: I1013 18:22:31.350659 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/ece01f62-fd6d-4c42-9c9a-3bc25feed3cb-test-operator-ephemeral-workdir\") pod \"ece01f62-fd6d-4c42-9c9a-3bc25feed3cb\" (UID: \"ece01f62-fd6d-4c42-9c9a-3bc25feed3cb\") "
Oct 13 18:22:31 crc kubenswrapper[4720]: I1013 18:22:31.350722 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhfvz\" (UniqueName: \"kubernetes.io/projected/ece01f62-fd6d-4c42-9c9a-3bc25feed3cb-kube-api-access-jhfvz\") pod \"ece01f62-fd6d-4c42-9c9a-3bc25feed3cb\" (UID: \"ece01f62-fd6d-4c42-9c9a-3bc25feed3cb\") "
Oct 13 18:22:31 crc kubenswrapper[4720]: I1013 18:22:31.350790 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ece01f62-fd6d-4c42-9c9a-3bc25feed3cb\" (UID: \"ece01f62-fd6d-4c42-9c9a-3bc25feed3cb\") "
Oct 13 18:22:31 crc kubenswrapper[4720]: I1013 18:22:31.350819 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ece01f62-fd6d-4c42-9c9a-3bc25feed3cb-openstack-config-secret\") pod \"ece01f62-fd6d-4c42-9c9a-3bc25feed3cb\" (UID: \"ece01f62-fd6d-4c42-9c9a-3bc25feed3cb\") "
Oct 13 18:22:31 crc kubenswrapper[4720]: I1013 18:22:31.350873 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ece01f62-fd6d-4c42-9c9a-3bc25feed3cb-openstack-config\") pod \"ece01f62-fd6d-4c42-9c9a-3bc25feed3cb\" (UID: \"ece01f62-fd6d-4c42-9c9a-3bc25feed3cb\") "
Oct 13 18:22:31 crc kubenswrapper[4720]: I1013 18:22:31.350931 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ece01f62-fd6d-4c42-9c9a-3bc25feed3cb-ssh-key\") pod \"ece01f62-fd6d-4c42-9c9a-3bc25feed3cb\" (UID: \"ece01f62-fd6d-4c42-9c9a-3bc25feed3cb\") "
Oct 13 18:22:31 crc kubenswrapper[4720]: I1013 18:22:31.350962 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ece01f62-fd6d-4c42-9c9a-3bc25feed3cb-config-data\") pod \"ece01f62-fd6d-4c42-9c9a-3bc25feed3cb\" (UID: \"ece01f62-fd6d-4c42-9c9a-3bc25feed3cb\") "
Oct 13 18:22:31 crc kubenswrapper[4720]: I1013 18:22:31.350986 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/ece01f62-fd6d-4c42-9c9a-3bc25feed3cb-ca-certs\") pod \"ece01f62-fd6d-4c42-9c9a-3bc25feed3cb\" (UID: \"ece01f62-fd6d-4c42-9c9a-3bc25feed3cb\") "
Oct 13 18:22:31 crc kubenswrapper[4720]: I1013 18:22:31.352576 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ece01f62-fd6d-4c42-9c9a-3bc25feed3cb-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "ece01f62-fd6d-4c42-9c9a-3bc25feed3cb" (UID: "ece01f62-fd6d-4c42-9c9a-3bc25feed3cb"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 13 18:22:31 crc kubenswrapper[4720]: I1013 18:22:31.357199 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ece01f62-fd6d-4c42-9c9a-3bc25feed3cb-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "ece01f62-fd6d-4c42-9c9a-3bc25feed3cb" (UID: "ece01f62-fd6d-4c42-9c9a-3bc25feed3cb"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 13 18:22:31 crc kubenswrapper[4720]: I1013 18:22:31.389238 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ece01f62-fd6d-4c42-9c9a-3bc25feed3cb-config-data" (OuterVolumeSpecName: "config-data") pod "ece01f62-fd6d-4c42-9c9a-3bc25feed3cb" (UID: "ece01f62-fd6d-4c42-9c9a-3bc25feed3cb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 13 18:22:31 crc kubenswrapper[4720]: I1013 18:22:31.389413 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "test-operator-logs") pod "ece01f62-fd6d-4c42-9c9a-3bc25feed3cb" (UID: "ece01f62-fd6d-4c42-9c9a-3bc25feed3cb"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Oct 13 18:22:31 crc kubenswrapper[4720]: I1013 18:22:31.389519 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ece01f62-fd6d-4c42-9c9a-3bc25feed3cb-kube-api-access-jhfvz" (OuterVolumeSpecName: "kube-api-access-jhfvz") pod "ece01f62-fd6d-4c42-9c9a-3bc25feed3cb" (UID: "ece01f62-fd6d-4c42-9c9a-3bc25feed3cb"). InnerVolumeSpecName "kube-api-access-jhfvz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 13 18:22:31 crc kubenswrapper[4720]: I1013 18:22:31.403363 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ece01f62-fd6d-4c42-9c9a-3bc25feed3cb-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "ece01f62-fd6d-4c42-9c9a-3bc25feed3cb" (UID: "ece01f62-fd6d-4c42-9c9a-3bc25feed3cb"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 18:22:31 crc kubenswrapper[4720]: I1013 18:22:31.428437 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ece01f62-fd6d-4c42-9c9a-3bc25feed3cb-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "ece01f62-fd6d-4c42-9c9a-3bc25feed3cb" (UID: "ece01f62-fd6d-4c42-9c9a-3bc25feed3cb"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 18:22:31 crc kubenswrapper[4720]: I1013 18:22:31.432519 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ece01f62-fd6d-4c42-9c9a-3bc25feed3cb-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "ece01f62-fd6d-4c42-9c9a-3bc25feed3cb" (UID: "ece01f62-fd6d-4c42-9c9a-3bc25feed3cb"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 13 18:22:31 crc kubenswrapper[4720]: I1013 18:22:31.454717 4720 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/ece01f62-fd6d-4c42-9c9a-3bc25feed3cb-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\""
Oct 13 18:22:31 crc kubenswrapper[4720]: I1013 18:22:31.454761 4720 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/ece01f62-fd6d-4c42-9c9a-3bc25feed3cb-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\""
Oct 13 18:22:31 crc kubenswrapper[4720]: I1013 18:22:31.454777 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhfvz\" (UniqueName: \"kubernetes.io/projected/ece01f62-fd6d-4c42-9c9a-3bc25feed3cb-kube-api-access-jhfvz\") on node \"crc\" DevicePath \"\""
Oct 13 18:22:31 crc kubenswrapper[4720]: I1013 18:22:31.454821 4720 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" "
Oct 13 18:22:31 crc kubenswrapper[4720]: I1013 18:22:31.454839 4720 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ece01f62-fd6d-4c42-9c9a-3bc25feed3cb-openstack-config-secret\") on node \"crc\" DevicePath \"\""
Oct 13 18:22:31 crc kubenswrapper[4720]: I1013 18:22:31.454851 4720 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ece01f62-fd6d-4c42-9c9a-3bc25feed3cb-ssh-key\") on node \"crc\" DevicePath \"\""
Oct 13 18:22:31 crc kubenswrapper[4720]: I1013 18:22:31.454865 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ece01f62-fd6d-4c42-9c9a-3bc25feed3cb-config-data\") on node \"crc\" DevicePath \"\""
Oct 13 18:22:31 crc kubenswrapper[4720]: I1013 18:22:31.454878 4720 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/ece01f62-fd6d-4c42-9c9a-3bc25feed3cb-ca-certs\") on node \"crc\" DevicePath \"\""
Oct 13 18:22:31 crc kubenswrapper[4720]: I1013 18:22:31.458058 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ece01f62-fd6d-4c42-9c9a-3bc25feed3cb-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "ece01f62-fd6d-4c42-9c9a-3bc25feed3cb" (UID: "ece01f62-fd6d-4c42-9c9a-3bc25feed3cb"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 13 18:22:31 crc kubenswrapper[4720]: I1013 18:22:31.486273 4720 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc"
Oct 13 18:22:31 crc kubenswrapper[4720]: I1013 18:22:31.557148 4720 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\""
Oct 13 18:22:31 crc kubenswrapper[4720]: I1013 18:22:31.557228 4720 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ece01f62-fd6d-4c42-9c9a-3bc25feed3cb-openstack-config\") on node \"crc\" DevicePath \"\""
Oct 13 18:22:31 crc kubenswrapper[4720]: I1013 18:22:31.747729 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"ece01f62-fd6d-4c42-9c9a-3bc25feed3cb","Type":"ContainerDied","Data":"132075cdd19a221aa1736d168c3ff9eb35efd95473170c6892978258e1216b55"}
Oct 13 18:22:31 crc kubenswrapper[4720]: I1013 18:22:31.748276 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="132075cdd19a221aa1736d168c3ff9eb35efd95473170c6892978258e1216b55"
Oct 13 18:22:31 crc kubenswrapper[4720]: I1013 18:22:31.747891 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Oct 13 18:22:35 crc kubenswrapper[4720]: I1013 18:22:35.221775 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"]
Oct 13 18:22:35 crc kubenswrapper[4720]: E1013 18:22:35.223172 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ece01f62-fd6d-4c42-9c9a-3bc25feed3cb" containerName="tempest-tests-tempest-tests-runner"
Oct 13 18:22:35 crc kubenswrapper[4720]: I1013 18:22:35.223223 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="ece01f62-fd6d-4c42-9c9a-3bc25feed3cb" containerName="tempest-tests-tempest-tests-runner"
Oct 13 18:22:35 crc kubenswrapper[4720]: E1013 18:22:35.223246 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ee5462b-9be3-49b6-ae7c-9bcf4b0a352d" containerName="extract-content"
Oct 13 18:22:35 crc kubenswrapper[4720]: I1013 18:22:35.223259 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ee5462b-9be3-49b6-ae7c-9bcf4b0a352d" containerName="extract-content"
Oct 13 18:22:35 crc kubenswrapper[4720]: E1013 18:22:35.223295 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ee5462b-9be3-49b6-ae7c-9bcf4b0a352d" containerName="extract-utilities"
Oct 13 18:22:35 crc kubenswrapper[4720]: I1013 18:22:35.223308 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ee5462b-9be3-49b6-ae7c-9bcf4b0a352d" containerName="extract-utilities"
Oct 13 18:22:35 crc kubenswrapper[4720]: E1013 18:22:35.223331 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ee5462b-9be3-49b6-ae7c-9bcf4b0a352d" containerName="registry-server"
Oct 13 18:22:35 crc kubenswrapper[4720]: I1013 18:22:35.223343 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ee5462b-9be3-49b6-ae7c-9bcf4b0a352d" containerName="registry-server"
Oct 13 18:22:35 crc kubenswrapper[4720]: I1013 18:22:35.223722 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ee5462b-9be3-49b6-ae7c-9bcf4b0a352d" containerName="registry-server"
Oct 13 18:22:35 crc kubenswrapper[4720]: I1013 18:22:35.223796 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="ece01f62-fd6d-4c42-9c9a-3bc25feed3cb" containerName="tempest-tests-tempest-tests-runner"
Oct 13 18:22:35 crc kubenswrapper[4720]: I1013 18:22:35.224887 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Oct 13 18:22:35 crc kubenswrapper[4720]: I1013 18:22:35.232129 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"]
Oct 13 18:22:35 crc kubenswrapper[4720]: I1013 18:22:35.258994 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-jr545"
Oct 13 18:22:35 crc kubenswrapper[4720]: I1013 18:22:35.357754 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"82b017de-6691-4cf4-941f-9e0334669ced\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Oct 13 18:22:35 crc kubenswrapper[4720]: I1013 18:22:35.357825 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvttr\" (UniqueName: \"kubernetes.io/projected/82b017de-6691-4cf4-941f-9e0334669ced-kube-api-access-mvttr\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"82b017de-6691-4cf4-941f-9e0334669ced\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Oct 13 18:22:35 crc kubenswrapper[4720]: I1013 18:22:35.460849 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"82b017de-6691-4cf4-941f-9e0334669ced\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Oct 13 18:22:35 crc kubenswrapper[4720]: I1013 18:22:35.460932 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvttr\" (UniqueName: \"kubernetes.io/projected/82b017de-6691-4cf4-941f-9e0334669ced-kube-api-access-mvttr\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"82b017de-6691-4cf4-941f-9e0334669ced\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Oct 13 18:22:35 crc kubenswrapper[4720]: I1013 18:22:35.462016 4720 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"82b017de-6691-4cf4-941f-9e0334669ced\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Oct 13 18:22:35 crc kubenswrapper[4720]: I1013 18:22:35.495594 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvttr\" (UniqueName: \"kubernetes.io/projected/82b017de-6691-4cf4-941f-9e0334669ced-kube-api-access-mvttr\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"82b017de-6691-4cf4-941f-9e0334669ced\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Oct 13 18:22:35 crc kubenswrapper[4720]: I1013 18:22:35.510250 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"82b017de-6691-4cf4-941f-9e0334669ced\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Oct 13 18:22:35 crc kubenswrapper[4720]: I1013 18:22:35.588170 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Oct 13 18:22:36 crc kubenswrapper[4720]: I1013 18:22:36.117665 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"]
Oct 13 18:22:36 crc kubenswrapper[4720]: I1013 18:22:36.807017 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"82b017de-6691-4cf4-941f-9e0334669ced","Type":"ContainerStarted","Data":"b164cb6993c4aadd70764bff5bf174338c19b4a00cbc858c4a44aef30853195f"}
Oct 13 18:22:37 crc kubenswrapper[4720]: I1013 18:22:37.819839 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"82b017de-6691-4cf4-941f-9e0334669ced","Type":"ContainerStarted","Data":"3b86f7c5e62e6b3301ef2d73796abb6253c2ad88618f5461abbf05c95cecf323"}
Oct 13 18:22:37 crc kubenswrapper[4720]: I1013 18:22:37.853851 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=1.985486539 podStartE2EDuration="2.853821602s" podCreationTimestamp="2025-10-13 18:22:35 +0000 UTC" firstStartedPulling="2025-10-13 18:22:36.131997769 +0000 UTC m=+3501.589247931" lastFinishedPulling="2025-10-13 18:22:37.000332832 +0000 UTC m=+3502.457582994" observedRunningTime="2025-10-13 18:22:37.844928372 +0000 UTC m=+3503.302178554" watchObservedRunningTime="2025-10-13 18:22:37.853821602 +0000 UTC m=+3503.311071764"
Oct 13 18:22:45 crc kubenswrapper[4720]: I1013 18:22:45.212733 4720 patch_prober.go:28] interesting pod/machine-config-daemon-htwnl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 13 18:22:45 crc kubenswrapper[4720]: I1013 18:22:45.214349 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 13 18:22:54 crc kubenswrapper[4720]: I1013 18:22:54.703091 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-6mndj/must-gather-j95b9"]
Oct 13 18:22:54 crc kubenswrapper[4720]: I1013 18:22:54.708226 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6mndj/must-gather-j95b9"
Oct 13 18:22:54 crc kubenswrapper[4720]: I1013 18:22:54.711866 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-6mndj/must-gather-j95b9"]
Oct 13 18:22:54 crc kubenswrapper[4720]: I1013 18:22:54.712542 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-6mndj"/"openshift-service-ca.crt"
Oct 13 18:22:54 crc kubenswrapper[4720]: I1013 18:22:54.713123 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-6mndj"/"kube-root-ca.crt"
Oct 13 18:22:54 crc kubenswrapper[4720]: I1013 18:22:54.713395 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-6mndj"/"default-dockercfg-dwqhq"
Oct 13 18:22:54 crc kubenswrapper[4720]: I1013 18:22:54.787073 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bee557f2-d5ec-4166-b78b-ab8b71c413b0-must-gather-output\") pod \"must-gather-j95b9\" (UID: \"bee557f2-d5ec-4166-b78b-ab8b71c413b0\") " pod="openshift-must-gather-6mndj/must-gather-j95b9"
Oct 13 18:22:54 crc kubenswrapper[4720]: I1013 18:22:54.787146 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k67p2\" (UniqueName: \"kubernetes.io/projected/bee557f2-d5ec-4166-b78b-ab8b71c413b0-kube-api-access-k67p2\") pod \"must-gather-j95b9\" (UID: \"bee557f2-d5ec-4166-b78b-ab8b71c413b0\") " pod="openshift-must-gather-6mndj/must-gather-j95b9"
Oct 13 18:22:54 crc kubenswrapper[4720]: I1013 18:22:54.889139 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bee557f2-d5ec-4166-b78b-ab8b71c413b0-must-gather-output\") pod \"must-gather-j95b9\" (UID: \"bee557f2-d5ec-4166-b78b-ab8b71c413b0\") " pod="openshift-must-gather-6mndj/must-gather-j95b9"
Oct 13 18:22:54 crc kubenswrapper[4720]: I1013 18:22:54.889248 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k67p2\" (UniqueName: \"kubernetes.io/projected/bee557f2-d5ec-4166-b78b-ab8b71c413b0-kube-api-access-k67p2\") pod \"must-gather-j95b9\" (UID: \"bee557f2-d5ec-4166-b78b-ab8b71c413b0\") " pod="openshift-must-gather-6mndj/must-gather-j95b9"
Oct 13 18:22:54 crc kubenswrapper[4720]: I1013 18:22:54.889782 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bee557f2-d5ec-4166-b78b-ab8b71c413b0-must-gather-output\") pod \"must-gather-j95b9\" (UID: \"bee557f2-d5ec-4166-b78b-ab8b71c413b0\") " pod="openshift-must-gather-6mndj/must-gather-j95b9"
Oct 13 18:22:54 crc kubenswrapper[4720]: I1013 18:22:54.912049 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k67p2\" (UniqueName: \"kubernetes.io/projected/bee557f2-d5ec-4166-b78b-ab8b71c413b0-kube-api-access-k67p2\") pod \"must-gather-j95b9\" (UID: \"bee557f2-d5ec-4166-b78b-ab8b71c413b0\") " pod="openshift-must-gather-6mndj/must-gather-j95b9"
Oct 13 18:22:55 crc kubenswrapper[4720]: I1013 18:22:55.026914 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6mndj/must-gather-j95b9"
Oct 13 18:22:55 crc kubenswrapper[4720]: I1013 18:22:55.372121 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-6mndj/must-gather-j95b9"]
Oct 13 18:22:56 crc kubenswrapper[4720]: I1013 18:22:56.033854 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6mndj/must-gather-j95b9" event={"ID":"bee557f2-d5ec-4166-b78b-ab8b71c413b0","Type":"ContainerStarted","Data":"5b001a8c5b1f028b0026dd2e79d8a71f53af47d9e0fcf78cb1923eb74ca754dd"}
Oct 13 18:23:00 crc kubenswrapper[4720]: I1013 18:23:00.078068 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6mndj/must-gather-j95b9" event={"ID":"bee557f2-d5ec-4166-b78b-ab8b71c413b0","Type":"ContainerStarted","Data":"54f0dfe7f7e5ea87d39379885aa932eba4266594432b4f7fa9bcf29d05b4c2a3"}
Oct 13 18:23:00 crc kubenswrapper[4720]: I1013 18:23:00.078569 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6mndj/must-gather-j95b9" event={"ID":"bee557f2-d5ec-4166-b78b-ab8b71c413b0","Type":"ContainerStarted","Data":"376529539af074bee8515187f674d0998b19aa0f9912f033f3e24eee029da047"}
Oct 13 18:23:00 crc kubenswrapper[4720]: I1013 18:23:00.110310 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-6mndj/must-gather-j95b9" podStartSLOduration=2.133983022 podStartE2EDuration="6.110291118s" podCreationTimestamp="2025-10-13 18:22:54 +0000 UTC" firstStartedPulling="2025-10-13 18:22:55.394846552 +0000 UTC m=+3520.852096684" lastFinishedPulling="2025-10-13 18:22:59.371154608 +0000 UTC m=+3524.828404780" observedRunningTime="2025-10-13 18:23:00.106659794 +0000 UTC m=+3525.563909926" watchObservedRunningTime="2025-10-13 18:23:00.110291118 +0000 UTC m=+3525.567541260"
Oct 13 18:23:03 crc kubenswrapper[4720]: I1013 18:23:03.227290 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-6mndj/crc-debug-cjwdk"]
Oct 13 18:23:03 crc kubenswrapper[4720]: I1013 18:23:03.229732 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6mndj/crc-debug-cjwdk"
Oct 13 18:23:03 crc kubenswrapper[4720]: I1013 18:23:03.344732 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/77788e20-57ef-4177-88d0-ba73282fb72c-host\") pod \"crc-debug-cjwdk\" (UID: \"77788e20-57ef-4177-88d0-ba73282fb72c\") " pod="openshift-must-gather-6mndj/crc-debug-cjwdk"
Oct 13 18:23:03 crc kubenswrapper[4720]: I1013 18:23:03.344872 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ln84c\" (UniqueName: \"kubernetes.io/projected/77788e20-57ef-4177-88d0-ba73282fb72c-kube-api-access-ln84c\") pod \"crc-debug-cjwdk\" (UID: \"77788e20-57ef-4177-88d0-ba73282fb72c\") " pod="openshift-must-gather-6mndj/crc-debug-cjwdk"
Oct 13 18:23:03 crc kubenswrapper[4720]: I1013 18:23:03.446676 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/77788e20-57ef-4177-88d0-ba73282fb72c-host\") pod \"crc-debug-cjwdk\" (UID: \"77788e20-57ef-4177-88d0-ba73282fb72c\") " pod="openshift-must-gather-6mndj/crc-debug-cjwdk"
Oct 13 18:23:03 crc kubenswrapper[4720]: I1013 18:23:03.446842 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/77788e20-57ef-4177-88d0-ba73282fb72c-host\") pod \"crc-debug-cjwdk\" (UID: \"77788e20-57ef-4177-88d0-ba73282fb72c\") " pod="openshift-must-gather-6mndj/crc-debug-cjwdk"
Oct 13 18:23:03 crc kubenswrapper[4720]: I1013 18:23:03.446950 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ln84c\" (UniqueName: \"kubernetes.io/projected/77788e20-57ef-4177-88d0-ba73282fb72c-kube-api-access-ln84c\") pod \"crc-debug-cjwdk\" (UID: \"77788e20-57ef-4177-88d0-ba73282fb72c\") " pod="openshift-must-gather-6mndj/crc-debug-cjwdk"
Oct 13 18:23:03 crc kubenswrapper[4720]: I1013 18:23:03.472632 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ln84c\" (UniqueName: \"kubernetes.io/projected/77788e20-57ef-4177-88d0-ba73282fb72c-kube-api-access-ln84c\") pod \"crc-debug-cjwdk\" (UID: \"77788e20-57ef-4177-88d0-ba73282fb72c\") " pod="openshift-must-gather-6mndj/crc-debug-cjwdk"
Oct 13 18:23:03 crc kubenswrapper[4720]: I1013 18:23:03.554225 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6mndj/crc-debug-cjwdk"
Oct 13 18:23:03 crc kubenswrapper[4720]: W1013 18:23:03.591799 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod77788e20_57ef_4177_88d0_ba73282fb72c.slice/crio-af6ccb61db56bcfd7a4ffd7eb279080cea53d332e96500ec3b555369fd108582 WatchSource:0}: Error finding container af6ccb61db56bcfd7a4ffd7eb279080cea53d332e96500ec3b555369fd108582: Status 404 returned error can't find the container with id af6ccb61db56bcfd7a4ffd7eb279080cea53d332e96500ec3b555369fd108582
Oct 13 18:23:04 crc kubenswrapper[4720]: I1013 18:23:04.145926 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6mndj/crc-debug-cjwdk" event={"ID":"77788e20-57ef-4177-88d0-ba73282fb72c","Type":"ContainerStarted","Data":"af6ccb61db56bcfd7a4ffd7eb279080cea53d332e96500ec3b555369fd108582"}
Oct 13 18:23:15 crc kubenswrapper[4720]: I1013 18:23:15.216925 4720 patch_prober.go:28] interesting pod/machine-config-daemon-htwnl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 13 18:23:15 crc kubenswrapper[4720]: I1013 18:23:15.217514 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 13 18:23:15 crc kubenswrapper[4720]: I1013 18:23:15.217551 4720 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-htwnl"
Oct 13 18:23:15 crc kubenswrapper[4720]: I1013 18:23:15.218173 4720 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"79ca5c733b5eaab4a6d8e4dab434c0461ff00748dd7441ce45a3602ece00a48e"} pod="openshift-machine-config-operator/machine-config-daemon-htwnl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Oct 13 18:23:15 crc kubenswrapper[4720]: I1013 18:23:15.218243 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" containerName="machine-config-daemon" containerID="cri-o://79ca5c733b5eaab4a6d8e4dab434c0461ff00748dd7441ce45a3602ece00a48e" gracePeriod=600
Oct 13 18:23:15 crc kubenswrapper[4720]: I1013 18:23:15.260169 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6mndj/crc-debug-cjwdk" event={"ID":"77788e20-57ef-4177-88d0-ba73282fb72c","Type":"ContainerStarted","Data":"527cfb4423ed1554b6b0b6829d36b20153feb18c653c9569be9626a1edb24ca8"}
Oct 13 18:23:15 crc kubenswrapper[4720]: I1013 18:23:15.281283 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-6mndj/crc-debug-cjwdk" podStartSLOduration=1.385431116 podStartE2EDuration="12.281235355s" podCreationTimestamp="2025-10-13 18:23:03 +0000 UTC" firstStartedPulling="2025-10-13 18:23:03.594111005 +0000 UTC m=+3529.051361137" lastFinishedPulling="2025-10-13 18:23:14.489915244 +0000 UTC m=+3539.947165376" observedRunningTime="2025-10-13 18:23:15.276620996 +0000 UTC m=+3540.733871138" watchObservedRunningTime="2025-10-13 18:23:15.281235355 +0000 UTC m=+3540.738485487"
Oct 13 18:23:15 crc kubenswrapper[4720]: E1013 18:23:15.354307 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4"
Oct 13 18:23:16 crc kubenswrapper[4720]: I1013 18:23:16.272126 4720 generic.go:334] "Generic (PLEG): container finished" podID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" containerID="79ca5c733b5eaab4a6d8e4dab434c0461ff00748dd7441ce45a3602ece00a48e" exitCode=0
Oct 13 18:23:16 crc kubenswrapper[4720]: I1013 18:23:16.272180 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" event={"ID":"ce442c80-fcde-4b79-b6f9-f8f25771dfd4","Type":"ContainerDied","Data":"79ca5c733b5eaab4a6d8e4dab434c0461ff00748dd7441ce45a3602ece00a48e"}
Oct 13 18:23:16 crc kubenswrapper[4720]: I1013 18:23:16.273284 4720 scope.go:117] "RemoveContainer" containerID="764c1c0ec6aa60325a7c805ec7141bde73c6fdd9d9c0a1e82b8d4193f1fa8d1d"
Oct 13 18:23:16 crc kubenswrapper[4720]: I1013 18:23:16.274016 4720 scope.go:117] "RemoveContainer" containerID="79ca5c733b5eaab4a6d8e4dab434c0461ff00748dd7441ce45a3602ece00a48e"
Oct 13 18:23:16 crc kubenswrapper[4720]: E1013 18:23:16.274300 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4"
Oct 13 18:23:25 crc kubenswrapper[4720]: I1013 18:23:25.113734 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mft5s"]
Oct 13 18:23:25 crc kubenswrapper[4720]: I1013 18:23:25.116457 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mft5s"
Oct 13 18:23:25 crc kubenswrapper[4720]: I1013 18:23:25.124618 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mft5s"]
Oct 13 18:23:25 crc kubenswrapper[4720]: I1013 18:23:25.261229 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bba1c75-d34a-4beb-8f09-db8bdb533d73-catalog-content\") pod \"certified-operators-mft5s\" (UID: \"1bba1c75-d34a-4beb-8f09-db8bdb533d73\") " pod="openshift-marketplace/certified-operators-mft5s"
Oct 13 18:23:25 crc kubenswrapper[4720]: I1013 18:23:25.261340 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqkk4\" (UniqueName: \"kubernetes.io/projected/1bba1c75-d34a-4beb-8f09-db8bdb533d73-kube-api-access-rqkk4\") pod \"certified-operators-mft5s\" (UID: \"1bba1c75-d34a-4beb-8f09-db8bdb533d73\") " pod="openshift-marketplace/certified-operators-mft5s"
Oct 13 18:23:25 crc kubenswrapper[4720]: I1013 18:23:25.261372 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bba1c75-d34a-4beb-8f09-db8bdb533d73-utilities\") pod \"certified-operators-mft5s\" (UID: \"1bba1c75-d34a-4beb-8f09-db8bdb533d73\") " pod="openshift-marketplace/certified-operators-mft5s"
Oct 13 18:23:25 crc kubenswrapper[4720]: I1013 18:23:25.363952 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bba1c75-d34a-4beb-8f09-db8bdb533d73-catalog-content\") pod \"certified-operators-mft5s\" (UID: \"1bba1c75-d34a-4beb-8f09-db8bdb533d73\") " pod="openshift-marketplace/certified-operators-mft5s"
Oct 13 18:23:25 crc kubenswrapper[4720]: I1013 18:23:25.364076 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqkk4\" (UniqueName: \"kubernetes.io/projected/1bba1c75-d34a-4beb-8f09-db8bdb533d73-kube-api-access-rqkk4\") pod \"certified-operators-mft5s\" (UID: \"1bba1c75-d34a-4beb-8f09-db8bdb533d73\") " pod="openshift-marketplace/certified-operators-mft5s"
Oct 13 18:23:25 crc kubenswrapper[4720]: I1013 18:23:25.364103 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bba1c75-d34a-4beb-8f09-db8bdb533d73-utilities\") pod \"certified-operators-mft5s\" (UID: \"1bba1c75-d34a-4beb-8f09-db8bdb533d73\") " pod="openshift-marketplace/certified-operators-mft5s"
Oct 13 18:23:25 crc kubenswrapper[4720]: I1013 18:23:25.364707 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bba1c75-d34a-4beb-8f09-db8bdb533d73-utilities\") pod \"certified-operators-mft5s\" (UID: \"1bba1c75-d34a-4beb-8f09-db8bdb533d73\") " pod="openshift-marketplace/certified-operators-mft5s"
Oct 13 18:23:25 crc kubenswrapper[4720]: I1013 18:23:25.364976 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bba1c75-d34a-4beb-8f09-db8bdb533d73-catalog-content\") pod \"certified-operators-mft5s\" (UID: \"1bba1c75-d34a-4beb-8f09-db8bdb533d73\") " pod="openshift-marketplace/certified-operators-mft5s"
Oct 13 18:23:25 crc kubenswrapper[4720]: I1013 18:23:25.383995 4720 operation_generator.go:637]
"MountVolume.SetUp succeeded for volume \"kube-api-access-rqkk4\" (UniqueName: \"kubernetes.io/projected/1bba1c75-d34a-4beb-8f09-db8bdb533d73-kube-api-access-rqkk4\") pod \"certified-operators-mft5s\" (UID: \"1bba1c75-d34a-4beb-8f09-db8bdb533d73\") " pod="openshift-marketplace/certified-operators-mft5s" Oct 13 18:23:26 crc kubenswrapper[4720]: I1013 18:23:26.429963 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mft5s" Oct 13 18:23:27 crc kubenswrapper[4720]: I1013 18:23:27.031094 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mft5s"] Oct 13 18:23:27 crc kubenswrapper[4720]: W1013 18:23:27.048061 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1bba1c75_d34a_4beb_8f09_db8bdb533d73.slice/crio-94d2bf980e4b8c3d5455ae45364c94d9fb1d70e85ccc2583ae5d32f70436d69f WatchSource:0}: Error finding container 94d2bf980e4b8c3d5455ae45364c94d9fb1d70e85ccc2583ae5d32f70436d69f: Status 404 returned error can't find the container with id 94d2bf980e4b8c3d5455ae45364c94d9fb1d70e85ccc2583ae5d32f70436d69f Oct 13 18:23:27 crc kubenswrapper[4720]: I1013 18:23:27.379871 4720 generic.go:334] "Generic (PLEG): container finished" podID="1bba1c75-d34a-4beb-8f09-db8bdb533d73" containerID="3deaa45539008f46864ec238f984f1f505f57fd847f84a88e470dbb4edc676c0" exitCode=0 Oct 13 18:23:27 crc kubenswrapper[4720]: I1013 18:23:27.379911 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mft5s" event={"ID":"1bba1c75-d34a-4beb-8f09-db8bdb533d73","Type":"ContainerDied","Data":"3deaa45539008f46864ec238f984f1f505f57fd847f84a88e470dbb4edc676c0"} Oct 13 18:23:27 crc kubenswrapper[4720]: I1013 18:23:27.379934 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mft5s" event={"ID":"1bba1c75-d34a-4beb-8f09-db8bdb533d73","Type":"ContainerStarted","Data":"94d2bf980e4b8c3d5455ae45364c94d9fb1d70e85ccc2583ae5d32f70436d69f"} Oct 13 18:23:28 crc kubenswrapper[4720]: I1013 18:23:28.168968 4720 scope.go:117] "RemoveContainer" containerID="79ca5c733b5eaab4a6d8e4dab434c0461ff00748dd7441ce45a3602ece00a48e" Oct 13 18:23:28 crc kubenswrapper[4720]: E1013 18:23:28.169612 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" Oct 13 18:23:28 crc kubenswrapper[4720]: I1013 18:23:28.390170 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mft5s" event={"ID":"1bba1c75-d34a-4beb-8f09-db8bdb533d73","Type":"ContainerStarted","Data":"72c39c2581a07877fdc5a4442e631317a1e4b32e22bc88f22b041645b5b56ca3"} Oct 13 18:23:30 crc kubenswrapper[4720]: I1013 18:23:30.407307 4720 generic.go:334] "Generic (PLEG): container finished" podID="1bba1c75-d34a-4beb-8f09-db8bdb533d73" containerID="72c39c2581a07877fdc5a4442e631317a1e4b32e22bc88f22b041645b5b56ca3" exitCode=0 Oct 13 18:23:30 crc kubenswrapper[4720]: I1013 18:23:30.407555 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mft5s" 
event={"ID":"1bba1c75-d34a-4beb-8f09-db8bdb533d73","Type":"ContainerDied","Data":"72c39c2581a07877fdc5a4442e631317a1e4b32e22bc88f22b041645b5b56ca3"} Oct 13 18:23:31 crc kubenswrapper[4720]: I1013 18:23:31.426922 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mft5s" event={"ID":"1bba1c75-d34a-4beb-8f09-db8bdb533d73","Type":"ContainerStarted","Data":"f36e020ff3806845f3b066b9c91a4cc8741c8fe5c9132229968e2e4799016264"} Oct 13 18:23:31 crc kubenswrapper[4720]: I1013 18:23:31.446276 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mft5s" podStartSLOduration=3.016246826 podStartE2EDuration="6.446258229s" podCreationTimestamp="2025-10-13 18:23:25 +0000 UTC" firstStartedPulling="2025-10-13 18:23:27.381998378 +0000 UTC m=+3552.839248510" lastFinishedPulling="2025-10-13 18:23:30.812009781 +0000 UTC m=+3556.269259913" observedRunningTime="2025-10-13 18:23:31.444159945 +0000 UTC m=+3556.901410087" watchObservedRunningTime="2025-10-13 18:23:31.446258229 +0000 UTC m=+3556.903508361" Oct 13 18:23:36 crc kubenswrapper[4720]: I1013 18:23:36.430784 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mft5s" Oct 13 18:23:36 crc kubenswrapper[4720]: I1013 18:23:36.431344 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mft5s" Oct 13 18:23:36 crc kubenswrapper[4720]: I1013 18:23:36.479920 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mft5s" Oct 13 18:23:36 crc kubenswrapper[4720]: I1013 18:23:36.529437 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mft5s" Oct 13 18:23:36 crc kubenswrapper[4720]: I1013 18:23:36.714337 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mft5s"] Oct 13 18:23:38 crc kubenswrapper[4720]: I1013 18:23:38.485995 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mft5s" podUID="1bba1c75-d34a-4beb-8f09-db8bdb533d73" containerName="registry-server" containerID="cri-o://f36e020ff3806845f3b066b9c91a4cc8741c8fe5c9132229968e2e4799016264" gracePeriod=2 Oct 13 18:23:38 crc kubenswrapper[4720]: I1013 18:23:38.992249 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mft5s" Oct 13 18:23:39 crc kubenswrapper[4720]: I1013 18:23:39.106623 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bba1c75-d34a-4beb-8f09-db8bdb533d73-utilities\") pod \"1bba1c75-d34a-4beb-8f09-db8bdb533d73\" (UID: \"1bba1c75-d34a-4beb-8f09-db8bdb533d73\") " Oct 13 18:23:39 crc kubenswrapper[4720]: I1013 18:23:39.106737 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqkk4\" (UniqueName: \"kubernetes.io/projected/1bba1c75-d34a-4beb-8f09-db8bdb533d73-kube-api-access-rqkk4\") pod \"1bba1c75-d34a-4beb-8f09-db8bdb533d73\" (UID: \"1bba1c75-d34a-4beb-8f09-db8bdb533d73\") " Oct 13 18:23:39 crc kubenswrapper[4720]: I1013 18:23:39.106793 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bba1c75-d34a-4beb-8f09-db8bdb533d73-catalog-content\") pod \"1bba1c75-d34a-4beb-8f09-db8bdb533d73\" (UID: \"1bba1c75-d34a-4beb-8f09-db8bdb533d73\") " Oct 13 18:23:39 crc kubenswrapper[4720]: I1013 18:23:39.107788 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1bba1c75-d34a-4beb-8f09-db8bdb533d73-utilities" (OuterVolumeSpecName: "utilities") pod "1bba1c75-d34a-4beb-8f09-db8bdb533d73" (UID: "1bba1c75-d34a-4beb-8f09-db8bdb533d73"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 18:23:39 crc kubenswrapper[4720]: I1013 18:23:39.119061 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bba1c75-d34a-4beb-8f09-db8bdb533d73-kube-api-access-rqkk4" (OuterVolumeSpecName: "kube-api-access-rqkk4") pod "1bba1c75-d34a-4beb-8f09-db8bdb533d73" (UID: "1bba1c75-d34a-4beb-8f09-db8bdb533d73"). InnerVolumeSpecName "kube-api-access-rqkk4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 18:23:39 crc kubenswrapper[4720]: I1013 18:23:39.164445 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1bba1c75-d34a-4beb-8f09-db8bdb533d73-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1bba1c75-d34a-4beb-8f09-db8bdb533d73" (UID: "1bba1c75-d34a-4beb-8f09-db8bdb533d73"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 18:23:39 crc kubenswrapper[4720]: I1013 18:23:39.209206 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rqkk4\" (UniqueName: \"kubernetes.io/projected/1bba1c75-d34a-4beb-8f09-db8bdb533d73-kube-api-access-rqkk4\") on node \"crc\" DevicePath \"\"" Oct 13 18:23:39 crc kubenswrapper[4720]: I1013 18:23:39.209241 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bba1c75-d34a-4beb-8f09-db8bdb533d73-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 13 18:23:39 crc kubenswrapper[4720]: I1013 18:23:39.209251 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bba1c75-d34a-4beb-8f09-db8bdb533d73-utilities\") on node \"crc\" DevicePath \"\"" Oct 13 18:23:39 crc kubenswrapper[4720]: I1013 18:23:39.496869 4720 generic.go:334] "Generic (PLEG): container finished" podID="1bba1c75-d34a-4beb-8f09-db8bdb533d73" containerID="f36e020ff3806845f3b066b9c91a4cc8741c8fe5c9132229968e2e4799016264" exitCode=0 Oct 13 18:23:39 crc kubenswrapper[4720]: I1013 18:23:39.496942 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mft5s" event={"ID":"1bba1c75-d34a-4beb-8f09-db8bdb533d73","Type":"ContainerDied","Data":"f36e020ff3806845f3b066b9c91a4cc8741c8fe5c9132229968e2e4799016264"} Oct 13 18:23:39 crc kubenswrapper[4720]: I1013 18:23:39.496949 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mft5s" Oct 13 18:23:39 crc kubenswrapper[4720]: I1013 18:23:39.497002 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mft5s" event={"ID":"1bba1c75-d34a-4beb-8f09-db8bdb533d73","Type":"ContainerDied","Data":"94d2bf980e4b8c3d5455ae45364c94d9fb1d70e85ccc2583ae5d32f70436d69f"} Oct 13 18:23:39 crc kubenswrapper[4720]: I1013 18:23:39.497024 4720 scope.go:117] "RemoveContainer" containerID="f36e020ff3806845f3b066b9c91a4cc8741c8fe5c9132229968e2e4799016264" Oct 13 18:23:39 crc kubenswrapper[4720]: I1013 18:23:39.518047 4720 scope.go:117] "RemoveContainer" containerID="72c39c2581a07877fdc5a4442e631317a1e4b32e22bc88f22b041645b5b56ca3" Oct 13 18:23:39 crc kubenswrapper[4720]: I1013 18:23:39.522478 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mft5s"] Oct 13 18:23:39 crc kubenswrapper[4720]: I1013 18:23:39.530404 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mft5s"] Oct 13 18:23:39 crc kubenswrapper[4720]: I1013 18:23:39.551694 4720 scope.go:117] "RemoveContainer" containerID="3deaa45539008f46864ec238f984f1f505f57fd847f84a88e470dbb4edc676c0" Oct 13 18:23:39 crc kubenswrapper[4720]: I1013 18:23:39.583847 4720 scope.go:117] "RemoveContainer" containerID="f36e020ff3806845f3b066b9c91a4cc8741c8fe5c9132229968e2e4799016264" Oct 13 18:23:39 crc kubenswrapper[4720]: E1013 18:23:39.584379 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f36e020ff3806845f3b066b9c91a4cc8741c8fe5c9132229968e2e4799016264\": container with ID starting with f36e020ff3806845f3b066b9c91a4cc8741c8fe5c9132229968e2e4799016264 not found: ID does not exist" containerID="f36e020ff3806845f3b066b9c91a4cc8741c8fe5c9132229968e2e4799016264" Oct 13 18:23:39 crc kubenswrapper[4720]: I1013 18:23:39.584428 
4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f36e020ff3806845f3b066b9c91a4cc8741c8fe5c9132229968e2e4799016264"} err="failed to get container status \"f36e020ff3806845f3b066b9c91a4cc8741c8fe5c9132229968e2e4799016264\": rpc error: code = NotFound desc = could not find container \"f36e020ff3806845f3b066b9c91a4cc8741c8fe5c9132229968e2e4799016264\": container with ID starting with f36e020ff3806845f3b066b9c91a4cc8741c8fe5c9132229968e2e4799016264 not found: ID does not exist" Oct 13 18:23:39 crc kubenswrapper[4720]: I1013 18:23:39.584459 4720 scope.go:117] "RemoveContainer" containerID="72c39c2581a07877fdc5a4442e631317a1e4b32e22bc88f22b041645b5b56ca3" Oct 13 18:23:39 crc kubenswrapper[4720]: E1013 18:23:39.584982 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72c39c2581a07877fdc5a4442e631317a1e4b32e22bc88f22b041645b5b56ca3\": container with ID starting with 72c39c2581a07877fdc5a4442e631317a1e4b32e22bc88f22b041645b5b56ca3 not found: ID does not exist" containerID="72c39c2581a07877fdc5a4442e631317a1e4b32e22bc88f22b041645b5b56ca3" Oct 13 18:23:39 crc kubenswrapper[4720]: I1013 18:23:39.585010 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72c39c2581a07877fdc5a4442e631317a1e4b32e22bc88f22b041645b5b56ca3"} err="failed to get container status \"72c39c2581a07877fdc5a4442e631317a1e4b32e22bc88f22b041645b5b56ca3\": rpc error: code = NotFound desc = could not find container \"72c39c2581a07877fdc5a4442e631317a1e4b32e22bc88f22b041645b5b56ca3\": container with ID starting with 72c39c2581a07877fdc5a4442e631317a1e4b32e22bc88f22b041645b5b56ca3 not found: ID does not exist" Oct 13 18:23:39 crc kubenswrapper[4720]: I1013 18:23:39.585029 4720 scope.go:117] "RemoveContainer" containerID="3deaa45539008f46864ec238f984f1f505f57fd847f84a88e470dbb4edc676c0" Oct 13 18:23:39 crc kubenswrapper[4720]: E1013 18:23:39.585447 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3deaa45539008f46864ec238f984f1f505f57fd847f84a88e470dbb4edc676c0\": container with ID starting with 3deaa45539008f46864ec238f984f1f505f57fd847f84a88e470dbb4edc676c0 not found: ID does not exist" containerID="3deaa45539008f46864ec238f984f1f505f57fd847f84a88e470dbb4edc676c0" Oct 13 18:23:39 crc kubenswrapper[4720]: I1013 18:23:39.585476 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3deaa45539008f46864ec238f984f1f505f57fd847f84a88e470dbb4edc676c0"} err="failed to get container status \"3deaa45539008f46864ec238f984f1f505f57fd847f84a88e470dbb4edc676c0\": rpc error: code = NotFound desc = could not find container \"3deaa45539008f46864ec238f984f1f505f57fd847f84a88e470dbb4edc676c0\": container with ID starting with 3deaa45539008f46864ec238f984f1f505f57fd847f84a88e470dbb4edc676c0 not found: ID does not exist" Oct 13 18:23:40 crc kubenswrapper[4720]: I1013 18:23:40.169400 4720 scope.go:117] "RemoveContainer" containerID="79ca5c733b5eaab4a6d8e4dab434c0461ff00748dd7441ce45a3602ece00a48e" Oct 13 18:23:40 crc kubenswrapper[4720]: E1013 18:23:40.169888 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" Oct 13 18:23:41 crc kubenswrapper[4720]: I1013 18:23:41.182002 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bba1c75-d34a-4beb-8f09-db8bdb533d73" path="/var/lib/kubelet/pods/1bba1c75-d34a-4beb-8f09-db8bdb533d73/volumes" Oct 13 18:23:51 crc kubenswrapper[4720]: I1013 18:23:51.168413 4720 scope.go:117] "RemoveContainer" containerID="79ca5c733b5eaab4a6d8e4dab434c0461ff00748dd7441ce45a3602ece00a48e" Oct 13 18:23:51 crc kubenswrapper[4720]: E1013 18:23:51.169196 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" Oct 13 18:23:51 crc kubenswrapper[4720]: I1013 18:23:51.613250 4720 generic.go:334] "Generic (PLEG): container finished" podID="77788e20-57ef-4177-88d0-ba73282fb72c" containerID="527cfb4423ed1554b6b0b6829d36b20153feb18c653c9569be9626a1edb24ca8" exitCode=0 Oct 13 18:23:51 crc kubenswrapper[4720]: I1013 18:23:51.613296 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6mndj/crc-debug-cjwdk" event={"ID":"77788e20-57ef-4177-88d0-ba73282fb72c","Type":"ContainerDied","Data":"527cfb4423ed1554b6b0b6829d36b20153feb18c653c9569be9626a1edb24ca8"} Oct 13 18:23:52 crc kubenswrapper[4720]: I1013 18:23:52.708000 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6mndj/crc-debug-cjwdk" Oct 13 18:23:52 crc kubenswrapper[4720]: I1013 18:23:52.736771 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-6mndj/crc-debug-cjwdk"] Oct 13 18:23:52 crc kubenswrapper[4720]: I1013 18:23:52.744347 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-6mndj/crc-debug-cjwdk"] Oct 13 18:23:52 crc kubenswrapper[4720]: I1013 18:23:52.880821 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ln84c\" (UniqueName: \"kubernetes.io/projected/77788e20-57ef-4177-88d0-ba73282fb72c-kube-api-access-ln84c\") pod \"77788e20-57ef-4177-88d0-ba73282fb72c\" (UID: \"77788e20-57ef-4177-88d0-ba73282fb72c\") " Oct 13 18:23:52 crc kubenswrapper[4720]: I1013 18:23:52.881075 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/77788e20-57ef-4177-88d0-ba73282fb72c-host\") pod \"77788e20-57ef-4177-88d0-ba73282fb72c\" (UID: \"77788e20-57ef-4177-88d0-ba73282fb72c\") " Oct 13 18:23:52 crc kubenswrapper[4720]: I1013 18:23:52.881133 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/77788e20-57ef-4177-88d0-ba73282fb72c-host" (OuterVolumeSpecName: "host") pod "77788e20-57ef-4177-88d0-ba73282fb72c" (UID: "77788e20-57ef-4177-88d0-ba73282fb72c"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 13 18:23:52 crc kubenswrapper[4720]: I1013 18:23:52.881544 4720 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/77788e20-57ef-4177-88d0-ba73282fb72c-host\") on node \"crc\" DevicePath \"\"" Oct 13 18:23:52 crc kubenswrapper[4720]: I1013 18:23:52.891383 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77788e20-57ef-4177-88d0-ba73282fb72c-kube-api-access-ln84c" (OuterVolumeSpecName: "kube-api-access-ln84c") pod "77788e20-57ef-4177-88d0-ba73282fb72c" (UID: "77788e20-57ef-4177-88d0-ba73282fb72c"). InnerVolumeSpecName "kube-api-access-ln84c". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 18:23:52 crc kubenswrapper[4720]: I1013 18:23:52.983587 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ln84c\" (UniqueName: \"kubernetes.io/projected/77788e20-57ef-4177-88d0-ba73282fb72c-kube-api-access-ln84c\") on node \"crc\" DevicePath \"\"" Oct 13 18:23:53 crc kubenswrapper[4720]: I1013 18:23:53.179615 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77788e20-57ef-4177-88d0-ba73282fb72c" path="/var/lib/kubelet/pods/77788e20-57ef-4177-88d0-ba73282fb72c/volumes" Oct 13 18:23:53 crc kubenswrapper[4720]: I1013 18:23:53.629624 4720 scope.go:117] "RemoveContainer" containerID="527cfb4423ed1554b6b0b6829d36b20153feb18c653c9569be9626a1edb24ca8" Oct 13 18:23:53 crc kubenswrapper[4720]: I1013 18:23:53.629644 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6mndj/crc-debug-cjwdk" Oct 13 18:23:53 crc kubenswrapper[4720]: I1013 18:23:53.962378 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-6mndj/crc-debug-ftkft"] Oct 13 18:23:53 crc kubenswrapper[4720]: E1013 18:23:53.963008 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bba1c75-d34a-4beb-8f09-db8bdb533d73" containerName="extract-utilities" Oct 13 18:23:53 crc kubenswrapper[4720]: I1013 18:23:53.963024 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bba1c75-d34a-4beb-8f09-db8bdb533d73" containerName="extract-utilities" Oct 13 18:23:53 crc kubenswrapper[4720]: E1013 18:23:53.963046 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77788e20-57ef-4177-88d0-ba73282fb72c" containerName="container-00" Oct 13 18:23:53 crc kubenswrapper[4720]: I1013 18:23:53.963054 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="77788e20-57ef-4177-88d0-ba73282fb72c" containerName="container-00" Oct 13 18:23:53 crc kubenswrapper[4720]: E1013 18:23:53.963074 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bba1c75-d34a-4beb-8f09-db8bdb533d73" containerName="extract-content" Oct 13 18:23:53 crc kubenswrapper[4720]: I1013 18:23:53.963083 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bba1c75-d34a-4beb-8f09-db8bdb533d73" containerName="extract-content" Oct 13 18:23:53 crc kubenswrapper[4720]: E1013 18:23:53.963102 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bba1c75-d34a-4beb-8f09-db8bdb533d73" containerName="registry-server" Oct 13 18:23:53 crc kubenswrapper[4720]: I1013 18:23:53.963108 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bba1c75-d34a-4beb-8f09-db8bdb533d73" containerName="registry-server" Oct 13 18:23:53 crc kubenswrapper[4720]: I1013 18:23:53.963317 4720 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="77788e20-57ef-4177-88d0-ba73282fb72c" containerName="container-00" Oct 13 18:23:53 crc kubenswrapper[4720]: I1013 18:23:53.963340 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bba1c75-d34a-4beb-8f09-db8bdb533d73" containerName="registry-server" Oct 13 18:23:53 crc kubenswrapper[4720]: I1013 18:23:53.964152 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6mndj/crc-debug-ftkft" Oct 13 18:23:54 crc kubenswrapper[4720]: I1013 18:23:54.103084 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dz8vm\" (UniqueName: \"kubernetes.io/projected/c0cfb9e6-3b10-4944-8106-7e1db3b816cd-kube-api-access-dz8vm\") pod \"crc-debug-ftkft\" (UID: \"c0cfb9e6-3b10-4944-8106-7e1db3b816cd\") " pod="openshift-must-gather-6mndj/crc-debug-ftkft" Oct 13 18:23:54 crc kubenswrapper[4720]: I1013 18:23:54.103312 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c0cfb9e6-3b10-4944-8106-7e1db3b816cd-host\") pod \"crc-debug-ftkft\" (UID: \"c0cfb9e6-3b10-4944-8106-7e1db3b816cd\") " pod="openshift-must-gather-6mndj/crc-debug-ftkft" Oct 13 18:23:54 crc kubenswrapper[4720]: I1013 18:23:54.205408 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c0cfb9e6-3b10-4944-8106-7e1db3b816cd-host\") pod \"crc-debug-ftkft\" (UID: \"c0cfb9e6-3b10-4944-8106-7e1db3b816cd\") " pod="openshift-must-gather-6mndj/crc-debug-ftkft" Oct 13 18:23:54 crc kubenswrapper[4720]: I1013 18:23:54.205558 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dz8vm\" (UniqueName: \"kubernetes.io/projected/c0cfb9e6-3b10-4944-8106-7e1db3b816cd-kube-api-access-dz8vm\") pod \"crc-debug-ftkft\" (UID: \"c0cfb9e6-3b10-4944-8106-7e1db3b816cd\") " pod="openshift-must-gather-6mndj/crc-debug-ftkft" Oct 13 18:23:54 crc kubenswrapper[4720]: I1013 18:23:54.205594 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c0cfb9e6-3b10-4944-8106-7e1db3b816cd-host\") pod \"crc-debug-ftkft\" (UID: \"c0cfb9e6-3b10-4944-8106-7e1db3b816cd\") " pod="openshift-must-gather-6mndj/crc-debug-ftkft" Oct 13 18:23:54 crc kubenswrapper[4720]: I1013 18:23:54.230122 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dz8vm\" (UniqueName: \"kubernetes.io/projected/c0cfb9e6-3b10-4944-8106-7e1db3b816cd-kube-api-access-dz8vm\") pod \"crc-debug-ftkft\" (UID: \"c0cfb9e6-3b10-4944-8106-7e1db3b816cd\") " pod="openshift-must-gather-6mndj/crc-debug-ftkft" Oct 13 18:23:54 crc kubenswrapper[4720]: I1013 18:23:54.281011 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6mndj/crc-debug-ftkft" Oct 13 18:23:54 crc kubenswrapper[4720]: W1013 18:23:54.325712 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc0cfb9e6_3b10_4944_8106_7e1db3b816cd.slice/crio-efd6141367375e8266b842d06e8ceac95faff0ae5e6c7b3cc688e9b6f61bef56 WatchSource:0}: Error finding container efd6141367375e8266b842d06e8ceac95faff0ae5e6c7b3cc688e9b6f61bef56: Status 404 returned error can't find the container with id efd6141367375e8266b842d06e8ceac95faff0ae5e6c7b3cc688e9b6f61bef56 Oct 13 18:23:54 crc kubenswrapper[4720]: I1013 18:23:54.640855 4720 generic.go:334] "Generic (PLEG): container finished" podID="c0cfb9e6-3b10-4944-8106-7e1db3b816cd" containerID="43ebc6548a262c11d6d12681b0c45f15509c5d7ae0dddba2e15e95a5f583835c" exitCode=0 Oct 13 18:23:54 crc kubenswrapper[4720]: I1013 18:23:54.640957 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6mndj/crc-debug-ftkft" event={"ID":"c0cfb9e6-3b10-4944-8106-7e1db3b816cd","Type":"ContainerDied","Data":"43ebc6548a262c11d6d12681b0c45f15509c5d7ae0dddba2e15e95a5f583835c"} Oct 13 18:23:54 crc kubenswrapper[4720]: I1013 18:23:54.641306 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6mndj/crc-debug-ftkft" event={"ID":"c0cfb9e6-3b10-4944-8106-7e1db3b816cd","Type":"ContainerStarted","Data":"efd6141367375e8266b842d06e8ceac95faff0ae5e6c7b3cc688e9b6f61bef56"} Oct 13 18:23:55 crc kubenswrapper[4720]: I1013 18:23:55.079421 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-6mndj/crc-debug-ftkft"] Oct 13 18:23:55 crc kubenswrapper[4720]: I1013 18:23:55.086085 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-6mndj/crc-debug-ftkft"] Oct 13 18:23:55 crc kubenswrapper[4720]: I1013 18:23:55.747108 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6mndj/crc-debug-ftkft" Oct 13 18:23:55 crc kubenswrapper[4720]: I1013 18:23:55.832945 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dz8vm\" (UniqueName: \"kubernetes.io/projected/c0cfb9e6-3b10-4944-8106-7e1db3b816cd-kube-api-access-dz8vm\") pod \"c0cfb9e6-3b10-4944-8106-7e1db3b816cd\" (UID: \"c0cfb9e6-3b10-4944-8106-7e1db3b816cd\") " Oct 13 18:23:55 crc kubenswrapper[4720]: I1013 18:23:55.833248 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c0cfb9e6-3b10-4944-8106-7e1db3b816cd-host\") pod \"c0cfb9e6-3b10-4944-8106-7e1db3b816cd\" (UID: \"c0cfb9e6-3b10-4944-8106-7e1db3b816cd\") " Oct 13 18:23:55 crc kubenswrapper[4720]: I1013 18:23:55.833346 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0cfb9e6-3b10-4944-8106-7e1db3b816cd-host" (OuterVolumeSpecName: "host") pod "c0cfb9e6-3b10-4944-8106-7e1db3b816cd" (UID: "c0cfb9e6-3b10-4944-8106-7e1db3b816cd"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 13 18:23:55 crc kubenswrapper[4720]: I1013 18:23:55.833685 4720 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c0cfb9e6-3b10-4944-8106-7e1db3b816cd-host\") on node \"crc\" DevicePath \"\"" Oct 13 18:23:55 crc kubenswrapper[4720]: I1013 18:23:55.839465 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0cfb9e6-3b10-4944-8106-7e1db3b816cd-kube-api-access-dz8vm" (OuterVolumeSpecName: "kube-api-access-dz8vm") pod "c0cfb9e6-3b10-4944-8106-7e1db3b816cd" (UID: "c0cfb9e6-3b10-4944-8106-7e1db3b816cd"). InnerVolumeSpecName "kube-api-access-dz8vm". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 18:23:55 crc kubenswrapper[4720]: I1013 18:23:55.935258 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dz8vm\" (UniqueName: \"kubernetes.io/projected/c0cfb9e6-3b10-4944-8106-7e1db3b816cd-kube-api-access-dz8vm\") on node \"crc\" DevicePath \"\"" Oct 13 18:23:56 crc kubenswrapper[4720]: I1013 18:23:56.339256 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-6mndj/crc-debug-zfwfx"] Oct 13 18:23:56 crc kubenswrapper[4720]: E1013 18:23:56.339958 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0cfb9e6-3b10-4944-8106-7e1db3b816cd" containerName="container-00" Oct 13 18:23:56 crc kubenswrapper[4720]: I1013 18:23:56.339976 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0cfb9e6-3b10-4944-8106-7e1db3b816cd" containerName="container-00" Oct 13 18:23:56 crc kubenswrapper[4720]: I1013 18:23:56.340385 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0cfb9e6-3b10-4944-8106-7e1db3b816cd" containerName="container-00" Oct 13 18:23:56 crc kubenswrapper[4720]: I1013 18:23:56.342650 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6mndj/crc-debug-zfwfx" Oct 13 18:23:56 crc kubenswrapper[4720]: I1013 18:23:56.443560 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fmjx\" (UniqueName: \"kubernetes.io/projected/f17f6046-4914-4528-951a-717dc3b64bc9-kube-api-access-7fmjx\") pod \"crc-debug-zfwfx\" (UID: \"f17f6046-4914-4528-951a-717dc3b64bc9\") " pod="openshift-must-gather-6mndj/crc-debug-zfwfx" Oct 13 18:23:56 crc kubenswrapper[4720]: I1013 18:23:56.443750 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f17f6046-4914-4528-951a-717dc3b64bc9-host\") pod \"crc-debug-zfwfx\" (UID: \"f17f6046-4914-4528-951a-717dc3b64bc9\") " pod="openshift-must-gather-6mndj/crc-debug-zfwfx" Oct 13 18:23:56 crc kubenswrapper[4720]: I1013 18:23:56.545892 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fmjx\" (UniqueName: \"kubernetes.io/projected/f17f6046-4914-4528-951a-717dc3b64bc9-kube-api-access-7fmjx\") pod \"crc-debug-zfwfx\" (UID: \"f17f6046-4914-4528-951a-717dc3b64bc9\") " pod="openshift-must-gather-6mndj/crc-debug-zfwfx" Oct 13 18:23:56 crc kubenswrapper[4720]: I1013 18:23:56.546060 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f17f6046-4914-4528-951a-717dc3b64bc9-host\") pod \"crc-debug-zfwfx\" (UID: \"f17f6046-4914-4528-951a-717dc3b64bc9\") " pod="openshift-must-gather-6mndj/crc-debug-zfwfx" Oct 13 18:23:56 crc kubenswrapper[4720]: I1013 18:23:56.546207 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f17f6046-4914-4528-951a-717dc3b64bc9-host\") pod \"crc-debug-zfwfx\" (UID: \"f17f6046-4914-4528-951a-717dc3b64bc9\") " pod="openshift-must-gather-6mndj/crc-debug-zfwfx" Oct 13 18:23:56 crc kubenswrapper[4720]: I1013 18:23:56.563211 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fmjx\" (UniqueName: \"kubernetes.io/projected/f17f6046-4914-4528-951a-717dc3b64bc9-kube-api-access-7fmjx\") pod \"crc-debug-zfwfx\" (UID: \"f17f6046-4914-4528-951a-717dc3b64bc9\") " pod="openshift-must-gather-6mndj/crc-debug-zfwfx" Oct 13 18:23:56 crc kubenswrapper[4720]: I1013 18:23:56.660790 4720 scope.go:117] "RemoveContainer" containerID="43ebc6548a262c11d6d12681b0c45f15509c5d7ae0dddba2e15e95a5f583835c" Oct 13 18:23:56 crc kubenswrapper[4720]: I1013 18:23:56.660918 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6mndj/crc-debug-ftkft" Oct 13 18:23:56 crc kubenswrapper[4720]: I1013 18:23:56.668420 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6mndj/crc-debug-zfwfx" Oct 13 18:23:57 crc kubenswrapper[4720]: I1013 18:23:57.178881 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0cfb9e6-3b10-4944-8106-7e1db3b816cd" path="/var/lib/kubelet/pods/c0cfb9e6-3b10-4944-8106-7e1db3b816cd/volumes" Oct 13 18:23:57 crc kubenswrapper[4720]: I1013 18:23:57.677485 4720 generic.go:334] "Generic (PLEG): container finished" podID="f17f6046-4914-4528-951a-717dc3b64bc9" containerID="9489f827960d9986eeae94db7420a69e0fc240a18505e1a17db4ec712022b1b7" exitCode=0 Oct 13 18:23:57 crc kubenswrapper[4720]: I1013 18:23:57.677537 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6mndj/crc-debug-zfwfx" event={"ID":"f17f6046-4914-4528-951a-717dc3b64bc9","Type":"ContainerDied","Data":"9489f827960d9986eeae94db7420a69e0fc240a18505e1a17db4ec712022b1b7"} Oct 13 18:23:57 crc kubenswrapper[4720]: I1013 18:23:57.677607 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6mndj/crc-debug-zfwfx" event={"ID":"f17f6046-4914-4528-951a-717dc3b64bc9","Type":"ContainerStarted","Data":"1f3ee905d17f4a2ae34b6770f4efa4da8b370c62fad2e32587daadad2ee9cdc3"} Oct 13 18:23:57 crc kubenswrapper[4720]: I1013 18:23:57.720521 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-6mndj/crc-debug-zfwfx"] Oct 13 18:23:57 crc kubenswrapper[4720]: I1013 18:23:57.728594 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-6mndj/crc-debug-zfwfx"] Oct 13 18:23:58 crc kubenswrapper[4720]: I1013 18:23:58.807300 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6mndj/crc-debug-zfwfx" Oct 13 18:23:58 crc kubenswrapper[4720]: I1013 18:23:58.887172 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7fmjx\" (UniqueName: \"kubernetes.io/projected/f17f6046-4914-4528-951a-717dc3b64bc9-kube-api-access-7fmjx\") pod \"f17f6046-4914-4528-951a-717dc3b64bc9\" (UID: \"f17f6046-4914-4528-951a-717dc3b64bc9\") " Oct 13 18:23:58 crc kubenswrapper[4720]: I1013 18:23:58.887347 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f17f6046-4914-4528-951a-717dc3b64bc9-host\") pod \"f17f6046-4914-4528-951a-717dc3b64bc9\" (UID: \"f17f6046-4914-4528-951a-717dc3b64bc9\") " Oct 13 18:23:58 crc kubenswrapper[4720]: I1013 18:23:58.887680 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f17f6046-4914-4528-951a-717dc3b64bc9-host" (OuterVolumeSpecName: "host") pod "f17f6046-4914-4528-951a-717dc3b64bc9" (UID: "f17f6046-4914-4528-951a-717dc3b64bc9"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 13 18:23:58 crc kubenswrapper[4720]: I1013 18:23:58.887852 4720 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f17f6046-4914-4528-951a-717dc3b64bc9-host\") on node \"crc\" DevicePath \"\"" Oct 13 18:23:58 crc kubenswrapper[4720]: I1013 18:23:58.893657 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f17f6046-4914-4528-951a-717dc3b64bc9-kube-api-access-7fmjx" (OuterVolumeSpecName: "kube-api-access-7fmjx") pod "f17f6046-4914-4528-951a-717dc3b64bc9" (UID: "f17f6046-4914-4528-951a-717dc3b64bc9"). InnerVolumeSpecName "kube-api-access-7fmjx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 18:23:58 crc kubenswrapper[4720]: I1013 18:23:58.996762 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7fmjx\" (UniqueName: \"kubernetes.io/projected/f17f6046-4914-4528-951a-717dc3b64bc9-kube-api-access-7fmjx\") on node \"crc\" DevicePath \"\"" Oct 13 18:23:59 crc kubenswrapper[4720]: I1013 18:23:59.187831 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f17f6046-4914-4528-951a-717dc3b64bc9" path="/var/lib/kubelet/pods/f17f6046-4914-4528-951a-717dc3b64bc9/volumes" Oct 13 18:23:59 crc kubenswrapper[4720]: I1013 18:23:59.706379 4720 scope.go:117] "RemoveContainer" containerID="9489f827960d9986eeae94db7420a69e0fc240a18505e1a17db4ec712022b1b7" Oct 13 18:23:59 crc kubenswrapper[4720]: I1013 18:23:59.706460 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6mndj/crc-debug-zfwfx" Oct 13 18:24:02 crc kubenswrapper[4720]: I1013 18:24:02.260550 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-77544fcf9d-jwg9p_84cf65bc-1603-4782-9b88-15937c9c7c6f/barbican-api/0.log" Oct 13 18:24:02 crc kubenswrapper[4720]: I1013 18:24:02.409389 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-77544fcf9d-jwg9p_84cf65bc-1603-4782-9b88-15937c9c7c6f/barbican-api-log/0.log" Oct 13 18:24:02 crc kubenswrapper[4720]: I1013 18:24:02.466359 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-846978fd94-gg45m_73bee848-defc-4a29-b1a2-a359359e3c67/barbican-keystone-listener/0.log" Oct 13 18:24:02 crc kubenswrapper[4720]: I1013 18:24:02.511591 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-846978fd94-gg45m_73bee848-defc-4a29-b1a2-a359359e3c67/barbican-keystone-listener-log/0.log" Oct 13 18:24:02 crc kubenswrapper[4720]: I1013 18:24:02.608177 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-77879c6d6f-fqm88_f39030ec-2975-459a-972e-4928cb31e15a/barbican-worker/0.log" Oct 13 18:24:02 crc kubenswrapper[4720]: I1013 18:24:02.643477 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-77879c6d6f-fqm88_f39030ec-2975-459a-972e-4928cb31e15a/barbican-worker-log/0.log" Oct 13 18:24:02 crc kubenswrapper[4720]: I1013 18:24:02.794846 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-gqz6r_d7953eee-f335-4fdc-9834-caa5a4695476/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Oct 13 18:24:02 crc kubenswrapper[4720]: I1013 18:24:02.836360 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_18a26891-fa3b-4433-a74a-592bef9b8241/ceilometer-central-agent/0.log" Oct 13 18:24:02 crc kubenswrapper[4720]: I1013 18:24:02.950353 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_18a26891-fa3b-4433-a74a-592bef9b8241/ceilometer-notification-agent/0.log" Oct 13 18:24:02 crc kubenswrapper[4720]: I1013 18:24:02.990746 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_18a26891-fa3b-4433-a74a-592bef9b8241/proxy-httpd/0.log" Oct 13 18:24:03 crc kubenswrapper[4720]: I1013 18:24:03.027634 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_18a26891-fa3b-4433-a74a-592bef9b8241/sg-core/0.log" Oct 13 18:24:03 crc 
kubenswrapper[4720]: I1013 18:24:03.177713 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_9ed0bf93-abdc-4a94-bfcb-6293c9e01853/cinder-api/0.log" Oct 13 18:24:03 crc kubenswrapper[4720]: I1013 18:24:03.197124 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_9ed0bf93-abdc-4a94-bfcb-6293c9e01853/cinder-api-log/0.log" Oct 13 18:24:03 crc kubenswrapper[4720]: I1013 18:24:03.368796 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_a68622d1-743e-45e7-a021-6d766840711a/cinder-scheduler/0.log" Oct 13 18:24:03 crc kubenswrapper[4720]: I1013 18:24:03.445281 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_a68622d1-743e-45e7-a021-6d766840711a/probe/0.log" Oct 13 18:24:03 crc kubenswrapper[4720]: I1013 18:24:03.516840 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-vf6s4_2dedc602-7303-4ca5-8d61-143a7975c01c/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Oct 13 18:24:03 crc kubenswrapper[4720]: I1013 18:24:03.658155 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-dtplp_41f1773b-8761-4e7e-bcfe-853ca5977b3b/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Oct 13 18:24:03 crc kubenswrapper[4720]: I1013 18:24:03.698642 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-w24k6_9e9b7c8c-f6f4-448d-af87-164d1f0d008f/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Oct 13 18:24:03 crc kubenswrapper[4720]: I1013 18:24:03.811901 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cb6ffcf87-wvm8b_323cdd25-bf01-4cf0-8ccc-7dbc90581afd/init/0.log" Oct 13 18:24:04 crc kubenswrapper[4720]: I1013 18:24:04.029750 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cb6ffcf87-wvm8b_323cdd25-bf01-4cf0-8ccc-7dbc90581afd/dnsmasq-dns/0.log" Oct 13 18:24:04 crc kubenswrapper[4720]: I1013 18:24:04.030592 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-xqf7d_2b7d23a3-f722-47e4-85af-fe733bfc5fdc/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Oct 13 18:24:04 crc kubenswrapper[4720]: I1013 18:24:04.065884 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cb6ffcf87-wvm8b_323cdd25-bf01-4cf0-8ccc-7dbc90581afd/init/0.log" Oct 13 18:24:04 crc kubenswrapper[4720]: I1013 18:24:04.317542 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_40ee2ba6-d58a-4ee8-b28e-6aeea90de7f6/glance-httpd/0.log" Oct 13 18:24:04 crc kubenswrapper[4720]: I1013 18:24:04.340238 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_40ee2ba6-d58a-4ee8-b28e-6aeea90de7f6/glance-log/0.log" Oct 13 18:24:04 crc kubenswrapper[4720]: I1013 18:24:04.493248 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_65eecc5b-dc6b-482e-bcfe-93915016a1f5/glance-log/0.log" Oct 13 18:24:04 crc kubenswrapper[4720]: I1013 18:24:04.503974 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_65eecc5b-dc6b-482e-bcfe-93915016a1f5/glance-httpd/0.log" Oct 13 18:24:04 crc kubenswrapper[4720]: 
I1013 18:24:04.641941 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-7984dcc5d8-8c2ss_27768d75-429c-45c3-bf03-98527e94fe63/horizon/0.log" Oct 13 18:24:04 crc kubenswrapper[4720]: I1013 18:24:04.761920 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-rcnp2_b6dc194d-cfc2-4303-ad72-ead87650ea96/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Oct 13 18:24:04 crc kubenswrapper[4720]: I1013 18:24:04.991769 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-7984dcc5d8-8c2ss_27768d75-429c-45c3-bf03-98527e94fe63/horizon-log/0.log" Oct 13 18:24:05 crc kubenswrapper[4720]: I1013 18:24:05.055259 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-7nx42_8294bbb9-8b70-411f-af1f-cca84d7c5dbb/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Oct 13 18:24:05 crc kubenswrapper[4720]: I1013 18:24:05.245437 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29339641-fqkjh_f21ba5f0-a0a5-4a29-9025-614d7f33c643/keystone-cron/0.log" Oct 13 18:24:05 crc kubenswrapper[4720]: I1013 18:24:05.290868 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-5744dfc665-n6ts6_5de24778-de7e-4b2b-bf60-24ae857c2ed9/keystone-api/0.log" Oct 13 18:24:05 crc kubenswrapper[4720]: I1013 18:24:05.328232 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_3ce4e7b1-7ffc-4444-8e0c-2bc9779d4ef1/kube-state-metrics/0.log" Oct 13 18:24:05 crc kubenswrapper[4720]: I1013 18:24:05.468479 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-ht9ww_0b253b74-8253-44c4-962c-b01331772a19/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Oct 13 18:24:05 crc kubenswrapper[4720]: I1013 18:24:05.793963 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7c7ddff655-r8ln9_782a78a0-312e-4337-9397-9b476f51f7a8/neutron-httpd/0.log" Oct 13 18:24:05 crc kubenswrapper[4720]: I1013 18:24:05.815030 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rcnh_317f512e-221d-4587-9817-526adffbe348/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Oct 13 18:24:05 crc kubenswrapper[4720]: I1013 18:24:05.892564 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7c7ddff655-r8ln9_782a78a0-312e-4337-9397-9b476f51f7a8/neutron-api/0.log" Oct 13 18:24:06 crc kubenswrapper[4720]: I1013 18:24:06.167889 4720 scope.go:117] "RemoveContainer" containerID="79ca5c733b5eaab4a6d8e4dab434c0461ff00748dd7441ce45a3602ece00a48e" Oct 13 18:24:06 crc kubenswrapper[4720]: E1013 18:24:06.168145 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" Oct 13 18:24:06 crc kubenswrapper[4720]: I1013 18:24:06.311908 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_d013b32a-b904-46e1-85be-0691c6d981da/nova-api-log/0.log" Oct 13 18:24:06 crc kubenswrapper[4720]: I1013 
18:24:06.362906 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_b869ee69-b8f2-4318-a977-da27405dd698/nova-cell0-conductor-conductor/0.log" Oct 13 18:24:06 crc kubenswrapper[4720]: I1013 18:24:06.554923 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_d013b32a-b904-46e1-85be-0691c6d981da/nova-api-api/0.log" Oct 13 18:24:06 crc kubenswrapper[4720]: I1013 18:24:06.567224 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_4b19f8c7-2583-4a2a-89e0-6a036d0e63a5/nova-cell1-conductor-conductor/0.log" Oct 13 18:24:06 crc kubenswrapper[4720]: I1013 18:24:06.623214 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_3be6a675-72c8-4120-9b8b-458dba2fe7f2/nova-cell1-novncproxy-novncproxy/0.log" Oct 13 18:24:06 crc kubenswrapper[4720]: I1013 18:24:06.863512 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-tgnks_9810822f-63d1-4a31-bde3-6353a5ee9007/nova-edpm-deployment-openstack-edpm-ipam/0.log" Oct 13 18:24:07 crc kubenswrapper[4720]: I1013 18:24:07.003107 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_8ab3d870-8836-484b-a291-4bc7b329ed83/nova-metadata-log/0.log" Oct 13 18:24:07 crc kubenswrapper[4720]: I1013 18:24:07.318602 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_f8338f95-b766-4ce8-b60e-020957cdee12/mysql-bootstrap/0.log" Oct 13 18:24:07 crc kubenswrapper[4720]: I1013 18:24:07.361658 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_afdc561e-00de-42e7-aeda-de229e3f7836/nova-scheduler-scheduler/0.log" Oct 13 18:24:07 crc kubenswrapper[4720]: I1013 18:24:07.476568 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_f8338f95-b766-4ce8-b60e-020957cdee12/mysql-bootstrap/0.log" Oct 13 18:24:07 crc kubenswrapper[4720]: I1013 18:24:07.481500 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_f8338f95-b766-4ce8-b60e-020957cdee12/galera/0.log" Oct 13 18:24:07 crc kubenswrapper[4720]: I1013 18:24:07.686098 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_fe36eeb1-7f7f-424c-a56c-e96cffc3046d/mysql-bootstrap/0.log" Oct 13 18:24:07 crc kubenswrapper[4720]: I1013 18:24:07.806924 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_fe36eeb1-7f7f-424c-a56c-e96cffc3046d/mysql-bootstrap/0.log" Oct 13 18:24:07 crc kubenswrapper[4720]: I1013 18:24:07.853948 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_fe36eeb1-7f7f-424c-a56c-e96cffc3046d/galera/0.log" Oct 13 18:24:07 crc kubenswrapper[4720]: I1013 18:24:07.998332 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_f87dfa54-2548-4cf1-ad02-c7663263650c/openstackclient/0.log" Oct 13 18:24:08 crc kubenswrapper[4720]: I1013 18:24:08.087403 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-fkb8p_d6433630-935f-4a61-acab-4ceb6de36866/openstack-network-exporter/0.log" Oct 13 18:24:08 crc kubenswrapper[4720]: I1013 18:24:08.261652 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-cz99q_b92efbfa-6501-4601-9432-8c37dbe4e020/ovsdb-server-init/0.log" Oct 13 
18:24:08 crc kubenswrapper[4720]: I1013 18:24:08.376149 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_8ab3d870-8836-484b-a291-4bc7b329ed83/nova-metadata-metadata/0.log" Oct 13 18:24:08 crc kubenswrapper[4720]: I1013 18:24:08.465266 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-cz99q_b92efbfa-6501-4601-9432-8c37dbe4e020/ovsdb-server-init/0.log" Oct 13 18:24:08 crc kubenswrapper[4720]: I1013 18:24:08.500302 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-cz99q_b92efbfa-6501-4601-9432-8c37dbe4e020/ovsdb-server/0.log" Oct 13 18:24:08 crc kubenswrapper[4720]: I1013 18:24:08.543196 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-cz99q_b92efbfa-6501-4601-9432-8c37dbe4e020/ovs-vswitchd/0.log" Oct 13 18:24:08 crc kubenswrapper[4720]: I1013 18:24:08.716863 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-vbc6h_283c0b58-d0a1-4cf1-af87-3859306c4a60/ovn-controller/0.log" Oct 13 18:24:08 crc kubenswrapper[4720]: I1013 18:24:08.765729 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-fhdbt_64141929-3427-4673-9aea-5ce314ceb23b/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Oct 13 18:24:08 crc kubenswrapper[4720]: I1013 18:24:08.894460 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_de1b534b-dfe4-42f3-ac5f-4aace4f956b6/openstack-network-exporter/0.log" Oct 13 18:24:08 crc kubenswrapper[4720]: I1013 18:24:08.943679 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_de1b534b-dfe4-42f3-ac5f-4aace4f956b6/ovn-northd/0.log" Oct 13 18:24:09 crc kubenswrapper[4720]: I1013 18:24:09.097271 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_a3b2dccc-71b7-4dd6-9c8d-f1c12382a832/openstack-network-exporter/0.log" Oct 13 18:24:09 crc kubenswrapper[4720]: I1013 18:24:09.190868 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_a3b2dccc-71b7-4dd6-9c8d-f1c12382a832/ovsdbserver-nb/0.log" Oct 13 18:24:09 crc kubenswrapper[4720]: I1013 18:24:09.274771 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_01489c11-3710-4d60-a702-71fda5b496ea/openstack-network-exporter/0.log" Oct 13 18:24:09 crc kubenswrapper[4720]: I1013 18:24:09.320099 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_01489c11-3710-4d60-a702-71fda5b496ea/ovsdbserver-sb/0.log" Oct 13 18:24:09 crc kubenswrapper[4720]: I1013 18:24:09.447171 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-544f6df47b-z9rm6_acf3c288-2800-445a-9d67-134e0a7faac9/placement-api/0.log" Oct 13 18:24:09 crc kubenswrapper[4720]: I1013 18:24:09.555385 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-544f6df47b-z9rm6_acf3c288-2800-445a-9d67-134e0a7faac9/placement-log/0.log" Oct 13 18:24:09 crc kubenswrapper[4720]: I1013 18:24:09.617456 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_0bc24914-0bdd-4fa7-a859-a4d4f06f0455/setup-container/0.log" Oct 13 18:24:09 crc kubenswrapper[4720]: I1013 18:24:09.794545 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_0bc24914-0bdd-4fa7-a859-a4d4f06f0455/setup-container/0.log" Oct 13 
Oct 13 18:24:09 crc kubenswrapper[4720]: I1013 18:24:09.862687 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_0bc24914-0bdd-4fa7-a859-a4d4f06f0455/rabbitmq/0.log"
Oct 13 18:24:09 crc kubenswrapper[4720]: I1013 18:24:09.884768 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_234df878-2921-45dc-854c-b3840afdbd45/setup-container/0.log"
Oct 13 18:24:10 crc kubenswrapper[4720]: I1013 18:24:10.079179 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-r964t_28cc87d3-31ea-48dd-8169-3ac47061e244/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log"
Oct 13 18:24:10 crc kubenswrapper[4720]: I1013 18:24:10.088981 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_234df878-2921-45dc-854c-b3840afdbd45/rabbitmq/0.log"
Oct 13 18:24:10 crc kubenswrapper[4720]: I1013 18:24:10.165392 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_234df878-2921-45dc-854c-b3840afdbd45/setup-container/0.log"
Oct 13 18:24:10 crc kubenswrapper[4720]: I1013 18:24:10.372168 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-fvrff_1571f354-4e14-443b-b5fa-b0158ed87248/redhat-edpm-deployment-openstack-edpm-ipam/0.log"
Oct 13 18:24:10 crc kubenswrapper[4720]: I1013 18:24:10.403009 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-fj58w_28ae3d76-f715-46df-be4a-d621a7467347/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log"
Oct 13 18:24:10 crc kubenswrapper[4720]: I1013 18:24:10.523701 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-94xkd_d6ae1ade-ceec-4b00-b028-1272c83dea9a/run-os-edpm-deployment-openstack-edpm-ipam/0.log"
Oct 13 18:24:10 crc kubenswrapper[4720]: I1013 18:24:10.677283 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-8vhmv_06bc5000-9f94-4cff-ade7-ac063d97ef79/ssh-known-hosts-edpm-deployment/0.log"
Oct 13 18:24:10 crc kubenswrapper[4720]: I1013 18:24:10.924442 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-694bd85589-jdgbb_d51a0725-9566-428f-a34b-3b0345774d1f/proxy-server/0.log"
Oct 13 18:24:10 crc kubenswrapper[4720]: I1013 18:24:10.968924 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-694bd85589-jdgbb_d51a0725-9566-428f-a34b-3b0345774d1f/proxy-httpd/0.log"
Oct 13 18:24:11 crc kubenswrapper[4720]: I1013 18:24:11.120410 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-9s2xb_3f752de1-5826-4009-a77c-b9186d9811ea/swift-ring-rebalance/0.log"
Oct 13 18:24:11 crc kubenswrapper[4720]: I1013 18:24:11.132080 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2ffe28a4-ed4a-44c6-b982-501575dd907d/account-auditor/0.log"
Oct 13 18:24:11 crc kubenswrapper[4720]: I1013 18:24:11.305561 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2ffe28a4-ed4a-44c6-b982-501575dd907d/account-reaper/0.log"
Oct 13 18:24:11 crc kubenswrapper[4720]: I1013 18:24:11.341288 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2ffe28a4-ed4a-44c6-b982-501575dd907d/account-server/0.log"
Oct 13 18:24:11 crc kubenswrapper[4720]: I1013 18:24:11.363758 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2ffe28a4-ed4a-44c6-b982-501575dd907d/container-auditor/0.log"
Oct 13 18:24:11 crc kubenswrapper[4720]: I1013 18:24:11.399697 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2ffe28a4-ed4a-44c6-b982-501575dd907d/account-replicator/0.log"
Oct 13 18:24:11 crc kubenswrapper[4720]: I1013 18:24:11.484218 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2ffe28a4-ed4a-44c6-b982-501575dd907d/container-replicator/0.log"
Oct 13 18:24:11 crc kubenswrapper[4720]: I1013 18:24:11.551963 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2ffe28a4-ed4a-44c6-b982-501575dd907d/container-server/0.log"
Oct 13 18:24:11 crc kubenswrapper[4720]: I1013 18:24:11.552941 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2ffe28a4-ed4a-44c6-b982-501575dd907d/container-updater/0.log"
Oct 13 18:24:11 crc kubenswrapper[4720]: I1013 18:24:11.684813 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2ffe28a4-ed4a-44c6-b982-501575dd907d/object-expirer/0.log"
Oct 13 18:24:11 crc kubenswrapper[4720]: I1013 18:24:11.704484 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2ffe28a4-ed4a-44c6-b982-501575dd907d/object-auditor/0.log"
Oct 13 18:24:11 crc kubenswrapper[4720]: I1013 18:24:11.783839 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2ffe28a4-ed4a-44c6-b982-501575dd907d/object-server/0.log"
Oct 13 18:24:11 crc kubenswrapper[4720]: I1013 18:24:11.790335 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2ffe28a4-ed4a-44c6-b982-501575dd907d/object-replicator/0.log"
Oct 13 18:24:11 crc kubenswrapper[4720]: I1013 18:24:11.924290 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2ffe28a4-ed4a-44c6-b982-501575dd907d/object-updater/0.log"
Oct 13 18:24:11 crc kubenswrapper[4720]: I1013 18:24:11.950031 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2ffe28a4-ed4a-44c6-b982-501575dd907d/rsync/0.log"
Oct 13 18:24:12 crc kubenswrapper[4720]: I1013 18:24:12.029599 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2ffe28a4-ed4a-44c6-b982-501575dd907d/swift-recon-cron/0.log"
Oct 13 18:24:12 crc kubenswrapper[4720]: I1013 18:24:12.162700 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-bsltd_6c0e5c67-6b6c-4b09-8d45-f37f83c017a7/telemetry-edpm-deployment-openstack-edpm-ipam/0.log"
Oct 13 18:24:12 crc kubenswrapper[4720]: I1013 18:24:12.288890 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_ece01f62-fd6d-4c42-9c9a-3bc25feed3cb/tempest-tests-tempest-tests-runner/0.log"
Oct 13 18:24:12 crc kubenswrapper[4720]: I1013 18:24:12.444869 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_82b017de-6691-4cf4-941f-9e0334669ced/test-operator-logs-container/0.log"
path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-c7zjx_3d96fecb-1b5a-4d39-8f6f-82755c63a757/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Oct 13 18:24:19 crc kubenswrapper[4720]: I1013 18:24:19.172042 4720 scope.go:117] "RemoveContainer" containerID="79ca5c733b5eaab4a6d8e4dab434c0461ff00748dd7441ce45a3602ece00a48e" Oct 13 18:24:19 crc kubenswrapper[4720]: E1013 18:24:19.173549 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" Oct 13 18:24:22 crc kubenswrapper[4720]: I1013 18:24:22.799629 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_e6b1817f-f719-4727-ad61-56061b241d4b/memcached/0.log" Oct 13 18:24:33 crc kubenswrapper[4720]: I1013 18:24:33.168627 4720 scope.go:117] "RemoveContainer" containerID="79ca5c733b5eaab4a6d8e4dab434c0461ff00748dd7441ce45a3602ece00a48e" Oct 13 18:24:33 crc kubenswrapper[4720]: E1013 18:24:33.169661 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" Oct 13 18:24:37 crc kubenswrapper[4720]: I1013 18:24:37.143622 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-64f84fcdbb-bk68v_d34e7c64-7562-4a1a-8d47-20b3bb785756/kube-rbac-proxy/0.log" Oct 13 18:24:37 crc kubenswrapper[4720]: I1013 18:24:37.246860 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-64f84fcdbb-bk68v_d34e7c64-7562-4a1a-8d47-20b3bb785756/manager/0.log" Oct 13 18:24:37 crc kubenswrapper[4720]: I1013 18:24:37.357558 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-59cdc64769-nlnsl_d81c88e6-1b2a-405d-861a-ca4b3baed83d/kube-rbac-proxy/0.log" Oct 13 18:24:37 crc kubenswrapper[4720]: I1013 18:24:37.436633 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-59cdc64769-nlnsl_d81c88e6-1b2a-405d-861a-ca4b3baed83d/manager/0.log" Oct 13 18:24:37 crc kubenswrapper[4720]: I1013 18:24:37.483940 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-687df44cdb-ldlvt_c25871a6-cdf1-49c1-8d51-ab4fb186fa83/kube-rbac-proxy/0.log" Oct 13 18:24:37 crc kubenswrapper[4720]: I1013 18:24:37.584466 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-687df44cdb-ldlvt_c25871a6-cdf1-49c1-8d51-ab4fb186fa83/manager/0.log" Oct 13 18:24:37 crc kubenswrapper[4720]: I1013 18:24:37.660924 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f4b764a5aa97d0f87d08f09894533aff6b93246953653cc0ea80385c49l69l6_f56e63f6-a476-4150-a661-07e988c98f28/util/0.log" Oct 13 18:24:37 crc 
Oct 13 18:24:37 crc kubenswrapper[4720]: I1013 18:24:37.985245 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f4b764a5aa97d0f87d08f09894533aff6b93246953653cc0ea80385c49l69l6_f56e63f6-a476-4150-a661-07e988c98f28/util/0.log"
Oct 13 18:24:38 crc kubenswrapper[4720]: I1013 18:24:38.055097 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f4b764a5aa97d0f87d08f09894533aff6b93246953653cc0ea80385c49l69l6_f56e63f6-a476-4150-a661-07e988c98f28/pull/0.log"
Oct 13 18:24:38 crc kubenswrapper[4720]: I1013 18:24:38.068014 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f4b764a5aa97d0f87d08f09894533aff6b93246953653cc0ea80385c49l69l6_f56e63f6-a476-4150-a661-07e988c98f28/pull/0.log"
Oct 13 18:24:38 crc kubenswrapper[4720]: I1013 18:24:38.197232 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f4b764a5aa97d0f87d08f09894533aff6b93246953653cc0ea80385c49l69l6_f56e63f6-a476-4150-a661-07e988c98f28/util/0.log"
Oct 13 18:24:38 crc kubenswrapper[4720]: I1013 18:24:38.227937 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f4b764a5aa97d0f87d08f09894533aff6b93246953653cc0ea80385c49l69l6_f56e63f6-a476-4150-a661-07e988c98f28/pull/0.log"
Oct 13 18:24:38 crc kubenswrapper[4720]: I1013 18:24:38.260622 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f4b764a5aa97d0f87d08f09894533aff6b93246953653cc0ea80385c49l69l6_f56e63f6-a476-4150-a661-07e988c98f28/extract/0.log"
Oct 13 18:24:38 crc kubenswrapper[4720]: I1013 18:24:38.465837 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-7bb46cd7d-v7vx2_a9b388de-4993-46d1-86db-ac92a9df4f2f/kube-rbac-proxy/0.log"
Oct 13 18:24:38 crc kubenswrapper[4720]: I1013 18:24:38.527066 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-7bb46cd7d-v7vx2_a9b388de-4993-46d1-86db-ac92a9df4f2f/manager/0.log"
Oct 13 18:24:38 crc kubenswrapper[4720]: I1013 18:24:38.578858 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-6d9967f8dd-qvv82_771ce8c4-ac65-4db7-bc56-a8b7cb2f1448/kube-rbac-proxy/0.log"
Oct 13 18:24:38 crc kubenswrapper[4720]: I1013 18:24:38.671261 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-6d9967f8dd-qvv82_771ce8c4-ac65-4db7-bc56-a8b7cb2f1448/manager/0.log"
Oct 13 18:24:38 crc kubenswrapper[4720]: I1013 18:24:38.703248 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-6d74794d9b-hvb85_d266783d-75ba-4864-af5d-4f2b8702c6a9/kube-rbac-proxy/0.log"
Oct 13 18:24:38 crc kubenswrapper[4720]: I1013 18:24:38.801099 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-6d74794d9b-hvb85_d266783d-75ba-4864-af5d-4f2b8702c6a9/manager/0.log"
Oct 13 18:24:38 crc kubenswrapper[4720]: I1013 18:24:38.967087 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-585fc5b659-2f6bd_45c8f080-0f28-47b5-80df-e1877c3f77bb/kube-rbac-proxy/0.log"
path="/var/log/pods/openstack-operators_infra-operator-controller-manager-585fc5b659-2f6bd_45c8f080-0f28-47b5-80df-e1877c3f77bb/manager/0.log" Oct 13 18:24:39 crc kubenswrapper[4720]: I1013 18:24:39.136490 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-74cb5cbc49-bxtwp_5a86eb76-3453-4f0e-8529-c877f739d822/kube-rbac-proxy/0.log" Oct 13 18:24:39 crc kubenswrapper[4720]: I1013 18:24:39.197102 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-74cb5cbc49-bxtwp_5a86eb76-3453-4f0e-8529-c877f739d822/manager/0.log" Oct 13 18:24:39 crc kubenswrapper[4720]: I1013 18:24:39.311277 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-ddb98f99b-qqmcw_32b53ed3-af12-4a7d-b371-eab8aa1ab1bb/kube-rbac-proxy/0.log" Oct 13 18:24:39 crc kubenswrapper[4720]: I1013 18:24:39.432501 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-ddb98f99b-qqmcw_32b53ed3-af12-4a7d-b371-eab8aa1ab1bb/manager/0.log" Oct 13 18:24:39 crc kubenswrapper[4720]: I1013 18:24:39.513041 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-59578bc799-mhlvn_284cd6cf-5985-4cad-a31c-f91f3c2098c6/kube-rbac-proxy/0.log" Oct 13 18:24:39 crc kubenswrapper[4720]: I1013 18:24:39.574834 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-59578bc799-mhlvn_284cd6cf-5985-4cad-a31c-f91f3c2098c6/manager/0.log" Oct 13 18:24:39 crc kubenswrapper[4720]: I1013 18:24:39.679617 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-5777b4f897-mpdxw_30a28fe6-1905-48df-ab2d-b9d92eaf940e/kube-rbac-proxy/0.log" Oct 13 18:24:39 crc kubenswrapper[4720]: I1013 18:24:39.751163 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-5777b4f897-mpdxw_30a28fe6-1905-48df-ab2d-b9d92eaf940e/manager/0.log" Oct 13 18:24:39 crc kubenswrapper[4720]: I1013 18:24:39.864792 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-797d478b46-m44cs_ccb8b109-9d10-48f5-b8ce-65ad05b5e1a4/kube-rbac-proxy/0.log" Oct 13 18:24:39 crc kubenswrapper[4720]: I1013 18:24:39.943518 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-797d478b46-m44cs_ccb8b109-9d10-48f5-b8ce-65ad05b5e1a4/manager/0.log" Oct 13 18:24:39 crc kubenswrapper[4720]: I1013 18:24:39.944607 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-57bb74c7bf-dkbrd_246b649b-7481-433e-aaf0-30cebf5543d8/kube-rbac-proxy/0.log" Oct 13 18:24:40 crc kubenswrapper[4720]: I1013 18:24:40.156263 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-57bb74c7bf-dkbrd_246b649b-7481-433e-aaf0-30cebf5543d8/manager/0.log" Oct 13 18:24:40 crc kubenswrapper[4720]: I1013 18:24:40.212681 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6d7c7ddf95-4jfq5_60fc7b7f-c85a-4a6d-8de9-e8e9e8df8ada/kube-rbac-proxy/0.log" Oct 13 18:24:40 crc kubenswrapper[4720]: I1013 18:24:40.224840 4720 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6d7c7ddf95-4jfq5_60fc7b7f-c85a-4a6d-8de9-e8e9e8df8ada/manager/0.log" Oct 13 18:24:40 crc kubenswrapper[4720]: I1013 18:24:40.372566 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6cc7fb757dd6zq7_b7c96c4b-b0c5-4c82-a6ab-3878c394eab0/kube-rbac-proxy/0.log" Oct 13 18:24:40 crc kubenswrapper[4720]: I1013 18:24:40.448873 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6cc7fb757dd6zq7_b7c96c4b-b0c5-4c82-a6ab-3878c394eab0/manager/0.log" Oct 13 18:24:40 crc kubenswrapper[4720]: I1013 18:24:40.531879 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-bb4f97fd9-d7cs5_4b641160-215b-4547-a820-d613c04d9348/kube-rbac-proxy/0.log" Oct 13 18:24:40 crc kubenswrapper[4720]: I1013 18:24:40.665035 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-c9c874ff7-8qrr8_2a1eb7a4-db4c-4029-9320-c447d9f1c69c/kube-rbac-proxy/0.log" Oct 13 18:24:40 crc kubenswrapper[4720]: I1013 18:24:40.846604 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-qnssk_21cd75be-1f87-4a83-a140-d31263d1c86f/registry-server/0.log" Oct 13 18:24:40 crc kubenswrapper[4720]: I1013 18:24:40.878001 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-c9c874ff7-8qrr8_2a1eb7a4-db4c-4029-9320-c447d9f1c69c/operator/0.log" Oct 13 18:24:40 crc kubenswrapper[4720]: I1013 18:24:40.973078 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-869cc7797f-gzwxn_487124d6-9dcd-4173-8f78-2dbf29cafe87/kube-rbac-proxy/0.log" Oct 13 18:24:41 crc kubenswrapper[4720]: I1013 18:24:41.135160 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-664664cb68-xwn8g_2eab29c4-2ebe-4f71-af0d-df5f0d113f66/kube-rbac-proxy/0.log" Oct 13 18:24:41 crc kubenswrapper[4720]: I1013 18:24:41.175574 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-869cc7797f-gzwxn_487124d6-9dcd-4173-8f78-2dbf29cafe87/manager/0.log" Oct 13 18:24:41 crc kubenswrapper[4720]: I1013 18:24:41.261721 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-664664cb68-xwn8g_2eab29c4-2ebe-4f71-af0d-df5f0d113f66/manager/0.log" Oct 13 18:24:41 crc kubenswrapper[4720]: I1013 18:24:41.425142 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-5f97d8c699-h959b_f7d32fd1-190f-46ec-a313-b3c0b2c58556/operator/0.log" Oct 13 18:24:41 crc kubenswrapper[4720]: I1013 18:24:41.491470 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-bb4f97fd9-d7cs5_4b641160-215b-4547-a820-d613c04d9348/manager/0.log" Oct 13 18:24:41 crc kubenswrapper[4720]: I1013 18:24:41.556372 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f4d5dfdc6-g4th9_935d79c8-281f-4ad8-8c6d-404c0e89653e/kube-rbac-proxy/0.log" Oct 13 18:24:41 crc kubenswrapper[4720]: I1013 
Oct 13 18:24:41 crc kubenswrapper[4720]: I1013 18:24:41.665756 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f4d5dfdc6-g4th9_935d79c8-281f-4ad8-8c6d-404c0e89653e/manager/0.log"
Oct 13 18:24:41 crc kubenswrapper[4720]: I1013 18:24:41.783546 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-578874c84d-62vnr_20db86ae-f595-4a1d-b000-c97df02b65af/kube-rbac-proxy/0.log"
Oct 13 18:24:41 crc kubenswrapper[4720]: I1013 18:24:41.784175 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-578874c84d-62vnr_20db86ae-f595-4a1d-b000-c97df02b65af/manager/0.log"
Oct 13 18:24:41 crc kubenswrapper[4720]: I1013 18:24:41.835385 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-ffcdd6c94-xm8vk_5c9d42bc-4b65-42f2-beda-164c7c5ba3e2/manager/0.log"
Oct 13 18:24:41 crc kubenswrapper[4720]: I1013 18:24:41.836774 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-ffcdd6c94-xm8vk_5c9d42bc-4b65-42f2-beda-164c7c5ba3e2/kube-rbac-proxy/0.log"
Oct 13 18:24:41 crc kubenswrapper[4720]: I1013 18:24:41.992236 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-646675d848-c8bnc_492905c0-fe64-45b8-af6b-5d7373c3f71a/manager/0.log"
Oct 13 18:24:41 crc kubenswrapper[4720]: I1013 18:24:41.995703 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-646675d848-c8bnc_492905c0-fe64-45b8-af6b-5d7373c3f71a/kube-rbac-proxy/0.log"
Oct 13 18:24:44 crc kubenswrapper[4720]: I1013 18:24:44.168845 4720 scope.go:117] "RemoveContainer" containerID="79ca5c733b5eaab4a6d8e4dab434c0461ff00748dd7441ce45a3602ece00a48e"
Oct 13 18:24:44 crc kubenswrapper[4720]: E1013 18:24:44.169545 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4"
Oct 13 18:24:57 crc kubenswrapper[4720]: I1013 18:24:57.174497 4720 scope.go:117] "RemoveContainer" containerID="79ca5c733b5eaab4a6d8e4dab434c0461ff00748dd7441ce45a3602ece00a48e"
Oct 13 18:24:57 crc kubenswrapper[4720]: E1013 18:24:57.175267 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4"
Oct 13 18:24:58 crc kubenswrapper[4720]: I1013 18:24:58.123579 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-mjzm6_cabe6b29-bccd-4995-ab54-b6cabc86f7bf/control-plane-machine-set-operator/0.log"
path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-dvc6z_43c40b45-9695-4d29-b627-c4ab23d1d6d0/kube-rbac-proxy/0.log" Oct 13 18:24:58 crc kubenswrapper[4720]: I1013 18:24:58.368028 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-dvc6z_43c40b45-9695-4d29-b627-c4ab23d1d6d0/machine-api-operator/0.log" Oct 13 18:25:08 crc kubenswrapper[4720]: I1013 18:25:08.168640 4720 scope.go:117] "RemoveContainer" containerID="79ca5c733b5eaab4a6d8e4dab434c0461ff00748dd7441ce45a3602ece00a48e" Oct 13 18:25:08 crc kubenswrapper[4720]: E1013 18:25:08.170673 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" Oct 13 18:25:11 crc kubenswrapper[4720]: I1013 18:25:11.437624 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-hcgbx_468376ef-c1ab-4db7-9006-0ded29f5c690/cert-manager-controller/0.log" Oct 13 18:25:11 crc kubenswrapper[4720]: I1013 18:25:11.582086 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-x8zf7_c0e61a1b-8c01-4c98-a6bd-cff432642c53/cert-manager-cainjector/0.log" Oct 13 18:25:11 crc kubenswrapper[4720]: I1013 18:25:11.675831 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-mrmx6_7142c613-395a-40ef-bef1-20ed0b6cdad3/cert-manager-webhook/0.log" Oct 13 18:25:22 crc kubenswrapper[4720]: I1013 18:25:22.169427 4720 scope.go:117] "RemoveContainer" containerID="79ca5c733b5eaab4a6d8e4dab434c0461ff00748dd7441ce45a3602ece00a48e" Oct 13 18:25:22 crc kubenswrapper[4720]: E1013 18:25:22.170657 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" Oct 13 18:25:24 crc kubenswrapper[4720]: I1013 18:25:24.989085 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-6b874cbd85-jt5jd_f1c52489-5f05-43ca-a79c-db2a69061eac/nmstate-console-plugin/0.log" Oct 13 18:25:25 crc kubenswrapper[4720]: I1013 18:25:25.123721 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-r6sg8_420f8bbd-5b94-4775-8248-68220b91202f/nmstate-handler/0.log" Oct 13 18:25:25 crc kubenswrapper[4720]: I1013 18:25:25.181677 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-fdff9cb8d-k7h8j_fa98b66b-f1b2-4d57-8386-b449bf1076ec/kube-rbac-proxy/0.log" Oct 13 18:25:25 crc kubenswrapper[4720]: I1013 18:25:25.231292 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-fdff9cb8d-k7h8j_fa98b66b-f1b2-4d57-8386-b449bf1076ec/nmstate-metrics/0.log" Oct 13 18:25:25 crc kubenswrapper[4720]: I1013 18:25:25.366812 4720 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-operator-858ddd8f98-jprhs_ffdb8c39-acdf-40d9-9c23-bb881eb0b755/nmstate-operator/0.log" Oct 13 18:25:25 crc kubenswrapper[4720]: I1013 18:25:25.444997 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-6cdbc54649-66xwd_e6844590-4dcb-4007-9e33-12ded957f55b/nmstate-webhook/0.log" Oct 13 18:25:34 crc kubenswrapper[4720]: I1013 18:25:34.167642 4720 scope.go:117] "RemoveContainer" containerID="79ca5c733b5eaab4a6d8e4dab434c0461ff00748dd7441ce45a3602ece00a48e" Oct 13 18:25:34 crc kubenswrapper[4720]: E1013 18:25:34.168560 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" Oct 13 18:25:39 crc kubenswrapper[4720]: I1013 18:25:39.767222 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-68d546b9d8-ztrn5_34074aee-3c24-4d8c-929b-d0feb37ead02/kube-rbac-proxy/0.log" Oct 13 18:25:39 crc kubenswrapper[4720]: I1013 18:25:39.892029 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-68d546b9d8-ztrn5_34074aee-3c24-4d8c-929b-d0feb37ead02/controller/0.log" Oct 13 18:25:39 crc kubenswrapper[4720]: I1013 18:25:39.926396 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fw488_d10a0690-99f5-416a-b597-d41fa0635070/cp-frr-files/0.log" Oct 13 18:25:40 crc kubenswrapper[4720]: I1013 18:25:40.116692 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fw488_d10a0690-99f5-416a-b597-d41fa0635070/cp-reloader/0.log" Oct 13 18:25:40 crc kubenswrapper[4720]: I1013 18:25:40.121360 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fw488_d10a0690-99f5-416a-b597-d41fa0635070/cp-frr-files/0.log" Oct 13 18:25:40 crc kubenswrapper[4720]: I1013 18:25:40.141228 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fw488_d10a0690-99f5-416a-b597-d41fa0635070/cp-metrics/0.log" Oct 13 18:25:40 crc kubenswrapper[4720]: I1013 18:25:40.155268 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fw488_d10a0690-99f5-416a-b597-d41fa0635070/cp-reloader/0.log" Oct 13 18:25:40 crc kubenswrapper[4720]: I1013 18:25:40.330761 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fw488_d10a0690-99f5-416a-b597-d41fa0635070/cp-frr-files/0.log" Oct 13 18:25:40 crc kubenswrapper[4720]: I1013 18:25:40.348500 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fw488_d10a0690-99f5-416a-b597-d41fa0635070/cp-metrics/0.log" Oct 13 18:25:40 crc kubenswrapper[4720]: I1013 18:25:40.366094 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fw488_d10a0690-99f5-416a-b597-d41fa0635070/cp-reloader/0.log" Oct 13 18:25:40 crc kubenswrapper[4720]: I1013 18:25:40.366234 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fw488_d10a0690-99f5-416a-b597-d41fa0635070/cp-metrics/0.log" Oct 13 18:25:40 crc kubenswrapper[4720]: I1013 18:25:40.513872 4720 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-fw488_d10a0690-99f5-416a-b597-d41fa0635070/cp-frr-files/0.log" Oct 13 18:25:40 crc kubenswrapper[4720]: I1013 18:25:40.513986 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fw488_d10a0690-99f5-416a-b597-d41fa0635070/cp-reloader/0.log" Oct 13 18:25:40 crc kubenswrapper[4720]: I1013 18:25:40.564495 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fw488_d10a0690-99f5-416a-b597-d41fa0635070/controller/0.log" Oct 13 18:25:40 crc kubenswrapper[4720]: I1013 18:25:40.566649 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fw488_d10a0690-99f5-416a-b597-d41fa0635070/cp-metrics/0.log" Oct 13 18:25:40 crc kubenswrapper[4720]: I1013 18:25:40.720092 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fw488_d10a0690-99f5-416a-b597-d41fa0635070/frr-metrics/0.log" Oct 13 18:25:40 crc kubenswrapper[4720]: I1013 18:25:40.747757 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fw488_d10a0690-99f5-416a-b597-d41fa0635070/kube-rbac-proxy/0.log" Oct 13 18:25:40 crc kubenswrapper[4720]: I1013 18:25:40.774364 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fw488_d10a0690-99f5-416a-b597-d41fa0635070/kube-rbac-proxy-frr/0.log" Oct 13 18:25:40 crc kubenswrapper[4720]: I1013 18:25:40.929488 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fw488_d10a0690-99f5-416a-b597-d41fa0635070/reloader/0.log" Oct 13 18:25:40 crc kubenswrapper[4720]: I1013 18:25:40.989792 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-64bf5d555-wlwb8_01bbae6a-0286-4e02-bf7a-bdc1e5ba9e53/frr-k8s-webhook-server/0.log" Oct 13 18:25:41 crc kubenswrapper[4720]: I1013 18:25:41.187939 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-68b7b9f484-t4mwn_676df020-2204-4ef7-88b8-88eb27f8068b/manager/0.log" Oct 13 18:25:41 crc kubenswrapper[4720]: I1013 18:25:41.418374 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-86c9779c6-vtc8m_8eec22d4-687d-427c-a53e-5316b69e5448/webhook-server/0.log" Oct 13 18:25:41 crc kubenswrapper[4720]: I1013 18:25:41.443869 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-2kfxp_bd395621-3d53-4b0b-b8da-f7d5c7df9570/kube-rbac-proxy/0.log" Oct 13 18:25:42 crc kubenswrapper[4720]: I1013 18:25:42.039221 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-2kfxp_bd395621-3d53-4b0b-b8da-f7d5c7df9570/speaker/0.log" Oct 13 18:25:42 crc kubenswrapper[4720]: I1013 18:25:42.167394 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fw488_d10a0690-99f5-416a-b597-d41fa0635070/frr/0.log" Oct 13 18:25:47 crc kubenswrapper[4720]: I1013 18:25:47.169069 4720 scope.go:117] "RemoveContainer" containerID="79ca5c733b5eaab4a6d8e4dab434c0461ff00748dd7441ce45a3602ece00a48e" Oct 13 18:25:47 crc kubenswrapper[4720]: E1013 18:25:47.170173 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" Oct 13 18:25:54 crc kubenswrapper[4720]: I1013 18:25:54.674445 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ss4c8_d5bc3e7b-d845-48f4-9387-94904ed3b983/util/0.log" Oct 13 18:25:54 crc kubenswrapper[4720]: I1013 18:25:54.803097 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ss4c8_d5bc3e7b-d845-48f4-9387-94904ed3b983/pull/0.log" Oct 13 18:25:54 crc kubenswrapper[4720]: I1013 18:25:54.830587 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ss4c8_d5bc3e7b-d845-48f4-9387-94904ed3b983/util/0.log" Oct 13 18:25:54 crc kubenswrapper[4720]: I1013 18:25:54.834452 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ss4c8_d5bc3e7b-d845-48f4-9387-94904ed3b983/pull/0.log" Oct 13 18:25:55 crc kubenswrapper[4720]: I1013 18:25:55.022099 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ss4c8_d5bc3e7b-d845-48f4-9387-94904ed3b983/util/0.log" Oct 13 18:25:55 crc kubenswrapper[4720]: I1013 18:25:55.023722 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ss4c8_d5bc3e7b-d845-48f4-9387-94904ed3b983/pull/0.log" Oct 13 18:25:55 crc kubenswrapper[4720]: I1013 18:25:55.046814 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ss4c8_d5bc3e7b-d845-48f4-9387-94904ed3b983/extract/0.log" Oct 13 18:25:55 crc kubenswrapper[4720]: I1013 18:25:55.221900 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wc648_ea62eeb5-7f86-4555-ba88-fb04f9986df6/extract-utilities/0.log" Oct 13 18:25:55 crc kubenswrapper[4720]: I1013 18:25:55.357585 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wc648_ea62eeb5-7f86-4555-ba88-fb04f9986df6/extract-content/0.log" Oct 13 18:25:55 crc kubenswrapper[4720]: I1013 18:25:55.363378 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wc648_ea62eeb5-7f86-4555-ba88-fb04f9986df6/extract-utilities/0.log" Oct 13 18:25:55 crc kubenswrapper[4720]: I1013 18:25:55.364992 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wc648_ea62eeb5-7f86-4555-ba88-fb04f9986df6/extract-content/0.log" Oct 13 18:25:55 crc kubenswrapper[4720]: I1013 18:25:55.593468 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wc648_ea62eeb5-7f86-4555-ba88-fb04f9986df6/extract-content/0.log" Oct 13 18:25:55 crc kubenswrapper[4720]: I1013 18:25:55.622791 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wc648_ea62eeb5-7f86-4555-ba88-fb04f9986df6/extract-utilities/0.log" Oct 13 18:25:55 crc kubenswrapper[4720]: I1013 18:25:55.815207 4720 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-rj2jd_7af8d26b-d10a-4be8-8773-03204f461fe3/extract-utilities/0.log" Oct 13 18:25:55 crc kubenswrapper[4720]: I1013 18:25:55.947656 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rj2jd_7af8d26b-d10a-4be8-8773-03204f461fe3/extract-content/0.log" Oct 13 18:25:55 crc kubenswrapper[4720]: I1013 18:25:55.956139 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rj2jd_7af8d26b-d10a-4be8-8773-03204f461fe3/extract-utilities/0.log" Oct 13 18:25:55 crc kubenswrapper[4720]: I1013 18:25:55.977529 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wc648_ea62eeb5-7f86-4555-ba88-fb04f9986df6/registry-server/0.log" Oct 13 18:25:56 crc kubenswrapper[4720]: I1013 18:25:56.044341 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rj2jd_7af8d26b-d10a-4be8-8773-03204f461fe3/extract-content/0.log" Oct 13 18:25:56 crc kubenswrapper[4720]: I1013 18:25:56.150301 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rj2jd_7af8d26b-d10a-4be8-8773-03204f461fe3/extract-utilities/0.log" Oct 13 18:25:56 crc kubenswrapper[4720]: I1013 18:25:56.186337 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rj2jd_7af8d26b-d10a-4be8-8773-03204f461fe3/extract-content/0.log" Oct 13 18:25:56 crc kubenswrapper[4720]: I1013 18:25:56.326524 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cfj2bw_5042638b-5850-4492-a98d-62479bd6624b/util/0.log" Oct 13 18:25:56 crc kubenswrapper[4720]: I1013 18:25:56.599585 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cfj2bw_5042638b-5850-4492-a98d-62479bd6624b/util/0.log" Oct 13 18:25:56 crc kubenswrapper[4720]: I1013 18:25:56.603285 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cfj2bw_5042638b-5850-4492-a98d-62479bd6624b/pull/0.log" Oct 13 18:25:56 crc kubenswrapper[4720]: I1013 18:25:56.637001 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cfj2bw_5042638b-5850-4492-a98d-62479bd6624b/pull/0.log" Oct 13 18:25:56 crc kubenswrapper[4720]: I1013 18:25:56.690024 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rj2jd_7af8d26b-d10a-4be8-8773-03204f461fe3/registry-server/0.log" Oct 13 18:25:56 crc kubenswrapper[4720]: I1013 18:25:56.823726 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cfj2bw_5042638b-5850-4492-a98d-62479bd6624b/util/0.log" Oct 13 18:25:56 crc kubenswrapper[4720]: I1013 18:25:56.824739 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cfj2bw_5042638b-5850-4492-a98d-62479bd6624b/pull/0.log" Oct 13 18:25:56 crc kubenswrapper[4720]: I1013 18:25:56.860733 4720 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cfj2bw_5042638b-5850-4492-a98d-62479bd6624b/extract/0.log" Oct 13 18:25:57 crc kubenswrapper[4720]: I1013 18:25:57.040021 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-wgslm_8524b73b-8e30-4e35-bc36-1b3c9e911ad0/marketplace-operator/0.log" Oct 13 18:25:57 crc kubenswrapper[4720]: I1013 18:25:57.065965 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-69hwq_dab7d1c1-75af-4ffa-a15b-2cc516acfabf/extract-utilities/0.log" Oct 13 18:25:57 crc kubenswrapper[4720]: I1013 18:25:57.221103 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-69hwq_dab7d1c1-75af-4ffa-a15b-2cc516acfabf/extract-utilities/0.log" Oct 13 18:25:57 crc kubenswrapper[4720]: I1013 18:25:57.235373 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-69hwq_dab7d1c1-75af-4ffa-a15b-2cc516acfabf/extract-content/0.log" Oct 13 18:25:57 crc kubenswrapper[4720]: I1013 18:25:57.250197 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-69hwq_dab7d1c1-75af-4ffa-a15b-2cc516acfabf/extract-content/0.log" Oct 13 18:25:57 crc kubenswrapper[4720]: I1013 18:25:57.482673 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-69hwq_dab7d1c1-75af-4ffa-a15b-2cc516acfabf/extract-utilities/0.log" Oct 13 18:25:57 crc kubenswrapper[4720]: I1013 18:25:57.492179 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-69hwq_dab7d1c1-75af-4ffa-a15b-2cc516acfabf/extract-content/0.log" Oct 13 18:25:57 crc kubenswrapper[4720]: I1013 18:25:57.564843 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-69hwq_dab7d1c1-75af-4ffa-a15b-2cc516acfabf/registry-server/0.log" Oct 13 18:25:57 crc kubenswrapper[4720]: I1013 18:25:57.655919 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mdzhd_54894eb4-2aeb-4c93-b8d7-0e22213452f5/extract-utilities/0.log" Oct 13 18:25:57 crc kubenswrapper[4720]: I1013 18:25:57.812628 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mdzhd_54894eb4-2aeb-4c93-b8d7-0e22213452f5/extract-utilities/0.log" Oct 13 18:25:57 crc kubenswrapper[4720]: I1013 18:25:57.820135 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mdzhd_54894eb4-2aeb-4c93-b8d7-0e22213452f5/extract-content/0.log" Oct 13 18:25:57 crc kubenswrapper[4720]: I1013 18:25:57.826232 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mdzhd_54894eb4-2aeb-4c93-b8d7-0e22213452f5/extract-content/0.log" Oct 13 18:25:58 crc kubenswrapper[4720]: I1013 18:25:58.005073 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mdzhd_54894eb4-2aeb-4c93-b8d7-0e22213452f5/extract-content/0.log" Oct 13 18:25:58 crc kubenswrapper[4720]: I1013 18:25:58.018592 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mdzhd_54894eb4-2aeb-4c93-b8d7-0e22213452f5/extract-utilities/0.log" Oct 13 18:25:58 crc kubenswrapper[4720]: I1013 18:25:58.168377 4720 scope.go:117] "RemoveContainer" 
containerID="79ca5c733b5eaab4a6d8e4dab434c0461ff00748dd7441ce45a3602ece00a48e" Oct 13 18:25:58 crc kubenswrapper[4720]: E1013 18:25:58.168599 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" Oct 13 18:25:58 crc kubenswrapper[4720]: I1013 18:25:58.463913 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mdzhd_54894eb4-2aeb-4c93-b8d7-0e22213452f5/registry-server/0.log" Oct 13 18:26:11 crc kubenswrapper[4720]: I1013 18:26:11.168034 4720 scope.go:117] "RemoveContainer" containerID="79ca5c733b5eaab4a6d8e4dab434c0461ff00748dd7441ce45a3602ece00a48e" Oct 13 18:26:11 crc kubenswrapper[4720]: E1013 18:26:11.168704 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" Oct 13 18:26:15 crc kubenswrapper[4720]: E1013 18:26:15.243463 4720 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.17:57308->38.102.83.17:42869: write tcp 38.102.83.17:57308->38.102.83.17:42869: write: broken pipe Oct 13 18:26:24 crc kubenswrapper[4720]: I1013 18:26:24.168681 4720 scope.go:117] "RemoveContainer" containerID="79ca5c733b5eaab4a6d8e4dab434c0461ff00748dd7441ce45a3602ece00a48e" Oct 13 18:26:24 crc kubenswrapper[4720]: E1013 18:26:24.169443 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" Oct 13 18:26:36 crc kubenswrapper[4720]: I1013 18:26:36.168693 4720 scope.go:117] "RemoveContainer" containerID="79ca5c733b5eaab4a6d8e4dab434c0461ff00748dd7441ce45a3602ece00a48e" Oct 13 18:26:36 crc kubenswrapper[4720]: E1013 18:26:36.169745 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" Oct 13 18:26:48 crc kubenswrapper[4720]: I1013 18:26:48.169553 4720 scope.go:117] "RemoveContainer" containerID="79ca5c733b5eaab4a6d8e4dab434c0461ff00748dd7441ce45a3602ece00a48e" Oct 13 18:26:48 crc kubenswrapper[4720]: E1013 18:26:48.170604 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
Oct 13 18:26:48 crc kubenswrapper[4720]: E1013 18:26:48.170604 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4"
Oct 13 18:27:01 crc kubenswrapper[4720]: I1013 18:27:01.169049 4720 scope.go:117] "RemoveContainer" containerID="79ca5c733b5eaab4a6d8e4dab434c0461ff00748dd7441ce45a3602ece00a48e"
Oct 13 18:27:01 crc kubenswrapper[4720]: E1013 18:27:01.169914 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4"
Oct 13 18:27:13 crc kubenswrapper[4720]: I1013 18:27:13.169404 4720 scope.go:117] "RemoveContainer" containerID="79ca5c733b5eaab4a6d8e4dab434c0461ff00748dd7441ce45a3602ece00a48e"
Oct 13 18:27:13 crc kubenswrapper[4720]: E1013 18:27:13.170377 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4"
Oct 13 18:27:28 crc kubenswrapper[4720]: I1013 18:27:28.168354 4720 scope.go:117] "RemoveContainer" containerID="79ca5c733b5eaab4a6d8e4dab434c0461ff00748dd7441ce45a3602ece00a48e"
Oct 13 18:27:28 crc kubenswrapper[4720]: E1013 18:27:28.169167 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4"
Oct 13 18:27:34 crc kubenswrapper[4720]: I1013 18:27:34.793872 4720 generic.go:334] "Generic (PLEG): container finished" podID="bee557f2-d5ec-4166-b78b-ab8b71c413b0" containerID="376529539af074bee8515187f674d0998b19aa0f9912f033f3e24eee029da047" exitCode=0
Oct 13 18:27:34 crc kubenswrapper[4720]: I1013 18:27:34.793987 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6mndj/must-gather-j95b9" event={"ID":"bee557f2-d5ec-4166-b78b-ab8b71c413b0","Type":"ContainerDied","Data":"376529539af074bee8515187f674d0998b19aa0f9912f033f3e24eee029da047"}
Oct 13 18:27:34 crc kubenswrapper[4720]: I1013 18:27:34.795412 4720 scope.go:117] "RemoveContainer" containerID="376529539af074bee8515187f674d0998b19aa0f9912f033f3e24eee029da047"
Oct 13 18:27:35 crc kubenswrapper[4720]: I1013 18:27:35.082996 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-6mndj_must-gather-j95b9_bee557f2-d5ec-4166-b78b-ab8b71c413b0/gather/0.log"
Oct 13 18:27:42 crc kubenswrapper[4720]: I1013 18:27:42.958885 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-6mndj/must-gather-j95b9"]
pod="openshift-must-gather-6mndj/must-gather-j95b9" podUID="bee557f2-d5ec-4166-b78b-ab8b71c413b0" containerName="copy" containerID="cri-o://54f0dfe7f7e5ea87d39379885aa932eba4266594432b4f7fa9bcf29d05b4c2a3" gracePeriod=2 Oct 13 18:27:42 crc kubenswrapper[4720]: I1013 18:27:42.978081 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-6mndj/must-gather-j95b9"] Oct 13 18:27:43 crc kubenswrapper[4720]: I1013 18:27:43.168157 4720 scope.go:117] "RemoveContainer" containerID="79ca5c733b5eaab4a6d8e4dab434c0461ff00748dd7441ce45a3602ece00a48e" Oct 13 18:27:43 crc kubenswrapper[4720]: E1013 18:27:43.168470 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" Oct 13 18:27:43 crc kubenswrapper[4720]: I1013 18:27:43.381545 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-6mndj_must-gather-j95b9_bee557f2-d5ec-4166-b78b-ab8b71c413b0/copy/0.log" Oct 13 18:27:43 crc kubenswrapper[4720]: I1013 18:27:43.382074 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6mndj/must-gather-j95b9" Oct 13 18:27:43 crc kubenswrapper[4720]: I1013 18:27:43.454440 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k67p2\" (UniqueName: \"kubernetes.io/projected/bee557f2-d5ec-4166-b78b-ab8b71c413b0-kube-api-access-k67p2\") pod \"bee557f2-d5ec-4166-b78b-ab8b71c413b0\" (UID: \"bee557f2-d5ec-4166-b78b-ab8b71c413b0\") " Oct 13 18:27:43 crc kubenswrapper[4720]: I1013 18:27:43.454694 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bee557f2-d5ec-4166-b78b-ab8b71c413b0-must-gather-output\") pod \"bee557f2-d5ec-4166-b78b-ab8b71c413b0\" (UID: \"bee557f2-d5ec-4166-b78b-ab8b71c413b0\") " Oct 13 18:27:43 crc kubenswrapper[4720]: I1013 18:27:43.460598 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bee557f2-d5ec-4166-b78b-ab8b71c413b0-kube-api-access-k67p2" (OuterVolumeSpecName: "kube-api-access-k67p2") pod "bee557f2-d5ec-4166-b78b-ab8b71c413b0" (UID: "bee557f2-d5ec-4166-b78b-ab8b71c413b0"). InnerVolumeSpecName "kube-api-access-k67p2". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 18:27:43 crc kubenswrapper[4720]: I1013 18:27:43.556783 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k67p2\" (UniqueName: \"kubernetes.io/projected/bee557f2-d5ec-4166-b78b-ab8b71c413b0-kube-api-access-k67p2\") on node \"crc\" DevicePath \"\"" Oct 13 18:27:43 crc kubenswrapper[4720]: I1013 18:27:43.588704 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bee557f2-d5ec-4166-b78b-ab8b71c413b0-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "bee557f2-d5ec-4166-b78b-ab8b71c413b0" (UID: "bee557f2-d5ec-4166-b78b-ab8b71c413b0"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 18:27:43 crc kubenswrapper[4720]: I1013 18:27:43.658095 4720 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bee557f2-d5ec-4166-b78b-ab8b71c413b0-must-gather-output\") on node \"crc\" DevicePath \"\"" Oct 13 18:27:43 crc kubenswrapper[4720]: I1013 18:27:43.884644 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-6mndj_must-gather-j95b9_bee557f2-d5ec-4166-b78b-ab8b71c413b0/copy/0.log" Oct 13 18:27:43 crc kubenswrapper[4720]: I1013 18:27:43.885121 4720 generic.go:334] "Generic (PLEG): container finished" podID="bee557f2-d5ec-4166-b78b-ab8b71c413b0" containerID="54f0dfe7f7e5ea87d39379885aa932eba4266594432b4f7fa9bcf29d05b4c2a3" exitCode=143 Oct 13 18:27:43 crc kubenswrapper[4720]: I1013 18:27:43.885167 4720 scope.go:117] "RemoveContainer" containerID="54f0dfe7f7e5ea87d39379885aa932eba4266594432b4f7fa9bcf29d05b4c2a3" Oct 13 18:27:43 crc kubenswrapper[4720]: I1013 18:27:43.885316 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6mndj/must-gather-j95b9" Oct 13 18:27:43 crc kubenswrapper[4720]: I1013 18:27:43.909431 4720 scope.go:117] "RemoveContainer" containerID="376529539af074bee8515187f674d0998b19aa0f9912f033f3e24eee029da047" Oct 13 18:27:43 crc kubenswrapper[4720]: I1013 18:27:43.960684 4720 scope.go:117] "RemoveContainer" containerID="54f0dfe7f7e5ea87d39379885aa932eba4266594432b4f7fa9bcf29d05b4c2a3" Oct 13 18:27:43 crc kubenswrapper[4720]: E1013 18:27:43.961323 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54f0dfe7f7e5ea87d39379885aa932eba4266594432b4f7fa9bcf29d05b4c2a3\": container with ID starting with 54f0dfe7f7e5ea87d39379885aa932eba4266594432b4f7fa9bcf29d05b4c2a3 not found: ID does not exist" containerID="54f0dfe7f7e5ea87d39379885aa932eba4266594432b4f7fa9bcf29d05b4c2a3" Oct 13 18:27:43 crc kubenswrapper[4720]: I1013 18:27:43.961410 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54f0dfe7f7e5ea87d39379885aa932eba4266594432b4f7fa9bcf29d05b4c2a3"} err="failed to get container status \"54f0dfe7f7e5ea87d39379885aa932eba4266594432b4f7fa9bcf29d05b4c2a3\": rpc error: code = NotFound desc = could not find container \"54f0dfe7f7e5ea87d39379885aa932eba4266594432b4f7fa9bcf29d05b4c2a3\": container with ID starting with 54f0dfe7f7e5ea87d39379885aa932eba4266594432b4f7fa9bcf29d05b4c2a3 not found: ID does not exist" Oct 13 18:27:43 crc kubenswrapper[4720]: I1013 18:27:43.961481 4720 scope.go:117] "RemoveContainer" containerID="376529539af074bee8515187f674d0998b19aa0f9912f033f3e24eee029da047" Oct 13 18:27:43 crc kubenswrapper[4720]: E1013 18:27:43.964970 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"376529539af074bee8515187f674d0998b19aa0f9912f033f3e24eee029da047\": container with ID starting with 376529539af074bee8515187f674d0998b19aa0f9912f033f3e24eee029da047 not found: ID does not exist" containerID="376529539af074bee8515187f674d0998b19aa0f9912f033f3e24eee029da047" Oct 13 18:27:43 crc kubenswrapper[4720]: I1013 18:27:43.965038 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"376529539af074bee8515187f674d0998b19aa0f9912f033f3e24eee029da047"} err="failed to get container status 
\"376529539af074bee8515187f674d0998b19aa0f9912f033f3e24eee029da047\": rpc error: code = NotFound desc = could not find container \"376529539af074bee8515187f674d0998b19aa0f9912f033f3e24eee029da047\": container with ID starting with 376529539af074bee8515187f674d0998b19aa0f9912f033f3e24eee029da047 not found: ID does not exist" Oct 13 18:27:45 crc kubenswrapper[4720]: I1013 18:27:45.179442 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bee557f2-d5ec-4166-b78b-ab8b71c413b0" path="/var/lib/kubelet/pods/bee557f2-d5ec-4166-b78b-ab8b71c413b0/volumes" Oct 13 18:27:54 crc kubenswrapper[4720]: I1013 18:27:54.168850 4720 scope.go:117] "RemoveContainer" containerID="79ca5c733b5eaab4a6d8e4dab434c0461ff00748dd7441ce45a3602ece00a48e" Oct 13 18:27:54 crc kubenswrapper[4720]: E1013 18:27:54.170229 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" Oct 13 18:28:09 crc kubenswrapper[4720]: I1013 18:28:09.168729 4720 scope.go:117] "RemoveContainer" containerID="79ca5c733b5eaab4a6d8e4dab434c0461ff00748dd7441ce45a3602ece00a48e" Oct 13 18:28:09 crc kubenswrapper[4720]: E1013 18:28:09.169710 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" Oct 13 18:28:11 crc kubenswrapper[4720]: I1013 18:28:11.142716 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-qqx25/must-gather-t7pst"] Oct 13 18:28:11 crc kubenswrapper[4720]: E1013 18:28:11.143581 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bee557f2-d5ec-4166-b78b-ab8b71c413b0" containerName="copy" Oct 13 18:28:11 crc kubenswrapper[4720]: I1013 18:28:11.143599 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="bee557f2-d5ec-4166-b78b-ab8b71c413b0" containerName="copy" Oct 13 18:28:11 crc kubenswrapper[4720]: E1013 18:28:11.143625 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bee557f2-d5ec-4166-b78b-ab8b71c413b0" containerName="gather" Oct 13 18:28:11 crc kubenswrapper[4720]: I1013 18:28:11.143635 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="bee557f2-d5ec-4166-b78b-ab8b71c413b0" containerName="gather" Oct 13 18:28:11 crc kubenswrapper[4720]: E1013 18:28:11.143679 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f17f6046-4914-4528-951a-717dc3b64bc9" containerName="container-00" Oct 13 18:28:11 crc kubenswrapper[4720]: I1013 18:28:11.143688 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="f17f6046-4914-4528-951a-717dc3b64bc9" containerName="container-00" Oct 13 18:28:11 crc kubenswrapper[4720]: I1013 18:28:11.143906 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="bee557f2-d5ec-4166-b78b-ab8b71c413b0" containerName="copy" Oct 13 18:28:11 crc kubenswrapper[4720]: I1013 18:28:11.143936 4720 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f17f6046-4914-4528-951a-717dc3b64bc9" containerName="container-00" Oct 13 18:28:11 crc kubenswrapper[4720]: I1013 18:28:11.143959 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="bee557f2-d5ec-4166-b78b-ab8b71c413b0" containerName="gather" Oct 13 18:28:11 crc kubenswrapper[4720]: I1013 18:28:11.145289 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qqx25/must-gather-t7pst" Oct 13 18:28:11 crc kubenswrapper[4720]: I1013 18:28:11.146859 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-qqx25"/"kube-root-ca.crt" Oct 13 18:28:11 crc kubenswrapper[4720]: I1013 18:28:11.149560 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-qqx25"/"default-dockercfg-z2h92" Oct 13 18:28:11 crc kubenswrapper[4720]: I1013 18:28:11.149626 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-qqx25"/"openshift-service-ca.crt" Oct 13 18:28:11 crc kubenswrapper[4720]: I1013 18:28:11.221753 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-qqx25/must-gather-t7pst"] Oct 13 18:28:11 crc kubenswrapper[4720]: I1013 18:28:11.285434 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/05df5caf-006a-4071-b2f2-25fc6b06d156-must-gather-output\") pod \"must-gather-t7pst\" (UID: \"05df5caf-006a-4071-b2f2-25fc6b06d156\") " pod="openshift-must-gather-qqx25/must-gather-t7pst" Oct 13 18:28:11 crc kubenswrapper[4720]: I1013 18:28:11.285483 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dg4st\" (UniqueName: \"kubernetes.io/projected/05df5caf-006a-4071-b2f2-25fc6b06d156-kube-api-access-dg4st\") pod \"must-gather-t7pst\" (UID: \"05df5caf-006a-4071-b2f2-25fc6b06d156\") " pod="openshift-must-gather-qqx25/must-gather-t7pst" Oct 13 18:28:11 crc kubenswrapper[4720]: I1013 18:28:11.387156 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/05df5caf-006a-4071-b2f2-25fc6b06d156-must-gather-output\") pod \"must-gather-t7pst\" (UID: \"05df5caf-006a-4071-b2f2-25fc6b06d156\") " pod="openshift-must-gather-qqx25/must-gather-t7pst" Oct 13 18:28:11 crc kubenswrapper[4720]: I1013 18:28:11.387220 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dg4st\" (UniqueName: \"kubernetes.io/projected/05df5caf-006a-4071-b2f2-25fc6b06d156-kube-api-access-dg4st\") pod \"must-gather-t7pst\" (UID: \"05df5caf-006a-4071-b2f2-25fc6b06d156\") " pod="openshift-must-gather-qqx25/must-gather-t7pst" Oct 13 18:28:11 crc kubenswrapper[4720]: I1013 18:28:11.387702 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/05df5caf-006a-4071-b2f2-25fc6b06d156-must-gather-output\") pod \"must-gather-t7pst\" (UID: \"05df5caf-006a-4071-b2f2-25fc6b06d156\") " pod="openshift-must-gather-qqx25/must-gather-t7pst" Oct 13 18:28:11 crc kubenswrapper[4720]: I1013 18:28:11.404326 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dg4st\" (UniqueName: \"kubernetes.io/projected/05df5caf-006a-4071-b2f2-25fc6b06d156-kube-api-access-dg4st\") pod \"must-gather-t7pst\" (UID: \"05df5caf-006a-4071-b2f2-25fc6b06d156\") " 
pod="openshift-must-gather-qqx25/must-gather-t7pst" Oct 13 18:28:11 crc kubenswrapper[4720]: I1013 18:28:11.468366 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qqx25/must-gather-t7pst" Oct 13 18:28:11 crc kubenswrapper[4720]: I1013 18:28:11.751119 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-qqx25/must-gather-t7pst"] Oct 13 18:28:12 crc kubenswrapper[4720]: I1013 18:28:12.231488 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qqx25/must-gather-t7pst" event={"ID":"05df5caf-006a-4071-b2f2-25fc6b06d156","Type":"ContainerStarted","Data":"fb9ff718e9905527a223a83d9386147be4fe82b9d70f96b2fc36327f3038da9a"} Oct 13 18:28:12 crc kubenswrapper[4720]: I1013 18:28:12.231876 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qqx25/must-gather-t7pst" event={"ID":"05df5caf-006a-4071-b2f2-25fc6b06d156","Type":"ContainerStarted","Data":"92963bdf5692b1d771ac19055f98b622fb1d72adcb717a55152e5269d5ac134f"} Oct 13 18:28:13 crc kubenswrapper[4720]: I1013 18:28:13.244383 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qqx25/must-gather-t7pst" event={"ID":"05df5caf-006a-4071-b2f2-25fc6b06d156","Type":"ContainerStarted","Data":"193bcc46713c60adca84947ababbfb5a3668b55da58c8889c624a0e9eee19582"} Oct 13 18:28:13 crc kubenswrapper[4720]: I1013 18:28:13.265847 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-qqx25/must-gather-t7pst" podStartSLOduration=2.2658271 podStartE2EDuration="2.2658271s" podCreationTimestamp="2025-10-13 18:28:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 18:28:13.259954338 +0000 UTC m=+3838.717204480" watchObservedRunningTime="2025-10-13 18:28:13.2658271 +0000 UTC m=+3838.723077242" Oct 13 18:28:15 crc kubenswrapper[4720]: I1013 18:28:15.626890 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-qqx25/crc-debug-7s7bf"] Oct 13 18:28:15 crc kubenswrapper[4720]: I1013 18:28:15.629591 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qqx25/crc-debug-7s7bf" Oct 13 18:28:15 crc kubenswrapper[4720]: I1013 18:28:15.683158 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/62ed32ce-bb72-4398-9570-ce10a1bd5dbc-host\") pod \"crc-debug-7s7bf\" (UID: \"62ed32ce-bb72-4398-9570-ce10a1bd5dbc\") " pod="openshift-must-gather-qqx25/crc-debug-7s7bf" Oct 13 18:28:15 crc kubenswrapper[4720]: I1013 18:28:15.683304 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z27mh\" (UniqueName: \"kubernetes.io/projected/62ed32ce-bb72-4398-9570-ce10a1bd5dbc-kube-api-access-z27mh\") pod \"crc-debug-7s7bf\" (UID: \"62ed32ce-bb72-4398-9570-ce10a1bd5dbc\") " pod="openshift-must-gather-qqx25/crc-debug-7s7bf" Oct 13 18:28:15 crc kubenswrapper[4720]: I1013 18:28:15.784311 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/62ed32ce-bb72-4398-9570-ce10a1bd5dbc-host\") pod \"crc-debug-7s7bf\" (UID: \"62ed32ce-bb72-4398-9570-ce10a1bd5dbc\") " pod="openshift-must-gather-qqx25/crc-debug-7s7bf" Oct 13 18:28:15 crc kubenswrapper[4720]: I1013 18:28:15.784416 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/62ed32ce-bb72-4398-9570-ce10a1bd5dbc-host\") pod \"crc-debug-7s7bf\" (UID: \"62ed32ce-bb72-4398-9570-ce10a1bd5dbc\") " pod="openshift-must-gather-qqx25/crc-debug-7s7bf" Oct 13 18:28:15 crc kubenswrapper[4720]: I1013 18:28:15.784449 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z27mh\" (UniqueName: \"kubernetes.io/projected/62ed32ce-bb72-4398-9570-ce10a1bd5dbc-kube-api-access-z27mh\") pod \"crc-debug-7s7bf\" (UID: \"62ed32ce-bb72-4398-9570-ce10a1bd5dbc\") " pod="openshift-must-gather-qqx25/crc-debug-7s7bf" Oct 13 18:28:15 crc kubenswrapper[4720]: I1013 18:28:15.812092 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z27mh\" (UniqueName: \"kubernetes.io/projected/62ed32ce-bb72-4398-9570-ce10a1bd5dbc-kube-api-access-z27mh\") pod \"crc-debug-7s7bf\" (UID: \"62ed32ce-bb72-4398-9570-ce10a1bd5dbc\") " pod="openshift-must-gather-qqx25/crc-debug-7s7bf" Oct 13 18:28:15 crc kubenswrapper[4720]: I1013 18:28:15.958830 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qqx25/crc-debug-7s7bf" Oct 13 18:28:16 crc kubenswrapper[4720]: I1013 18:28:16.282355 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qqx25/crc-debug-7s7bf" event={"ID":"62ed32ce-bb72-4398-9570-ce10a1bd5dbc","Type":"ContainerStarted","Data":"b312ffd4ca60ad21e868ef21d0dcd18b3627853b804b4b056bef82e2063baa2e"} Oct 13 18:28:16 crc kubenswrapper[4720]: I1013 18:28:16.282712 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qqx25/crc-debug-7s7bf" event={"ID":"62ed32ce-bb72-4398-9570-ce10a1bd5dbc","Type":"ContainerStarted","Data":"9254393f15d66c083464253f714f5bec7f375e0a63ae6ae4c9273f7418230c3b"} Oct 13 18:28:16 crc kubenswrapper[4720]: I1013 18:28:16.308080 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-qqx25/crc-debug-7s7bf" podStartSLOduration=1.308063352 podStartE2EDuration="1.308063352s" podCreationTimestamp="2025-10-13 18:28:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 18:28:16.301078351 +0000 UTC m=+3841.758328493" watchObservedRunningTime="2025-10-13 18:28:16.308063352 +0000 UTC m=+3841.765313484" Oct 13 18:28:21 crc kubenswrapper[4720]: I1013 18:28:21.168010 4720 scope.go:117] "RemoveContainer" containerID="79ca5c733b5eaab4a6d8e4dab434c0461ff00748dd7441ce45a3602ece00a48e" Oct 13 18:28:22 crc kubenswrapper[4720]: I1013 18:28:22.332516 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" event={"ID":"ce442c80-fcde-4b79-b6f9-f8f25771dfd4","Type":"ContainerStarted","Data":"daa911e13591966ffbf77524066a5e02ca5e52e1c46be154956ad33bdd15eac1"} Oct 13 18:28:50 crc kubenswrapper[4720]: I1013 18:28:50.596540 4720 generic.go:334] "Generic (PLEG): container finished" podID="62ed32ce-bb72-4398-9570-ce10a1bd5dbc" containerID="b312ffd4ca60ad21e868ef21d0dcd18b3627853b804b4b056bef82e2063baa2e" exitCode=0 Oct 13 18:28:50 crc kubenswrapper[4720]: I1013 18:28:50.596659 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qqx25/crc-debug-7s7bf" event={"ID":"62ed32ce-bb72-4398-9570-ce10a1bd5dbc","Type":"ContainerDied","Data":"b312ffd4ca60ad21e868ef21d0dcd18b3627853b804b4b056bef82e2063baa2e"} Oct 13 18:28:51 crc kubenswrapper[4720]: I1013 18:28:51.697338 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qqx25/crc-debug-7s7bf" Oct 13 18:28:51 crc kubenswrapper[4720]: I1013 18:28:51.726657 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-qqx25/crc-debug-7s7bf"] Oct 13 18:28:51 crc kubenswrapper[4720]: I1013 18:28:51.733723 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-qqx25/crc-debug-7s7bf"] Oct 13 18:28:51 crc kubenswrapper[4720]: I1013 18:28:51.824576 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z27mh\" (UniqueName: \"kubernetes.io/projected/62ed32ce-bb72-4398-9570-ce10a1bd5dbc-kube-api-access-z27mh\") pod \"62ed32ce-bb72-4398-9570-ce10a1bd5dbc\" (UID: \"62ed32ce-bb72-4398-9570-ce10a1bd5dbc\") " Oct 13 18:28:51 crc kubenswrapper[4720]: I1013 18:28:51.824626 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/62ed32ce-bb72-4398-9570-ce10a1bd5dbc-host\") pod \"62ed32ce-bb72-4398-9570-ce10a1bd5dbc\" (UID: \"62ed32ce-bb72-4398-9570-ce10a1bd5dbc\") " Oct 13 18:28:51 crc kubenswrapper[4720]: I1013 18:28:51.825028 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62ed32ce-bb72-4398-9570-ce10a1bd5dbc-host" (OuterVolumeSpecName: "host") pod "62ed32ce-bb72-4398-9570-ce10a1bd5dbc" (UID: "62ed32ce-bb72-4398-9570-ce10a1bd5dbc"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 13 18:28:51 crc kubenswrapper[4720]: I1013 18:28:51.840450 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62ed32ce-bb72-4398-9570-ce10a1bd5dbc-kube-api-access-z27mh" (OuterVolumeSpecName: "kube-api-access-z27mh") pod "62ed32ce-bb72-4398-9570-ce10a1bd5dbc" (UID: "62ed32ce-bb72-4398-9570-ce10a1bd5dbc"). InnerVolumeSpecName "kube-api-access-z27mh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 18:28:51 crc kubenswrapper[4720]: I1013 18:28:51.927607 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z27mh\" (UniqueName: \"kubernetes.io/projected/62ed32ce-bb72-4398-9570-ce10a1bd5dbc-kube-api-access-z27mh\") on node \"crc\" DevicePath \"\"" Oct 13 18:28:51 crc kubenswrapper[4720]: I1013 18:28:51.927645 4720 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/62ed32ce-bb72-4398-9570-ce10a1bd5dbc-host\") on node \"crc\" DevicePath \"\"" Oct 13 18:28:52 crc kubenswrapper[4720]: I1013 18:28:52.626999 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9254393f15d66c083464253f714f5bec7f375e0a63ae6ae4c9273f7418230c3b" Oct 13 18:28:52 crc kubenswrapper[4720]: I1013 18:28:52.627108 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qqx25/crc-debug-7s7bf" Oct 13 18:28:52 crc kubenswrapper[4720]: I1013 18:28:52.890088 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-qqx25/crc-debug-jthlz"] Oct 13 18:28:52 crc kubenswrapper[4720]: E1013 18:28:52.890490 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62ed32ce-bb72-4398-9570-ce10a1bd5dbc" containerName="container-00" Oct 13 18:28:52 crc kubenswrapper[4720]: I1013 18:28:52.890502 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="62ed32ce-bb72-4398-9570-ce10a1bd5dbc" containerName="container-00" Oct 13 18:28:52 crc kubenswrapper[4720]: I1013 18:28:52.890722 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="62ed32ce-bb72-4398-9570-ce10a1bd5dbc" containerName="container-00" Oct 13 18:28:52 crc kubenswrapper[4720]: I1013 18:28:52.891393 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qqx25/crc-debug-jthlz" Oct 13 18:28:53 crc kubenswrapper[4720]: I1013 18:28:53.047616 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f77a93de-6e22-40a8-b2b9-cd1524472e32-host\") pod \"crc-debug-jthlz\" (UID: \"f77a93de-6e22-40a8-b2b9-cd1524472e32\") " pod="openshift-must-gather-qqx25/crc-debug-jthlz" Oct 13 18:28:53 crc kubenswrapper[4720]: I1013 18:28:53.047733 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsxr4\" (UniqueName: \"kubernetes.io/projected/f77a93de-6e22-40a8-b2b9-cd1524472e32-kube-api-access-qsxr4\") pod \"crc-debug-jthlz\" (UID: \"f77a93de-6e22-40a8-b2b9-cd1524472e32\") " pod="openshift-must-gather-qqx25/crc-debug-jthlz" Oct 13 18:28:53 crc kubenswrapper[4720]: I1013 18:28:53.150099 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qsxr4\" (UniqueName: \"kubernetes.io/projected/f77a93de-6e22-40a8-b2b9-cd1524472e32-kube-api-access-qsxr4\") pod \"crc-debug-jthlz\" (UID: \"f77a93de-6e22-40a8-b2b9-cd1524472e32\") " pod="openshift-must-gather-qqx25/crc-debug-jthlz" Oct 13 18:28:53 crc kubenswrapper[4720]: I1013 18:28:53.150674 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f77a93de-6e22-40a8-b2b9-cd1524472e32-host\") pod \"crc-debug-jthlz\" (UID: \"f77a93de-6e22-40a8-b2b9-cd1524472e32\") " pod="openshift-must-gather-qqx25/crc-debug-jthlz" Oct 13 18:28:53 crc kubenswrapper[4720]: I1013 18:28:53.150746 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f77a93de-6e22-40a8-b2b9-cd1524472e32-host\") pod \"crc-debug-jthlz\" (UID: \"f77a93de-6e22-40a8-b2b9-cd1524472e32\") " pod="openshift-must-gather-qqx25/crc-debug-jthlz" Oct 13 18:28:53 crc kubenswrapper[4720]: I1013 18:28:53.173644 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsxr4\" (UniqueName: \"kubernetes.io/projected/f77a93de-6e22-40a8-b2b9-cd1524472e32-kube-api-access-qsxr4\") pod \"crc-debug-jthlz\" (UID: \"f77a93de-6e22-40a8-b2b9-cd1524472e32\") " pod="openshift-must-gather-qqx25/crc-debug-jthlz" Oct 13 18:28:53 crc kubenswrapper[4720]: I1013 18:28:53.181957 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62ed32ce-bb72-4398-9570-ce10a1bd5dbc" 
path="/var/lib/kubelet/pods/62ed32ce-bb72-4398-9570-ce10a1bd5dbc/volumes" Oct 13 18:28:53 crc kubenswrapper[4720]: I1013 18:28:53.209869 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qqx25/crc-debug-jthlz" Oct 13 18:28:53 crc kubenswrapper[4720]: I1013 18:28:53.638237 4720 generic.go:334] "Generic (PLEG): container finished" podID="f77a93de-6e22-40a8-b2b9-cd1524472e32" containerID="0396860c72b2fe4d26ea74942b3da0ed6360579d7646b9d9ec56bdcbb3215379" exitCode=0 Oct 13 18:28:53 crc kubenswrapper[4720]: I1013 18:28:53.638344 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qqx25/crc-debug-jthlz" event={"ID":"f77a93de-6e22-40a8-b2b9-cd1524472e32","Type":"ContainerDied","Data":"0396860c72b2fe4d26ea74942b3da0ed6360579d7646b9d9ec56bdcbb3215379"} Oct 13 18:28:53 crc kubenswrapper[4720]: I1013 18:28:53.638554 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qqx25/crc-debug-jthlz" event={"ID":"f77a93de-6e22-40a8-b2b9-cd1524472e32","Type":"ContainerStarted","Data":"6b7cac9c8e39dc7f359257c3403f9263738f0e3389baa1efd67d1d5656211308"} Oct 13 18:28:54 crc kubenswrapper[4720]: I1013 18:28:54.147540 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-qqx25/crc-debug-jthlz"] Oct 13 18:28:54 crc kubenswrapper[4720]: I1013 18:28:54.158613 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-qqx25/crc-debug-jthlz"] Oct 13 18:28:54 crc kubenswrapper[4720]: I1013 18:28:54.751626 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qqx25/crc-debug-jthlz" Oct 13 18:28:54 crc kubenswrapper[4720]: I1013 18:28:54.798491 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qsxr4\" (UniqueName: \"kubernetes.io/projected/f77a93de-6e22-40a8-b2b9-cd1524472e32-kube-api-access-qsxr4\") pod \"f77a93de-6e22-40a8-b2b9-cd1524472e32\" (UID: \"f77a93de-6e22-40a8-b2b9-cd1524472e32\") " Oct 13 18:28:54 crc kubenswrapper[4720]: I1013 18:28:54.798612 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f77a93de-6e22-40a8-b2b9-cd1524472e32-host\") pod \"f77a93de-6e22-40a8-b2b9-cd1524472e32\" (UID: \"f77a93de-6e22-40a8-b2b9-cd1524472e32\") " Oct 13 18:28:54 crc kubenswrapper[4720]: I1013 18:28:54.798967 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f77a93de-6e22-40a8-b2b9-cd1524472e32-host" (OuterVolumeSpecName: "host") pod "f77a93de-6e22-40a8-b2b9-cd1524472e32" (UID: "f77a93de-6e22-40a8-b2b9-cd1524472e32"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 13 18:28:54 crc kubenswrapper[4720]: I1013 18:28:54.799687 4720 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f77a93de-6e22-40a8-b2b9-cd1524472e32-host\") on node \"crc\" DevicePath \"\"" Oct 13 18:28:54 crc kubenswrapper[4720]: I1013 18:28:54.804222 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f77a93de-6e22-40a8-b2b9-cd1524472e32-kube-api-access-qsxr4" (OuterVolumeSpecName: "kube-api-access-qsxr4") pod "f77a93de-6e22-40a8-b2b9-cd1524472e32" (UID: "f77a93de-6e22-40a8-b2b9-cd1524472e32"). InnerVolumeSpecName "kube-api-access-qsxr4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 18:28:54 crc kubenswrapper[4720]: I1013 18:28:54.900858 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qsxr4\" (UniqueName: \"kubernetes.io/projected/f77a93de-6e22-40a8-b2b9-cd1524472e32-kube-api-access-qsxr4\") on node \"crc\" DevicePath \"\"" Oct 13 18:28:55 crc kubenswrapper[4720]: I1013 18:28:55.183652 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f77a93de-6e22-40a8-b2b9-cd1524472e32" path="/var/lib/kubelet/pods/f77a93de-6e22-40a8-b2b9-cd1524472e32/volumes" Oct 13 18:28:55 crc kubenswrapper[4720]: I1013 18:28:55.318299 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-qqx25/crc-debug-zvpsl"] Oct 13 18:28:55 crc kubenswrapper[4720]: E1013 18:28:55.318729 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f77a93de-6e22-40a8-b2b9-cd1524472e32" containerName="container-00" Oct 13 18:28:55 crc kubenswrapper[4720]: I1013 18:28:55.318758 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="f77a93de-6e22-40a8-b2b9-cd1524472e32" containerName="container-00" Oct 13 18:28:55 crc kubenswrapper[4720]: I1013 18:28:55.319294 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="f77a93de-6e22-40a8-b2b9-cd1524472e32" containerName="container-00" Oct 13 18:28:55 crc kubenswrapper[4720]: I1013 18:28:55.320166 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qqx25/crc-debug-zvpsl" Oct 13 18:28:55 crc kubenswrapper[4720]: I1013 18:28:55.410491 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsfz5\" (UniqueName: \"kubernetes.io/projected/ee6bf4a1-968b-42e6-9504-b83f90181d0e-kube-api-access-rsfz5\") pod \"crc-debug-zvpsl\" (UID: \"ee6bf4a1-968b-42e6-9504-b83f90181d0e\") " pod="openshift-must-gather-qqx25/crc-debug-zvpsl" Oct 13 18:28:55 crc kubenswrapper[4720]: I1013 18:28:55.410569 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ee6bf4a1-968b-42e6-9504-b83f90181d0e-host\") pod \"crc-debug-zvpsl\" (UID: \"ee6bf4a1-968b-42e6-9504-b83f90181d0e\") " pod="openshift-must-gather-qqx25/crc-debug-zvpsl" Oct 13 18:28:55 crc kubenswrapper[4720]: I1013 18:28:55.511876 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rsfz5\" (UniqueName: \"kubernetes.io/projected/ee6bf4a1-968b-42e6-9504-b83f90181d0e-kube-api-access-rsfz5\") pod \"crc-debug-zvpsl\" (UID: \"ee6bf4a1-968b-42e6-9504-b83f90181d0e\") " pod="openshift-must-gather-qqx25/crc-debug-zvpsl" Oct 13 18:28:55 crc kubenswrapper[4720]: I1013 18:28:55.512252 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ee6bf4a1-968b-42e6-9504-b83f90181d0e-host\") pod \"crc-debug-zvpsl\" (UID: \"ee6bf4a1-968b-42e6-9504-b83f90181d0e\") " pod="openshift-must-gather-qqx25/crc-debug-zvpsl" Oct 13 18:28:55 crc kubenswrapper[4720]: I1013 18:28:55.512385 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ee6bf4a1-968b-42e6-9504-b83f90181d0e-host\") pod \"crc-debug-zvpsl\" (UID: \"ee6bf4a1-968b-42e6-9504-b83f90181d0e\") " pod="openshift-must-gather-qqx25/crc-debug-zvpsl" Oct 13 18:28:55 crc kubenswrapper[4720]: I1013 18:28:55.538471 4720 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-rsfz5\" (UniqueName: \"kubernetes.io/projected/ee6bf4a1-968b-42e6-9504-b83f90181d0e-kube-api-access-rsfz5\") pod \"crc-debug-zvpsl\" (UID: \"ee6bf4a1-968b-42e6-9504-b83f90181d0e\") " pod="openshift-must-gather-qqx25/crc-debug-zvpsl" Oct 13 18:28:55 crc kubenswrapper[4720]: I1013 18:28:55.644337 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qqx25/crc-debug-zvpsl" Oct 13 18:28:55 crc kubenswrapper[4720]: I1013 18:28:55.657413 4720 scope.go:117] "RemoveContainer" containerID="0396860c72b2fe4d26ea74942b3da0ed6360579d7646b9d9ec56bdcbb3215379" Oct 13 18:28:55 crc kubenswrapper[4720]: I1013 18:28:55.657490 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qqx25/crc-debug-jthlz" Oct 13 18:28:56 crc kubenswrapper[4720]: I1013 18:28:56.666748 4720 generic.go:334] "Generic (PLEG): container finished" podID="ee6bf4a1-968b-42e6-9504-b83f90181d0e" containerID="17d367b4c8271d32d8ff75db0b8bcf8e8be47d688e8dd4e5e262d685d44b463f" exitCode=0 Oct 13 18:28:56 crc kubenswrapper[4720]: I1013 18:28:56.666859 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qqx25/crc-debug-zvpsl" event={"ID":"ee6bf4a1-968b-42e6-9504-b83f90181d0e","Type":"ContainerDied","Data":"17d367b4c8271d32d8ff75db0b8bcf8e8be47d688e8dd4e5e262d685d44b463f"} Oct 13 18:28:56 crc kubenswrapper[4720]: I1013 18:28:56.668279 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qqx25/crc-debug-zvpsl" event={"ID":"ee6bf4a1-968b-42e6-9504-b83f90181d0e","Type":"ContainerStarted","Data":"c141ece6d0e4110334d9a9e543f123580b3a3fb87e62bb2ebe8b4a24904a114c"} Oct 13 18:28:56 crc kubenswrapper[4720]: I1013 18:28:56.754790 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-qqx25/crc-debug-zvpsl"] Oct 13 18:28:56 crc kubenswrapper[4720]: I1013 18:28:56.768845 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-qqx25/crc-debug-zvpsl"] Oct 13 18:28:57 crc kubenswrapper[4720]: I1013 18:28:57.775117 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qqx25/crc-debug-zvpsl" Oct 13 18:28:57 crc kubenswrapper[4720]: I1013 18:28:57.854242 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ee6bf4a1-968b-42e6-9504-b83f90181d0e-host\") pod \"ee6bf4a1-968b-42e6-9504-b83f90181d0e\" (UID: \"ee6bf4a1-968b-42e6-9504-b83f90181d0e\") " Oct 13 18:28:57 crc kubenswrapper[4720]: I1013 18:28:57.854448 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rsfz5\" (UniqueName: \"kubernetes.io/projected/ee6bf4a1-968b-42e6-9504-b83f90181d0e-kube-api-access-rsfz5\") pod \"ee6bf4a1-968b-42e6-9504-b83f90181d0e\" (UID: \"ee6bf4a1-968b-42e6-9504-b83f90181d0e\") " Oct 13 18:28:57 crc kubenswrapper[4720]: I1013 18:28:57.855748 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ee6bf4a1-968b-42e6-9504-b83f90181d0e-host" (OuterVolumeSpecName: "host") pod "ee6bf4a1-968b-42e6-9504-b83f90181d0e" (UID: "ee6bf4a1-968b-42e6-9504-b83f90181d0e"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 13 18:28:57 crc kubenswrapper[4720]: I1013 18:28:57.860515 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee6bf4a1-968b-42e6-9504-b83f90181d0e-kube-api-access-rsfz5" (OuterVolumeSpecName: "kube-api-access-rsfz5") pod "ee6bf4a1-968b-42e6-9504-b83f90181d0e" (UID: "ee6bf4a1-968b-42e6-9504-b83f90181d0e"). InnerVolumeSpecName "kube-api-access-rsfz5". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 18:28:57 crc kubenswrapper[4720]: I1013 18:28:57.956587 4720 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ee6bf4a1-968b-42e6-9504-b83f90181d0e-host\") on node \"crc\" DevicePath \"\"" Oct 13 18:28:57 crc kubenswrapper[4720]: I1013 18:28:57.956628 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rsfz5\" (UniqueName: \"kubernetes.io/projected/ee6bf4a1-968b-42e6-9504-b83f90181d0e-kube-api-access-rsfz5\") on node \"crc\" DevicePath \"\"" Oct 13 18:28:58 crc kubenswrapper[4720]: I1013 18:28:58.688360 4720 scope.go:117] "RemoveContainer" containerID="17d367b4c8271d32d8ff75db0b8bcf8e8be47d688e8dd4e5e262d685d44b463f" Oct 13 18:28:58 crc kubenswrapper[4720]: I1013 18:28:58.688393 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qqx25/crc-debug-zvpsl" Oct 13 18:28:59 crc kubenswrapper[4720]: I1013 18:28:59.180564 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee6bf4a1-968b-42e6-9504-b83f90181d0e" path="/var/lib/kubelet/pods/ee6bf4a1-968b-42e6-9504-b83f90181d0e/volumes" Oct 13 18:29:13 crc kubenswrapper[4720]: I1013 18:29:13.713480 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-77544fcf9d-jwg9p_84cf65bc-1603-4782-9b88-15937c9c7c6f/barbican-api/0.log" Oct 13 18:29:13 crc kubenswrapper[4720]: I1013 18:29:13.829229 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-77544fcf9d-jwg9p_84cf65bc-1603-4782-9b88-15937c9c7c6f/barbican-api-log/0.log" Oct 13 18:29:13 crc kubenswrapper[4720]: I1013 18:29:13.949270 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-846978fd94-gg45m_73bee848-defc-4a29-b1a2-a359359e3c67/barbican-keystone-listener/0.log" Oct 13 18:29:14 crc kubenswrapper[4720]: I1013 18:29:14.003950 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-846978fd94-gg45m_73bee848-defc-4a29-b1a2-a359359e3c67/barbican-keystone-listener-log/0.log" Oct 13 18:29:14 crc kubenswrapper[4720]: I1013 18:29:14.133169 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-77879c6d6f-fqm88_f39030ec-2975-459a-972e-4928cb31e15a/barbican-worker/0.log" Oct 13 18:29:14 crc kubenswrapper[4720]: I1013 18:29:14.143085 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-77879c6d6f-fqm88_f39030ec-2975-459a-972e-4928cb31e15a/barbican-worker-log/0.log" Oct 13 18:29:14 crc kubenswrapper[4720]: I1013 18:29:14.253762 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-gqz6r_d7953eee-f335-4fdc-9834-caa5a4695476/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Oct 13 18:29:14 crc kubenswrapper[4720]: I1013 18:29:14.374030 4720 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ceilometer-0_18a26891-fa3b-4433-a74a-592bef9b8241/ceilometer-central-agent/0.log" Oct 13 18:29:14 crc kubenswrapper[4720]: I1013 18:29:14.416944 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_18a26891-fa3b-4433-a74a-592bef9b8241/ceilometer-notification-agent/0.log" Oct 13 18:29:14 crc kubenswrapper[4720]: I1013 18:29:14.486295 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_18a26891-fa3b-4433-a74a-592bef9b8241/proxy-httpd/0.log" Oct 13 18:29:14 crc kubenswrapper[4720]: I1013 18:29:14.614568 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_18a26891-fa3b-4433-a74a-592bef9b8241/sg-core/0.log" Oct 13 18:29:14 crc kubenswrapper[4720]: I1013 18:29:14.701412 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_9ed0bf93-abdc-4a94-bfcb-6293c9e01853/cinder-api/0.log" Oct 13 18:29:14 crc kubenswrapper[4720]: I1013 18:29:14.725229 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_9ed0bf93-abdc-4a94-bfcb-6293c9e01853/cinder-api-log/0.log" Oct 13 18:29:14 crc kubenswrapper[4720]: I1013 18:29:14.919481 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_a68622d1-743e-45e7-a021-6d766840711a/probe/0.log" Oct 13 18:29:14 crc kubenswrapper[4720]: I1013 18:29:14.954957 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_a68622d1-743e-45e7-a021-6d766840711a/cinder-scheduler/0.log" Oct 13 18:29:15 crc kubenswrapper[4720]: I1013 18:29:15.082281 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-vf6s4_2dedc602-7303-4ca5-8d61-143a7975c01c/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Oct 13 18:29:15 crc kubenswrapper[4720]: I1013 18:29:15.168839 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-dtplp_41f1773b-8761-4e7e-bcfe-853ca5977b3b/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Oct 13 18:29:15 crc kubenswrapper[4720]: I1013 18:29:15.354318 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-w24k6_9e9b7c8c-f6f4-448d-af87-164d1f0d008f/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Oct 13 18:29:15 crc kubenswrapper[4720]: I1013 18:29:15.370213 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cb6ffcf87-wvm8b_323cdd25-bf01-4cf0-8ccc-7dbc90581afd/init/0.log" Oct 13 18:29:15 crc kubenswrapper[4720]: I1013 18:29:15.622675 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cb6ffcf87-wvm8b_323cdd25-bf01-4cf0-8ccc-7dbc90581afd/init/0.log" Oct 13 18:29:15 crc kubenswrapper[4720]: I1013 18:29:15.631223 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-xqf7d_2b7d23a3-f722-47e4-85af-fe733bfc5fdc/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Oct 13 18:29:15 crc kubenswrapper[4720]: I1013 18:29:15.690481 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cb6ffcf87-wvm8b_323cdd25-bf01-4cf0-8ccc-7dbc90581afd/dnsmasq-dns/0.log" Oct 13 18:29:15 crc kubenswrapper[4720]: I1013 18:29:15.853997 4720 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-external-api-0_40ee2ba6-d58a-4ee8-b28e-6aeea90de7f6/glance-log/0.log" Oct 13 18:29:15 crc kubenswrapper[4720]: I1013 18:29:15.871064 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_40ee2ba6-d58a-4ee8-b28e-6aeea90de7f6/glance-httpd/0.log" Oct 13 18:29:16 crc kubenswrapper[4720]: I1013 18:29:16.109708 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_65eecc5b-dc6b-482e-bcfe-93915016a1f5/glance-log/0.log" Oct 13 18:29:16 crc kubenswrapper[4720]: I1013 18:29:16.147785 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_65eecc5b-dc6b-482e-bcfe-93915016a1f5/glance-httpd/0.log" Oct 13 18:29:16 crc kubenswrapper[4720]: I1013 18:29:16.261800 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-7984dcc5d8-8c2ss_27768d75-429c-45c3-bf03-98527e94fe63/horizon/0.log" Oct 13 18:29:16 crc kubenswrapper[4720]: I1013 18:29:16.415458 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-rcnp2_b6dc194d-cfc2-4303-ad72-ead87650ea96/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Oct 13 18:29:16 crc kubenswrapper[4720]: I1013 18:29:16.583988 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-7984dcc5d8-8c2ss_27768d75-429c-45c3-bf03-98527e94fe63/horizon-log/0.log" Oct 13 18:29:16 crc kubenswrapper[4720]: I1013 18:29:16.835820 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-7nx42_8294bbb9-8b70-411f-af1f-cca84d7c5dbb/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Oct 13 18:29:17 crc kubenswrapper[4720]: I1013 18:29:17.012717 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29339641-fqkjh_f21ba5f0-a0a5-4a29-9025-614d7f33c643/keystone-cron/0.log" Oct 13 18:29:17 crc kubenswrapper[4720]: I1013 18:29:17.034696 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-5744dfc665-n6ts6_5de24778-de7e-4b2b-bf60-24ae857c2ed9/keystone-api/0.log" Oct 13 18:29:17 crc kubenswrapper[4720]: I1013 18:29:17.141977 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_3ce4e7b1-7ffc-4444-8e0c-2bc9779d4ef1/kube-state-metrics/0.log" Oct 13 18:29:17 crc kubenswrapper[4720]: I1013 18:29:17.237228 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-ht9ww_0b253b74-8253-44c4-962c-b01331772a19/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Oct 13 18:29:17 crc kubenswrapper[4720]: I1013 18:29:17.574718 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7c7ddff655-r8ln9_782a78a0-312e-4337-9397-9b476f51f7a8/neutron-httpd/0.log" Oct 13 18:29:17 crc kubenswrapper[4720]: I1013 18:29:17.632885 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7c7ddff655-r8ln9_782a78a0-312e-4337-9397-9b476f51f7a8/neutron-api/0.log" Oct 13 18:29:17 crc kubenswrapper[4720]: I1013 18:29:17.640883 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-2rcnh_317f512e-221d-4587-9817-526adffbe348/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Oct 13 18:29:18 crc kubenswrapper[4720]: I1013 18:29:18.290462 4720 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack_nova-api-0_d013b32a-b904-46e1-85be-0691c6d981da/nova-api-log/0.log" Oct 13 18:29:18 crc kubenswrapper[4720]: I1013 18:29:18.349624 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_b869ee69-b8f2-4318-a977-da27405dd698/nova-cell0-conductor-conductor/0.log" Oct 13 18:29:18 crc kubenswrapper[4720]: I1013 18:29:18.576286 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_d013b32a-b904-46e1-85be-0691c6d981da/nova-api-api/0.log" Oct 13 18:29:18 crc kubenswrapper[4720]: I1013 18:29:18.588960 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_4b19f8c7-2583-4a2a-89e0-6a036d0e63a5/nova-cell1-conductor-conductor/0.log" Oct 13 18:29:18 crc kubenswrapper[4720]: I1013 18:29:18.721957 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_3be6a675-72c8-4120-9b8b-458dba2fe7f2/nova-cell1-novncproxy-novncproxy/0.log" Oct 13 18:29:18 crc kubenswrapper[4720]: I1013 18:29:18.888870 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-tgnks_9810822f-63d1-4a31-bde3-6353a5ee9007/nova-edpm-deployment-openstack-edpm-ipam/0.log" Oct 13 18:29:19 crc kubenswrapper[4720]: I1013 18:29:19.105506 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_8ab3d870-8836-484b-a291-4bc7b329ed83/nova-metadata-log/0.log" Oct 13 18:29:19 crc kubenswrapper[4720]: I1013 18:29:19.332302 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_f8338f95-b766-4ce8-b60e-020957cdee12/mysql-bootstrap/0.log" Oct 13 18:29:19 crc kubenswrapper[4720]: I1013 18:29:19.336241 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_afdc561e-00de-42e7-aeda-de229e3f7836/nova-scheduler-scheduler/0.log" Oct 13 18:29:19 crc kubenswrapper[4720]: I1013 18:29:19.509954 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_f8338f95-b766-4ce8-b60e-020957cdee12/mysql-bootstrap/0.log" Oct 13 18:29:19 crc kubenswrapper[4720]: I1013 18:29:19.568261 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_f8338f95-b766-4ce8-b60e-020957cdee12/galera/0.log" Oct 13 18:29:19 crc kubenswrapper[4720]: I1013 18:29:19.732668 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_fe36eeb1-7f7f-424c-a56c-e96cffc3046d/mysql-bootstrap/0.log" Oct 13 18:29:19 crc kubenswrapper[4720]: I1013 18:29:19.932969 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_fe36eeb1-7f7f-424c-a56c-e96cffc3046d/galera/0.log" Oct 13 18:29:19 crc kubenswrapper[4720]: I1013 18:29:19.973153 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_fe36eeb1-7f7f-424c-a56c-e96cffc3046d/mysql-bootstrap/0.log" Oct 13 18:29:20 crc kubenswrapper[4720]: I1013 18:29:20.146086 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_f87dfa54-2548-4cf1-ad02-c7663263650c/openstackclient/0.log" Oct 13 18:29:20 crc kubenswrapper[4720]: I1013 18:29:20.243068 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-fkb8p_d6433630-935f-4a61-acab-4ceb6de36866/openstack-network-exporter/0.log" Oct 13 18:29:20 crc kubenswrapper[4720]: I1013 18:29:20.351159 4720 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_8ab3d870-8836-484b-a291-4bc7b329ed83/nova-metadata-metadata/0.log" Oct 13 18:29:20 crc kubenswrapper[4720]: I1013 18:29:20.643480 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-cz99q_b92efbfa-6501-4601-9432-8c37dbe4e020/ovsdb-server-init/0.log" Oct 13 18:29:20 crc kubenswrapper[4720]: I1013 18:29:20.797038 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-cz99q_b92efbfa-6501-4601-9432-8c37dbe4e020/ovsdb-server-init/0.log" Oct 13 18:29:20 crc kubenswrapper[4720]: I1013 18:29:20.802172 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-cz99q_b92efbfa-6501-4601-9432-8c37dbe4e020/ovs-vswitchd/0.log" Oct 13 18:29:20 crc kubenswrapper[4720]: I1013 18:29:20.806765 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-cz99q_b92efbfa-6501-4601-9432-8c37dbe4e020/ovsdb-server/0.log" Oct 13 18:29:21 crc kubenswrapper[4720]: I1013 18:29:21.047702 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-vbc6h_283c0b58-d0a1-4cf1-af87-3859306c4a60/ovn-controller/0.log" Oct 13 18:29:21 crc kubenswrapper[4720]: I1013 18:29:21.080004 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-fhdbt_64141929-3427-4673-9aea-5ce314ceb23b/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Oct 13 18:29:21 crc kubenswrapper[4720]: I1013 18:29:21.273890 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_de1b534b-dfe4-42f3-ac5f-4aace4f956b6/openstack-network-exporter/0.log" Oct 13 18:29:21 crc kubenswrapper[4720]: I1013 18:29:21.276130 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_de1b534b-dfe4-42f3-ac5f-4aace4f956b6/ovn-northd/0.log" Oct 13 18:29:21 crc kubenswrapper[4720]: I1013 18:29:21.446717 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_a3b2dccc-71b7-4dd6-9c8d-f1c12382a832/openstack-network-exporter/0.log" Oct 13 18:29:21 crc kubenswrapper[4720]: I1013 18:29:21.509763 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_a3b2dccc-71b7-4dd6-9c8d-f1c12382a832/ovsdbserver-nb/0.log" Oct 13 18:29:21 crc kubenswrapper[4720]: I1013 18:29:21.603610 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_01489c11-3710-4d60-a702-71fda5b496ea/openstack-network-exporter/0.log" Oct 13 18:29:21 crc kubenswrapper[4720]: I1013 18:29:21.638553 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_01489c11-3710-4d60-a702-71fda5b496ea/ovsdbserver-sb/0.log" Oct 13 18:29:21 crc kubenswrapper[4720]: I1013 18:29:21.874359 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-544f6df47b-z9rm6_acf3c288-2800-445a-9d67-134e0a7faac9/placement-api/0.log" Oct 13 18:29:21 crc kubenswrapper[4720]: I1013 18:29:21.950483 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-544f6df47b-z9rm6_acf3c288-2800-445a-9d67-134e0a7faac9/placement-log/0.log" Oct 13 18:29:22 crc kubenswrapper[4720]: I1013 18:29:22.007353 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_0bc24914-0bdd-4fa7-a859-a4d4f06f0455/setup-container/0.log" Oct 13 18:29:22 crc kubenswrapper[4720]: I1013 18:29:22.246554 4720 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_0bc24914-0bdd-4fa7-a859-a4d4f06f0455/setup-container/0.log" Oct 13 18:29:22 crc kubenswrapper[4720]: I1013 18:29:22.282670 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_0bc24914-0bdd-4fa7-a859-a4d4f06f0455/rabbitmq/0.log" Oct 13 18:29:22 crc kubenswrapper[4720]: I1013 18:29:22.386070 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_234df878-2921-45dc-854c-b3840afdbd45/setup-container/0.log" Oct 13 18:29:22 crc kubenswrapper[4720]: I1013 18:29:22.516346 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_234df878-2921-45dc-854c-b3840afdbd45/rabbitmq/0.log" Oct 13 18:29:22 crc kubenswrapper[4720]: I1013 18:29:22.551225 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_234df878-2921-45dc-854c-b3840afdbd45/setup-container/0.log" Oct 13 18:29:22 crc kubenswrapper[4720]: I1013 18:29:22.592487 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-r964t_28cc87d3-31ea-48dd-8169-3ac47061e244/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Oct 13 18:29:22 crc kubenswrapper[4720]: I1013 18:29:22.804640 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-fvrff_1571f354-4e14-443b-b5fa-b0158ed87248/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Oct 13 18:29:22 crc kubenswrapper[4720]: I1013 18:29:22.896958 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-fj58w_28ae3d76-f715-46df-be4a-d621a7467347/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Oct 13 18:29:23 crc kubenswrapper[4720]: I1013 18:29:23.011558 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-94xkd_d6ae1ade-ceec-4b00-b028-1272c83dea9a/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Oct 13 18:29:23 crc kubenswrapper[4720]: I1013 18:29:23.157893 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-8vhmv_06bc5000-9f94-4cff-ade7-ac063d97ef79/ssh-known-hosts-edpm-deployment/0.log" Oct 13 18:29:23 crc kubenswrapper[4720]: I1013 18:29:23.379302 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-694bd85589-jdgbb_d51a0725-9566-428f-a34b-3b0345774d1f/proxy-server/0.log" Oct 13 18:29:23 crc kubenswrapper[4720]: I1013 18:29:23.405296 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-694bd85589-jdgbb_d51a0725-9566-428f-a34b-3b0345774d1f/proxy-httpd/0.log" Oct 13 18:29:23 crc kubenswrapper[4720]: I1013 18:29:23.475817 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-9s2xb_3f752de1-5826-4009-a77c-b9186d9811ea/swift-ring-rebalance/0.log" Oct 13 18:29:23 crc kubenswrapper[4720]: I1013 18:29:23.620805 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2ffe28a4-ed4a-44c6-b982-501575dd907d/account-auditor/0.log" Oct 13 18:29:23 crc kubenswrapper[4720]: I1013 18:29:23.686007 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2ffe28a4-ed4a-44c6-b982-501575dd907d/account-replicator/0.log" Oct 13 18:29:23 crc kubenswrapper[4720]: I1013 18:29:23.695391 4720 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_swift-storage-0_2ffe28a4-ed4a-44c6-b982-501575dd907d/account-reaper/0.log" Oct 13 18:29:23 crc kubenswrapper[4720]: I1013 18:29:23.801717 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2ffe28a4-ed4a-44c6-b982-501575dd907d/container-auditor/0.log" Oct 13 18:29:23 crc kubenswrapper[4720]: I1013 18:29:23.845477 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2ffe28a4-ed4a-44c6-b982-501575dd907d/account-server/0.log" Oct 13 18:29:23 crc kubenswrapper[4720]: I1013 18:29:23.968471 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2ffe28a4-ed4a-44c6-b982-501575dd907d/container-updater/0.log" Oct 13 18:29:24 crc kubenswrapper[4720]: I1013 18:29:24.005393 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2ffe28a4-ed4a-44c6-b982-501575dd907d/container-replicator/0.log" Oct 13 18:29:24 crc kubenswrapper[4720]: I1013 18:29:24.052070 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2ffe28a4-ed4a-44c6-b982-501575dd907d/container-server/0.log" Oct 13 18:29:24 crc kubenswrapper[4720]: I1013 18:29:24.311487 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2ffe28a4-ed4a-44c6-b982-501575dd907d/object-auditor/0.log" Oct 13 18:29:24 crc kubenswrapper[4720]: I1013 18:29:24.356182 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2ffe28a4-ed4a-44c6-b982-501575dd907d/object-expirer/0.log" Oct 13 18:29:24 crc kubenswrapper[4720]: I1013 18:29:24.482552 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2ffe28a4-ed4a-44c6-b982-501575dd907d/object-replicator/0.log" Oct 13 18:29:24 crc kubenswrapper[4720]: I1013 18:29:24.508931 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2ffe28a4-ed4a-44c6-b982-501575dd907d/object-server/0.log" Oct 13 18:29:24 crc kubenswrapper[4720]: I1013 18:29:24.600491 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2ffe28a4-ed4a-44c6-b982-501575dd907d/rsync/0.log" Oct 13 18:29:24 crc kubenswrapper[4720]: I1013 18:29:24.641094 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2ffe28a4-ed4a-44c6-b982-501575dd907d/object-updater/0.log" Oct 13 18:29:24 crc kubenswrapper[4720]: I1013 18:29:24.682143 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2ffe28a4-ed4a-44c6-b982-501575dd907d/swift-recon-cron/0.log" Oct 13 18:29:24 crc kubenswrapper[4720]: I1013 18:29:24.883652 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-bsltd_6c0e5c67-6b6c-4b09-8d45-f37f83c017a7/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Oct 13 18:29:24 crc kubenswrapper[4720]: I1013 18:29:24.924471 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_ece01f62-fd6d-4c42-9c9a-3bc25feed3cb/tempest-tests-tempest-tests-runner/0.log" Oct 13 18:29:25 crc kubenswrapper[4720]: I1013 18:29:25.140089 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_82b017de-6691-4cf4-941f-9e0334669ced/test-operator-logs-container/0.log" Oct 13 18:29:25 crc kubenswrapper[4720]: I1013 18:29:25.204300 4720 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-c7zjx_3d96fecb-1b5a-4d39-8f6f-82755c63a757/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Oct 13 18:29:35 crc kubenswrapper[4720]: I1013 18:29:35.271662 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_e6b1817f-f719-4727-ad61-56061b241d4b/memcached/0.log" Oct 13 18:29:51 crc kubenswrapper[4720]: I1013 18:29:51.396934 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-64f84fcdbb-bk68v_d34e7c64-7562-4a1a-8d47-20b3bb785756/kube-rbac-proxy/0.log" Oct 13 18:29:51 crc kubenswrapper[4720]: I1013 18:29:51.422677 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-64f84fcdbb-bk68v_d34e7c64-7562-4a1a-8d47-20b3bb785756/manager/0.log" Oct 13 18:29:51 crc kubenswrapper[4720]: I1013 18:29:51.593566 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-59cdc64769-nlnsl_d81c88e6-1b2a-405d-861a-ca4b3baed83d/kube-rbac-proxy/0.log" Oct 13 18:29:51 crc kubenswrapper[4720]: I1013 18:29:51.664465 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-59cdc64769-nlnsl_d81c88e6-1b2a-405d-861a-ca4b3baed83d/manager/0.log" Oct 13 18:29:51 crc kubenswrapper[4720]: I1013 18:29:51.756532 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-687df44cdb-ldlvt_c25871a6-cdf1-49c1-8d51-ab4fb186fa83/manager/0.log" Oct 13 18:29:51 crc kubenswrapper[4720]: I1013 18:29:51.773368 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-687df44cdb-ldlvt_c25871a6-cdf1-49c1-8d51-ab4fb186fa83/kube-rbac-proxy/0.log" Oct 13 18:29:51 crc kubenswrapper[4720]: I1013 18:29:51.868985 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f4b764a5aa97d0f87d08f09894533aff6b93246953653cc0ea80385c49l69l6_f56e63f6-a476-4150-a661-07e988c98f28/util/0.log" Oct 13 18:29:51 crc kubenswrapper[4720]: I1013 18:29:51.998858 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f4b764a5aa97d0f87d08f09894533aff6b93246953653cc0ea80385c49l69l6_f56e63f6-a476-4150-a661-07e988c98f28/util/0.log" Oct 13 18:29:51 crc kubenswrapper[4720]: I1013 18:29:51.999498 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f4b764a5aa97d0f87d08f09894533aff6b93246953653cc0ea80385c49l69l6_f56e63f6-a476-4150-a661-07e988c98f28/pull/0.log" Oct 13 18:29:52 crc kubenswrapper[4720]: I1013 18:29:52.021596 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f4b764a5aa97d0f87d08f09894533aff6b93246953653cc0ea80385c49l69l6_f56e63f6-a476-4150-a661-07e988c98f28/pull/0.log" Oct 13 18:29:52 crc kubenswrapper[4720]: I1013 18:29:52.168544 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f4b764a5aa97d0f87d08f09894533aff6b93246953653cc0ea80385c49l69l6_f56e63f6-a476-4150-a661-07e988c98f28/pull/0.log" Oct 13 18:29:52 crc kubenswrapper[4720]: I1013 18:29:52.178673 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f4b764a5aa97d0f87d08f09894533aff6b93246953653cc0ea80385c49l69l6_f56e63f6-a476-4150-a661-07e988c98f28/extract/0.log" Oct 13 18:29:52 crc kubenswrapper[4720]: I1013 
18:29:52.178719 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f4b764a5aa97d0f87d08f09894533aff6b93246953653cc0ea80385c49l69l6_f56e63f6-a476-4150-a661-07e988c98f28/util/0.log" Oct 13 18:29:52 crc kubenswrapper[4720]: I1013 18:29:52.313477 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-7bb46cd7d-v7vx2_a9b388de-4993-46d1-86db-ac92a9df4f2f/kube-rbac-proxy/0.log" Oct 13 18:29:52 crc kubenswrapper[4720]: I1013 18:29:52.370347 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-7bb46cd7d-v7vx2_a9b388de-4993-46d1-86db-ac92a9df4f2f/manager/0.log" Oct 13 18:29:52 crc kubenswrapper[4720]: I1013 18:29:52.416896 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-6d9967f8dd-qvv82_771ce8c4-ac65-4db7-bc56-a8b7cb2f1448/kube-rbac-proxy/0.log" Oct 13 18:29:52 crc kubenswrapper[4720]: I1013 18:29:52.510143 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-6d9967f8dd-qvv82_771ce8c4-ac65-4db7-bc56-a8b7cb2f1448/manager/0.log" Oct 13 18:29:52 crc kubenswrapper[4720]: I1013 18:29:52.600489 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-6d74794d9b-hvb85_d266783d-75ba-4864-af5d-4f2b8702c6a9/kube-rbac-proxy/0.log" Oct 13 18:29:52 crc kubenswrapper[4720]: I1013 18:29:52.642605 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-6d74794d9b-hvb85_d266783d-75ba-4864-af5d-4f2b8702c6a9/manager/0.log" Oct 13 18:29:52 crc kubenswrapper[4720]: I1013 18:29:52.781656 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-585fc5b659-2f6bd_45c8f080-0f28-47b5-80df-e1877c3f77bb/kube-rbac-proxy/0.log" Oct 13 18:29:52 crc kubenswrapper[4720]: I1013 18:29:52.875141 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-585fc5b659-2f6bd_45c8f080-0f28-47b5-80df-e1877c3f77bb/manager/0.log" Oct 13 18:29:52 crc kubenswrapper[4720]: I1013 18:29:52.889592 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-74cb5cbc49-bxtwp_5a86eb76-3453-4f0e-8529-c877f739d822/kube-rbac-proxy/0.log" Oct 13 18:29:52 crc kubenswrapper[4720]: I1013 18:29:52.945103 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-74cb5cbc49-bxtwp_5a86eb76-3453-4f0e-8529-c877f739d822/manager/0.log" Oct 13 18:29:53 crc kubenswrapper[4720]: I1013 18:29:53.024031 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-ddb98f99b-qqmcw_32b53ed3-af12-4a7d-b371-eab8aa1ab1bb/kube-rbac-proxy/0.log" Oct 13 18:29:53 crc kubenswrapper[4720]: I1013 18:29:53.111354 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-ddb98f99b-qqmcw_32b53ed3-af12-4a7d-b371-eab8aa1ab1bb/manager/0.log" Oct 13 18:29:53 crc kubenswrapper[4720]: I1013 18:29:53.210339 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-59578bc799-mhlvn_284cd6cf-5985-4cad-a31c-f91f3c2098c6/kube-rbac-proxy/0.log" Oct 13 18:29:53 crc 
kubenswrapper[4720]: I1013 18:29:53.211363 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-59578bc799-mhlvn_284cd6cf-5985-4cad-a31c-f91f3c2098c6/manager/0.log" Oct 13 18:29:53 crc kubenswrapper[4720]: I1013 18:29:53.325761 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-5777b4f897-mpdxw_30a28fe6-1905-48df-ab2d-b9d92eaf940e/kube-rbac-proxy/0.log" Oct 13 18:29:53 crc kubenswrapper[4720]: I1013 18:29:53.405788 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-5777b4f897-mpdxw_30a28fe6-1905-48df-ab2d-b9d92eaf940e/manager/0.log" Oct 13 18:29:53 crc kubenswrapper[4720]: I1013 18:29:53.471777 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-797d478b46-m44cs_ccb8b109-9d10-48f5-b8ce-65ad05b5e1a4/kube-rbac-proxy/0.log" Oct 13 18:29:53 crc kubenswrapper[4720]: I1013 18:29:53.540444 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-797d478b46-m44cs_ccb8b109-9d10-48f5-b8ce-65ad05b5e1a4/manager/0.log" Oct 13 18:29:53 crc kubenswrapper[4720]: I1013 18:29:53.606696 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-57bb74c7bf-dkbrd_246b649b-7481-433e-aaf0-30cebf5543d8/kube-rbac-proxy/0.log" Oct 13 18:29:53 crc kubenswrapper[4720]: I1013 18:29:53.728849 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-57bb74c7bf-dkbrd_246b649b-7481-433e-aaf0-30cebf5543d8/manager/0.log" Oct 13 18:29:53 crc kubenswrapper[4720]: I1013 18:29:53.792552 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6d7c7ddf95-4jfq5_60fc7b7f-c85a-4a6d-8de9-e8e9e8df8ada/kube-rbac-proxy/0.log" Oct 13 18:29:53 crc kubenswrapper[4720]: I1013 18:29:53.803718 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6d7c7ddf95-4jfq5_60fc7b7f-c85a-4a6d-8de9-e8e9e8df8ada/manager/0.log" Oct 13 18:29:53 crc kubenswrapper[4720]: I1013 18:29:53.939031 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6cc7fb757dd6zq7_b7c96c4b-b0c5-4c82-a6ab-3878c394eab0/kube-rbac-proxy/0.log" Oct 13 18:29:53 crc kubenswrapper[4720]: I1013 18:29:53.974566 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6cc7fb757dd6zq7_b7c96c4b-b0c5-4c82-a6ab-3878c394eab0/manager/0.log" Oct 13 18:29:54 crc kubenswrapper[4720]: I1013 18:29:54.141974 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-bb4f97fd9-d7cs5_4b641160-215b-4547-a820-d613c04d9348/kube-rbac-proxy/0.log" Oct 13 18:29:54 crc kubenswrapper[4720]: I1013 18:29:54.262930 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-c9c874ff7-8qrr8_2a1eb7a4-db4c-4029-9320-c447d9f1c69c/kube-rbac-proxy/0.log" Oct 13 18:29:54 crc kubenswrapper[4720]: I1013 18:29:54.482539 4720 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_openstack-operator-index-qnssk_21cd75be-1f87-4a83-a140-d31263d1c86f/registry-server/0.log" Oct 13 18:29:54 crc kubenswrapper[4720]: I1013 18:29:54.501616 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-c9c874ff7-8qrr8_2a1eb7a4-db4c-4029-9320-c447d9f1c69c/operator/0.log" Oct 13 18:29:54 crc kubenswrapper[4720]: I1013 18:29:54.700026 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-869cc7797f-gzwxn_487124d6-9dcd-4173-8f78-2dbf29cafe87/kube-rbac-proxy/0.log" Oct 13 18:29:54 crc kubenswrapper[4720]: I1013 18:29:54.736553 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-869cc7797f-gzwxn_487124d6-9dcd-4173-8f78-2dbf29cafe87/manager/0.log" Oct 13 18:29:54 crc kubenswrapper[4720]: I1013 18:29:54.940792 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-664664cb68-xwn8g_2eab29c4-2ebe-4f71-af0d-df5f0d113f66/kube-rbac-proxy/0.log" Oct 13 18:29:54 crc kubenswrapper[4720]: I1013 18:29:54.947459 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-664664cb68-xwn8g_2eab29c4-2ebe-4f71-af0d-df5f0d113f66/manager/0.log" Oct 13 18:29:55 crc kubenswrapper[4720]: I1013 18:29:55.070497 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-5f97d8c699-h959b_f7d32fd1-190f-46ec-a313-b3c0b2c58556/operator/0.log" Oct 13 18:29:55 crc kubenswrapper[4720]: I1013 18:29:55.179572 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f4d5dfdc6-g4th9_935d79c8-281f-4ad8-8c6d-404c0e89653e/kube-rbac-proxy/0.log" Oct 13 18:29:55 crc kubenswrapper[4720]: I1013 18:29:55.205723 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-bb4f97fd9-d7cs5_4b641160-215b-4547-a820-d613c04d9348/manager/0.log" Oct 13 18:29:55 crc kubenswrapper[4720]: I1013 18:29:55.249570 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f4d5dfdc6-g4th9_935d79c8-281f-4ad8-8c6d-404c0e89653e/manager/0.log" Oct 13 18:29:55 crc kubenswrapper[4720]: I1013 18:29:55.308784 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-578874c84d-62vnr_20db86ae-f595-4a1d-b000-c97df02b65af/kube-rbac-proxy/0.log" Oct 13 18:29:55 crc kubenswrapper[4720]: I1013 18:29:55.369020 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-578874c84d-62vnr_20db86ae-f595-4a1d-b000-c97df02b65af/manager/0.log" Oct 13 18:29:55 crc kubenswrapper[4720]: I1013 18:29:55.429597 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-ffcdd6c94-xm8vk_5c9d42bc-4b65-42f2-beda-164c7c5ba3e2/kube-rbac-proxy/0.log" Oct 13 18:29:55 crc kubenswrapper[4720]: I1013 18:29:55.503680 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-ffcdd6c94-xm8vk_5c9d42bc-4b65-42f2-beda-164c7c5ba3e2/manager/0.log" Oct 13 18:29:55 crc kubenswrapper[4720]: I1013 18:29:55.546153 4720 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-646675d848-c8bnc_492905c0-fe64-45b8-af6b-5d7373c3f71a/kube-rbac-proxy/0.log" Oct 13 18:29:55 crc kubenswrapper[4720]: I1013 18:29:55.617362 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-646675d848-c8bnc_492905c0-fe64-45b8-af6b-5d7373c3f71a/manager/0.log" Oct 13 18:30:00 crc kubenswrapper[4720]: I1013 18:30:00.143445 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29339670-8qx7z"] Oct 13 18:30:00 crc kubenswrapper[4720]: E1013 18:30:00.144123 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee6bf4a1-968b-42e6-9504-b83f90181d0e" containerName="container-00" Oct 13 18:30:00 crc kubenswrapper[4720]: I1013 18:30:00.144136 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee6bf4a1-968b-42e6-9504-b83f90181d0e" containerName="container-00" Oct 13 18:30:00 crc kubenswrapper[4720]: I1013 18:30:00.144341 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee6bf4a1-968b-42e6-9504-b83f90181d0e" containerName="container-00" Oct 13 18:30:00 crc kubenswrapper[4720]: I1013 18:30:00.145044 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29339670-8qx7z" Oct 13 18:30:00 crc kubenswrapper[4720]: I1013 18:30:00.148449 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 13 18:30:00 crc kubenswrapper[4720]: I1013 18:30:00.149349 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 13 18:30:00 crc kubenswrapper[4720]: I1013 18:30:00.157404 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29339670-8qx7z"] Oct 13 18:30:00 crc kubenswrapper[4720]: I1013 18:30:00.280817 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5w9x\" (UniqueName: \"kubernetes.io/projected/e13f69b3-6a51-45c7-ac38-56e1f8634975-kube-api-access-c5w9x\") pod \"collect-profiles-29339670-8qx7z\" (UID: \"e13f69b3-6a51-45c7-ac38-56e1f8634975\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339670-8qx7z" Oct 13 18:30:00 crc kubenswrapper[4720]: I1013 18:30:00.281223 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e13f69b3-6a51-45c7-ac38-56e1f8634975-secret-volume\") pod \"collect-profiles-29339670-8qx7z\" (UID: \"e13f69b3-6a51-45c7-ac38-56e1f8634975\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339670-8qx7z" Oct 13 18:30:00 crc kubenswrapper[4720]: I1013 18:30:00.281390 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e13f69b3-6a51-45c7-ac38-56e1f8634975-config-volume\") pod \"collect-profiles-29339670-8qx7z\" (UID: \"e13f69b3-6a51-45c7-ac38-56e1f8634975\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339670-8qx7z" Oct 13 18:30:00 crc kubenswrapper[4720]: I1013 18:30:00.382998 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5w9x\" (UniqueName: 
\"kubernetes.io/projected/e13f69b3-6a51-45c7-ac38-56e1f8634975-kube-api-access-c5w9x\") pod \"collect-profiles-29339670-8qx7z\" (UID: \"e13f69b3-6a51-45c7-ac38-56e1f8634975\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339670-8qx7z" Oct 13 18:30:00 crc kubenswrapper[4720]: I1013 18:30:00.383109 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e13f69b3-6a51-45c7-ac38-56e1f8634975-secret-volume\") pod \"collect-profiles-29339670-8qx7z\" (UID: \"e13f69b3-6a51-45c7-ac38-56e1f8634975\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339670-8qx7z" Oct 13 18:30:00 crc kubenswrapper[4720]: I1013 18:30:00.383165 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e13f69b3-6a51-45c7-ac38-56e1f8634975-config-volume\") pod \"collect-profiles-29339670-8qx7z\" (UID: \"e13f69b3-6a51-45c7-ac38-56e1f8634975\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339670-8qx7z" Oct 13 18:30:00 crc kubenswrapper[4720]: I1013 18:30:00.383967 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e13f69b3-6a51-45c7-ac38-56e1f8634975-config-volume\") pod \"collect-profiles-29339670-8qx7z\" (UID: \"e13f69b3-6a51-45c7-ac38-56e1f8634975\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339670-8qx7z" Oct 13 18:30:00 crc kubenswrapper[4720]: I1013 18:30:00.390487 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e13f69b3-6a51-45c7-ac38-56e1f8634975-secret-volume\") pod \"collect-profiles-29339670-8qx7z\" (UID: \"e13f69b3-6a51-45c7-ac38-56e1f8634975\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339670-8qx7z" Oct 13 18:30:00 crc kubenswrapper[4720]: I1013 18:30:00.405547 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5w9x\" (UniqueName: \"kubernetes.io/projected/e13f69b3-6a51-45c7-ac38-56e1f8634975-kube-api-access-c5w9x\") pod \"collect-profiles-29339670-8qx7z\" (UID: \"e13f69b3-6a51-45c7-ac38-56e1f8634975\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29339670-8qx7z" Oct 13 18:30:00 crc kubenswrapper[4720]: I1013 18:30:00.464954 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29339670-8qx7z" Oct 13 18:30:00 crc kubenswrapper[4720]: I1013 18:30:00.906589 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29339670-8qx7z"] Oct 13 18:30:01 crc kubenswrapper[4720]: I1013 18:30:01.331679 4720 generic.go:334] "Generic (PLEG): container finished" podID="e13f69b3-6a51-45c7-ac38-56e1f8634975" containerID="ca8b0eb4903f30550c85db27c84deb56ff0417e64972232353a24ac2e26186ba" exitCode=0 Oct 13 18:30:01 crc kubenswrapper[4720]: I1013 18:30:01.331788 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29339670-8qx7z" event={"ID":"e13f69b3-6a51-45c7-ac38-56e1f8634975","Type":"ContainerDied","Data":"ca8b0eb4903f30550c85db27c84deb56ff0417e64972232353a24ac2e26186ba"} Oct 13 18:30:01 crc kubenswrapper[4720]: I1013 18:30:01.332036 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29339670-8qx7z" event={"ID":"e13f69b3-6a51-45c7-ac38-56e1f8634975","Type":"ContainerStarted","Data":"fdb432e48564868254f5019335e10fa376f2e65ed75c9c1f3dc1f0f25f416fda"} Oct 13 18:30:02 crc kubenswrapper[4720]: I1013 18:30:02.729160 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29339670-8qx7z" Oct 13 18:30:02 crc kubenswrapper[4720]: I1013 18:30:02.838472 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e13f69b3-6a51-45c7-ac38-56e1f8634975-secret-volume\") pod \"e13f69b3-6a51-45c7-ac38-56e1f8634975\" (UID: \"e13f69b3-6a51-45c7-ac38-56e1f8634975\") " Oct 13 18:30:02 crc kubenswrapper[4720]: I1013 18:30:02.838773 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c5w9x\" (UniqueName: \"kubernetes.io/projected/e13f69b3-6a51-45c7-ac38-56e1f8634975-kube-api-access-c5w9x\") pod \"e13f69b3-6a51-45c7-ac38-56e1f8634975\" (UID: \"e13f69b3-6a51-45c7-ac38-56e1f8634975\") " Oct 13 18:30:02 crc kubenswrapper[4720]: I1013 18:30:02.838808 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e13f69b3-6a51-45c7-ac38-56e1f8634975-config-volume\") pod \"e13f69b3-6a51-45c7-ac38-56e1f8634975\" (UID: \"e13f69b3-6a51-45c7-ac38-56e1f8634975\") " Oct 13 18:30:02 crc kubenswrapper[4720]: I1013 18:30:02.839434 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e13f69b3-6a51-45c7-ac38-56e1f8634975-config-volume" (OuterVolumeSpecName: "config-volume") pod "e13f69b3-6a51-45c7-ac38-56e1f8634975" (UID: "e13f69b3-6a51-45c7-ac38-56e1f8634975"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 13 18:30:02 crc kubenswrapper[4720]: I1013 18:30:02.845412 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e13f69b3-6a51-45c7-ac38-56e1f8634975-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e13f69b3-6a51-45c7-ac38-56e1f8634975" (UID: "e13f69b3-6a51-45c7-ac38-56e1f8634975"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 13 18:30:02 crc kubenswrapper[4720]: I1013 18:30:02.859612 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e13f69b3-6a51-45c7-ac38-56e1f8634975-kube-api-access-c5w9x" (OuterVolumeSpecName: "kube-api-access-c5w9x") pod "e13f69b3-6a51-45c7-ac38-56e1f8634975" (UID: "e13f69b3-6a51-45c7-ac38-56e1f8634975"). InnerVolumeSpecName "kube-api-access-c5w9x". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 18:30:02 crc kubenswrapper[4720]: I1013 18:30:02.940962 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c5w9x\" (UniqueName: \"kubernetes.io/projected/e13f69b3-6a51-45c7-ac38-56e1f8634975-kube-api-access-c5w9x\") on node \"crc\" DevicePath \"\"" Oct 13 18:30:02 crc kubenswrapper[4720]: I1013 18:30:02.940996 4720 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e13f69b3-6a51-45c7-ac38-56e1f8634975-config-volume\") on node \"crc\" DevicePath \"\"" Oct 13 18:30:02 crc kubenswrapper[4720]: I1013 18:30:02.941007 4720 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e13f69b3-6a51-45c7-ac38-56e1f8634975-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 13 18:30:03 crc kubenswrapper[4720]: I1013 18:30:03.347466 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29339670-8qx7z" event={"ID":"e13f69b3-6a51-45c7-ac38-56e1f8634975","Type":"ContainerDied","Data":"fdb432e48564868254f5019335e10fa376f2e65ed75c9c1f3dc1f0f25f416fda"} Oct 13 18:30:03 crc kubenswrapper[4720]: I1013 18:30:03.347516 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fdb432e48564868254f5019335e10fa376f2e65ed75c9c1f3dc1f0f25f416fda" Oct 13 18:30:03 crc kubenswrapper[4720]: I1013 18:30:03.347538 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29339670-8qx7z" Oct 13 18:30:03 crc kubenswrapper[4720]: I1013 18:30:03.831391 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29339625-d4qnb"] Oct 13 18:30:03 crc kubenswrapper[4720]: I1013 18:30:03.838992 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29339625-d4qnb"] Oct 13 18:30:05 crc kubenswrapper[4720]: I1013 18:30:05.193910 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c19bc4b-bf72-4e25-aee2-310efe50630f" path="/var/lib/kubelet/pods/3c19bc4b-bf72-4e25-aee2-310efe50630f/volumes" Oct 13 18:30:11 crc kubenswrapper[4720]: I1013 18:30:11.141433 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-mjzm6_cabe6b29-bccd-4995-ab54-b6cabc86f7bf/control-plane-machine-set-operator/0.log" Oct 13 18:30:11 crc kubenswrapper[4720]: I1013 18:30:11.300704 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-dvc6z_43c40b45-9695-4d29-b627-c4ab23d1d6d0/kube-rbac-proxy/0.log" Oct 13 18:30:11 crc kubenswrapper[4720]: I1013 18:30:11.315504 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-dvc6z_43c40b45-9695-4d29-b627-c4ab23d1d6d0/machine-api-operator/0.log" Oct 13 18:30:21 crc kubenswrapper[4720]: I1013 18:30:21.128064 4720 scope.go:117] "RemoveContainer" containerID="805d1ec758d1bf3bfa0ab3daed7f06b6259d33f7f3050d16847ebe0cd1ff4bcf" Oct 13 18:30:24 crc kubenswrapper[4720]: I1013 18:30:24.980630 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-hcgbx_468376ef-c1ab-4db7-9006-0ded29f5c690/cert-manager-controller/0.log" Oct 13 18:30:25 crc kubenswrapper[4720]: I1013 18:30:25.120727 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-x8zf7_c0e61a1b-8c01-4c98-a6bd-cff432642c53/cert-manager-cainjector/0.log" Oct 13 18:30:25 crc kubenswrapper[4720]: I1013 18:30:25.125943 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-mrmx6_7142c613-395a-40ef-bef1-20ed0b6cdad3/cert-manager-webhook/0.log" Oct 13 18:30:38 crc kubenswrapper[4720]: I1013 18:30:38.705323 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-6b874cbd85-jt5jd_f1c52489-5f05-43ca-a79c-db2a69061eac/nmstate-console-plugin/0.log" Oct 13 18:30:38 crc kubenswrapper[4720]: I1013 18:30:38.861989 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-r6sg8_420f8bbd-5b94-4775-8248-68220b91202f/nmstate-handler/0.log" Oct 13 18:30:38 crc kubenswrapper[4720]: I1013 18:30:38.925665 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-fdff9cb8d-k7h8j_fa98b66b-f1b2-4d57-8386-b449bf1076ec/kube-rbac-proxy/0.log" Oct 13 18:30:38 crc kubenswrapper[4720]: I1013 18:30:38.936991 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-fdff9cb8d-k7h8j_fa98b66b-f1b2-4d57-8386-b449bf1076ec/nmstate-metrics/0.log" Oct 13 18:30:39 crc kubenswrapper[4720]: I1013 18:30:39.106983 4720 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-operator-858ddd8f98-jprhs_ffdb8c39-acdf-40d9-9c23-bb881eb0b755/nmstate-operator/0.log" Oct 13 18:30:39 crc kubenswrapper[4720]: I1013 18:30:39.120821 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-6cdbc54649-66xwd_e6844590-4dcb-4007-9e33-12ded957f55b/nmstate-webhook/0.log" Oct 13 18:30:45 crc kubenswrapper[4720]: I1013 18:30:45.212315 4720 patch_prober.go:28] interesting pod/machine-config-daemon-htwnl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 18:30:45 crc kubenswrapper[4720]: I1013 18:30:45.212917 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 18:30:47 crc kubenswrapper[4720]: I1013 18:30:47.575349 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-l5pnv"] Oct 13 18:30:47 crc kubenswrapper[4720]: E1013 18:30:47.576255 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e13f69b3-6a51-45c7-ac38-56e1f8634975" containerName="collect-profiles" Oct 13 18:30:47 crc kubenswrapper[4720]: I1013 18:30:47.576272 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="e13f69b3-6a51-45c7-ac38-56e1f8634975" containerName="collect-profiles" Oct 13 18:30:47 crc kubenswrapper[4720]: I1013 18:30:47.576553 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="e13f69b3-6a51-45c7-ac38-56e1f8634975" containerName="collect-profiles" Oct 13 18:30:47 crc kubenswrapper[4720]: I1013 18:30:47.578486 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l5pnv" Oct 13 18:30:47 crc kubenswrapper[4720]: I1013 18:30:47.591305 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-l5pnv"] Oct 13 18:30:47 crc kubenswrapper[4720]: I1013 18:30:47.689010 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd631f30-0a1b-4dde-833e-fac81aaa1368-catalog-content\") pod \"redhat-marketplace-l5pnv\" (UID: \"dd631f30-0a1b-4dde-833e-fac81aaa1368\") " pod="openshift-marketplace/redhat-marketplace-l5pnv" Oct 13 18:30:47 crc kubenswrapper[4720]: I1013 18:30:47.689232 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwcdl\" (UniqueName: \"kubernetes.io/projected/dd631f30-0a1b-4dde-833e-fac81aaa1368-kube-api-access-dwcdl\") pod \"redhat-marketplace-l5pnv\" (UID: \"dd631f30-0a1b-4dde-833e-fac81aaa1368\") " pod="openshift-marketplace/redhat-marketplace-l5pnv" Oct 13 18:30:47 crc kubenswrapper[4720]: I1013 18:30:47.689275 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd631f30-0a1b-4dde-833e-fac81aaa1368-utilities\") pod \"redhat-marketplace-l5pnv\" (UID: \"dd631f30-0a1b-4dde-833e-fac81aaa1368\") " pod="openshift-marketplace/redhat-marketplace-l5pnv" Oct 13 18:30:47 crc kubenswrapper[4720]: I1013 18:30:47.791453 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwcdl\" (UniqueName: \"kubernetes.io/projected/dd631f30-0a1b-4dde-833e-fac81aaa1368-kube-api-access-dwcdl\") pod \"redhat-marketplace-l5pnv\" (UID: \"dd631f30-0a1b-4dde-833e-fac81aaa1368\") " pod="openshift-marketplace/redhat-marketplace-l5pnv" Oct 13 18:30:47 crc kubenswrapper[4720]: I1013 18:30:47.791540 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd631f30-0a1b-4dde-833e-fac81aaa1368-utilities\") pod \"redhat-marketplace-l5pnv\" (UID: \"dd631f30-0a1b-4dde-833e-fac81aaa1368\") " pod="openshift-marketplace/redhat-marketplace-l5pnv" Oct 13 18:30:47 crc kubenswrapper[4720]: I1013 18:30:47.791643 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd631f30-0a1b-4dde-833e-fac81aaa1368-catalog-content\") pod \"redhat-marketplace-l5pnv\" (UID: \"dd631f30-0a1b-4dde-833e-fac81aaa1368\") " pod="openshift-marketplace/redhat-marketplace-l5pnv" Oct 13 18:30:47 crc kubenswrapper[4720]: I1013 18:30:47.792226 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd631f30-0a1b-4dde-833e-fac81aaa1368-catalog-content\") pod \"redhat-marketplace-l5pnv\" (UID: \"dd631f30-0a1b-4dde-833e-fac81aaa1368\") " pod="openshift-marketplace/redhat-marketplace-l5pnv" Oct 13 18:30:47 crc kubenswrapper[4720]: I1013 18:30:47.792256 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd631f30-0a1b-4dde-833e-fac81aaa1368-utilities\") pod \"redhat-marketplace-l5pnv\" (UID: \"dd631f30-0a1b-4dde-833e-fac81aaa1368\") " pod="openshift-marketplace/redhat-marketplace-l5pnv" Oct 13 18:30:47 crc kubenswrapper[4720]: I1013 18:30:47.827563 4720 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-dwcdl\" (UniqueName: \"kubernetes.io/projected/dd631f30-0a1b-4dde-833e-fac81aaa1368-kube-api-access-dwcdl\") pod \"redhat-marketplace-l5pnv\" (UID: \"dd631f30-0a1b-4dde-833e-fac81aaa1368\") " pod="openshift-marketplace/redhat-marketplace-l5pnv" Oct 13 18:30:47 crc kubenswrapper[4720]: I1013 18:30:47.969884 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l5pnv" Oct 13 18:30:48 crc kubenswrapper[4720]: I1013 18:30:48.447470 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-l5pnv"] Oct 13 18:30:48 crc kubenswrapper[4720]: I1013 18:30:48.805875 4720 generic.go:334] "Generic (PLEG): container finished" podID="dd631f30-0a1b-4dde-833e-fac81aaa1368" containerID="288aef87388ccbf0efc17cda5ca9be341ed55125ac5f115d33defe6941151d6d" exitCode=0 Oct 13 18:30:48 crc kubenswrapper[4720]: I1013 18:30:48.805964 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l5pnv" event={"ID":"dd631f30-0a1b-4dde-833e-fac81aaa1368","Type":"ContainerDied","Data":"288aef87388ccbf0efc17cda5ca9be341ed55125ac5f115d33defe6941151d6d"} Oct 13 18:30:48 crc kubenswrapper[4720]: I1013 18:30:48.806238 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l5pnv" event={"ID":"dd631f30-0a1b-4dde-833e-fac81aaa1368","Type":"ContainerStarted","Data":"ddab0aec2f1fd59ef10dbfe768bf73969f4b46758ef3f4df1cf5dd83ff704b3b"} Oct 13 18:30:48 crc kubenswrapper[4720]: I1013 18:30:48.809773 4720 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 13 18:30:49 crc kubenswrapper[4720]: I1013 18:30:49.817361 4720 generic.go:334] "Generic (PLEG): container finished" podID="dd631f30-0a1b-4dde-833e-fac81aaa1368" containerID="22c6b6a90117f1bbca3014aa08ac2cd0e916b6a155d760ae1c09700460f80e4f" exitCode=0 Oct 13 18:30:49 crc kubenswrapper[4720]: I1013 18:30:49.817426 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l5pnv" event={"ID":"dd631f30-0a1b-4dde-833e-fac81aaa1368","Type":"ContainerDied","Data":"22c6b6a90117f1bbca3014aa08ac2cd0e916b6a155d760ae1c09700460f80e4f"} Oct 13 18:30:50 crc kubenswrapper[4720]: I1013 18:30:50.831389 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l5pnv" event={"ID":"dd631f30-0a1b-4dde-833e-fac81aaa1368","Type":"ContainerStarted","Data":"d5893bc45f96852735009e2e9c88f2d08c46545fa8f947e3b76263bf888f2100"} Oct 13 18:30:50 crc kubenswrapper[4720]: I1013 18:30:50.863550 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-l5pnv" podStartSLOduration=2.192709973 podStartE2EDuration="3.863523086s" podCreationTimestamp="2025-10-13 18:30:47 +0000 UTC" firstStartedPulling="2025-10-13 18:30:48.809440278 +0000 UTC m=+3994.266690420" lastFinishedPulling="2025-10-13 18:30:50.480253401 +0000 UTC m=+3995.937503533" observedRunningTime="2025-10-13 18:30:50.852748117 +0000 UTC m=+3996.309998259" watchObservedRunningTime="2025-10-13 18:30:50.863523086 +0000 UTC m=+3996.320773218" Oct 13 18:30:54 crc kubenswrapper[4720]: I1013 18:30:54.582857 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-68d546b9d8-ztrn5_34074aee-3c24-4d8c-929b-d0feb37ead02/kube-rbac-proxy/0.log" Oct 13 18:30:54 crc kubenswrapper[4720]: I1013 18:30:54.728073 4720 log.go:25] 
"Finished parsing log file" path="/var/log/pods/metallb-system_controller-68d546b9d8-ztrn5_34074aee-3c24-4d8c-929b-d0feb37ead02/controller/0.log" Oct 13 18:30:54 crc kubenswrapper[4720]: I1013 18:30:54.788438 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fw488_d10a0690-99f5-416a-b597-d41fa0635070/cp-frr-files/0.log" Oct 13 18:30:54 crc kubenswrapper[4720]: I1013 18:30:54.993827 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fw488_d10a0690-99f5-416a-b597-d41fa0635070/cp-reloader/0.log" Oct 13 18:30:55 crc kubenswrapper[4720]: I1013 18:30:55.004106 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fw488_d10a0690-99f5-416a-b597-d41fa0635070/cp-frr-files/0.log" Oct 13 18:30:55 crc kubenswrapper[4720]: I1013 18:30:55.018829 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fw488_d10a0690-99f5-416a-b597-d41fa0635070/cp-metrics/0.log" Oct 13 18:30:55 crc kubenswrapper[4720]: I1013 18:30:55.088409 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fw488_d10a0690-99f5-416a-b597-d41fa0635070/cp-reloader/0.log" Oct 13 18:30:55 crc kubenswrapper[4720]: I1013 18:30:55.212760 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fw488_d10a0690-99f5-416a-b597-d41fa0635070/cp-reloader/0.log" Oct 13 18:30:55 crc kubenswrapper[4720]: I1013 18:30:55.233319 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fw488_d10a0690-99f5-416a-b597-d41fa0635070/cp-metrics/0.log" Oct 13 18:30:55 crc kubenswrapper[4720]: I1013 18:30:55.237910 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fw488_d10a0690-99f5-416a-b597-d41fa0635070/cp-frr-files/0.log" Oct 13 18:30:55 crc kubenswrapper[4720]: I1013 18:30:55.323470 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fw488_d10a0690-99f5-416a-b597-d41fa0635070/cp-metrics/0.log" Oct 13 18:30:55 crc kubenswrapper[4720]: I1013 18:30:55.449234 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fw488_d10a0690-99f5-416a-b597-d41fa0635070/cp-metrics/0.log" Oct 13 18:30:55 crc kubenswrapper[4720]: I1013 18:30:55.457420 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fw488_d10a0690-99f5-416a-b597-d41fa0635070/cp-frr-files/0.log" Oct 13 18:30:55 crc kubenswrapper[4720]: I1013 18:30:55.491806 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fw488_d10a0690-99f5-416a-b597-d41fa0635070/cp-reloader/0.log" Oct 13 18:30:55 crc kubenswrapper[4720]: I1013 18:30:55.519614 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fw488_d10a0690-99f5-416a-b597-d41fa0635070/controller/0.log" Oct 13 18:30:55 crc kubenswrapper[4720]: I1013 18:30:55.608900 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fw488_d10a0690-99f5-416a-b597-d41fa0635070/frr-metrics/0.log" Oct 13 18:30:55 crc kubenswrapper[4720]: I1013 18:30:55.648440 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fw488_d10a0690-99f5-416a-b597-d41fa0635070/kube-rbac-proxy/0.log" Oct 13 18:30:55 crc kubenswrapper[4720]: I1013 18:30:55.703568 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fw488_d10a0690-99f5-416a-b597-d41fa0635070/kube-rbac-proxy-frr/0.log" Oct 
13 18:30:55 crc kubenswrapper[4720]: I1013 18:30:55.776979 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fw488_d10a0690-99f5-416a-b597-d41fa0635070/reloader/0.log" Oct 13 18:30:55 crc kubenswrapper[4720]: I1013 18:30:55.852285 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-64bf5d555-wlwb8_01bbae6a-0286-4e02-bf7a-bdc1e5ba9e53/frr-k8s-webhook-server/0.log" Oct 13 18:30:56 crc kubenswrapper[4720]: I1013 18:30:56.028338 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-68b7b9f484-t4mwn_676df020-2204-4ef7-88b8-88eb27f8068b/manager/0.log" Oct 13 18:30:56 crc kubenswrapper[4720]: I1013 18:30:56.181749 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-86c9779c6-vtc8m_8eec22d4-687d-427c-a53e-5316b69e5448/webhook-server/0.log" Oct 13 18:30:56 crc kubenswrapper[4720]: I1013 18:30:56.300254 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-2kfxp_bd395621-3d53-4b0b-b8da-f7d5c7df9570/kube-rbac-proxy/0.log" Oct 13 18:30:56 crc kubenswrapper[4720]: I1013 18:30:56.781510 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-2kfxp_bd395621-3d53-4b0b-b8da-f7d5c7df9570/speaker/0.log" Oct 13 18:30:57 crc kubenswrapper[4720]: I1013 18:30:57.016149 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fw488_d10a0690-99f5-416a-b597-d41fa0635070/frr/0.log" Oct 13 18:30:57 crc kubenswrapper[4720]: I1013 18:30:57.969982 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-l5pnv" Oct 13 18:30:57 crc kubenswrapper[4720]: I1013 18:30:57.970039 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-l5pnv" Oct 13 18:30:58 crc kubenswrapper[4720]: I1013 18:30:58.018880 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-l5pnv" Oct 13 18:30:58 crc kubenswrapper[4720]: I1013 18:30:58.971166 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-l5pnv" Oct 13 18:30:59 crc kubenswrapper[4720]: I1013 18:30:59.027223 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-l5pnv"] Oct 13 18:31:00 crc kubenswrapper[4720]: I1013 18:31:00.932118 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-l5pnv" podUID="dd631f30-0a1b-4dde-833e-fac81aaa1368" containerName="registry-server" containerID="cri-o://d5893bc45f96852735009e2e9c88f2d08c46545fa8f947e3b76263bf888f2100" gracePeriod=2 Oct 13 18:31:01 crc kubenswrapper[4720]: I1013 18:31:01.643750 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l5pnv" Oct 13 18:31:01 crc kubenswrapper[4720]: I1013 18:31:01.759078 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwcdl\" (UniqueName: \"kubernetes.io/projected/dd631f30-0a1b-4dde-833e-fac81aaa1368-kube-api-access-dwcdl\") pod \"dd631f30-0a1b-4dde-833e-fac81aaa1368\" (UID: \"dd631f30-0a1b-4dde-833e-fac81aaa1368\") " Oct 13 18:31:01 crc kubenswrapper[4720]: I1013 18:31:01.759232 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd631f30-0a1b-4dde-833e-fac81aaa1368-utilities\") pod \"dd631f30-0a1b-4dde-833e-fac81aaa1368\" (UID: \"dd631f30-0a1b-4dde-833e-fac81aaa1368\") " Oct 13 18:31:01 crc kubenswrapper[4720]: I1013 18:31:01.759447 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd631f30-0a1b-4dde-833e-fac81aaa1368-catalog-content\") pod \"dd631f30-0a1b-4dde-833e-fac81aaa1368\" (UID: \"dd631f30-0a1b-4dde-833e-fac81aaa1368\") " Oct 13 18:31:01 crc kubenswrapper[4720]: I1013 18:31:01.760535 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd631f30-0a1b-4dde-833e-fac81aaa1368-utilities" (OuterVolumeSpecName: "utilities") pod "dd631f30-0a1b-4dde-833e-fac81aaa1368" (UID: "dd631f30-0a1b-4dde-833e-fac81aaa1368"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 18:31:01 crc kubenswrapper[4720]: I1013 18:31:01.764755 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd631f30-0a1b-4dde-833e-fac81aaa1368-kube-api-access-dwcdl" (OuterVolumeSpecName: "kube-api-access-dwcdl") pod "dd631f30-0a1b-4dde-833e-fac81aaa1368" (UID: "dd631f30-0a1b-4dde-833e-fac81aaa1368"). InnerVolumeSpecName "kube-api-access-dwcdl". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 18:31:01 crc kubenswrapper[4720]: I1013 18:31:01.778355 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd631f30-0a1b-4dde-833e-fac81aaa1368-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dd631f30-0a1b-4dde-833e-fac81aaa1368" (UID: "dd631f30-0a1b-4dde-833e-fac81aaa1368"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 18:31:01 crc kubenswrapper[4720]: I1013 18:31:01.862171 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd631f30-0a1b-4dde-833e-fac81aaa1368-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 13 18:31:01 crc kubenswrapper[4720]: I1013 18:31:01.862524 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dwcdl\" (UniqueName: \"kubernetes.io/projected/dd631f30-0a1b-4dde-833e-fac81aaa1368-kube-api-access-dwcdl\") on node \"crc\" DevicePath \"\"" Oct 13 18:31:01 crc kubenswrapper[4720]: I1013 18:31:01.862535 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd631f30-0a1b-4dde-833e-fac81aaa1368-utilities\") on node \"crc\" DevicePath \"\"" Oct 13 18:31:01 crc kubenswrapper[4720]: I1013 18:31:01.889640 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9wrws"] Oct 13 18:31:01 crc kubenswrapper[4720]: E1013 18:31:01.890038 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd631f30-0a1b-4dde-833e-fac81aaa1368" containerName="extract-utilities" Oct 13 18:31:01 crc kubenswrapper[4720]: I1013 18:31:01.890056 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd631f30-0a1b-4dde-833e-fac81aaa1368" containerName="extract-utilities" Oct 13 18:31:01 crc kubenswrapper[4720]: E1013 18:31:01.890071 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd631f30-0a1b-4dde-833e-fac81aaa1368" containerName="extract-content" Oct 13 18:31:01 crc kubenswrapper[4720]: I1013 18:31:01.890077 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd631f30-0a1b-4dde-833e-fac81aaa1368" containerName="extract-content" Oct 13 18:31:01 crc kubenswrapper[4720]: E1013 18:31:01.890100 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd631f30-0a1b-4dde-833e-fac81aaa1368" containerName="registry-server" Oct 13 18:31:01 crc kubenswrapper[4720]: I1013 18:31:01.890106 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd631f30-0a1b-4dde-833e-fac81aaa1368" containerName="registry-server" Oct 13 18:31:01 crc kubenswrapper[4720]: I1013 18:31:01.890310 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd631f30-0a1b-4dde-833e-fac81aaa1368" containerName="registry-server" Oct 13 18:31:01 crc kubenswrapper[4720]: I1013 18:31:01.893577 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9wrws" Oct 13 18:31:01 crc kubenswrapper[4720]: I1013 18:31:01.907440 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9wrws"] Oct 13 18:31:01 crc kubenswrapper[4720]: I1013 18:31:01.947728 4720 generic.go:334] "Generic (PLEG): container finished" podID="dd631f30-0a1b-4dde-833e-fac81aaa1368" containerID="d5893bc45f96852735009e2e9c88f2d08c46545fa8f947e3b76263bf888f2100" exitCode=0 Oct 13 18:31:01 crc kubenswrapper[4720]: I1013 18:31:01.947773 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l5pnv" event={"ID":"dd631f30-0a1b-4dde-833e-fac81aaa1368","Type":"ContainerDied","Data":"d5893bc45f96852735009e2e9c88f2d08c46545fa8f947e3b76263bf888f2100"} Oct 13 18:31:01 crc kubenswrapper[4720]: I1013 18:31:01.947799 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l5pnv" event={"ID":"dd631f30-0a1b-4dde-833e-fac81aaa1368","Type":"ContainerDied","Data":"ddab0aec2f1fd59ef10dbfe768bf73969f4b46758ef3f4df1cf5dd83ff704b3b"} Oct 13 18:31:01 crc kubenswrapper[4720]: I1013 18:31:01.947815 4720 scope.go:117] "RemoveContainer" containerID="d5893bc45f96852735009e2e9c88f2d08c46545fa8f947e3b76263bf888f2100" Oct 13 18:31:01 crc kubenswrapper[4720]: I1013 18:31:01.947941 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l5pnv" Oct 13 18:31:01 crc kubenswrapper[4720]: I1013 18:31:01.986259 4720 scope.go:117] "RemoveContainer" containerID="22c6b6a90117f1bbca3014aa08ac2cd0e916b6a155d760ae1c09700460f80e4f" Oct 13 18:31:02 crc kubenswrapper[4720]: I1013 18:31:02.005878 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-l5pnv"] Oct 13 18:31:02 crc kubenswrapper[4720]: I1013 18:31:02.016285 4720 scope.go:117] "RemoveContainer" containerID="288aef87388ccbf0efc17cda5ca9be341ed55125ac5f115d33defe6941151d6d" Oct 13 18:31:02 crc kubenswrapper[4720]: I1013 18:31:02.026720 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-l5pnv"] Oct 13 18:31:02 crc kubenswrapper[4720]: I1013 18:31:02.052888 4720 scope.go:117] "RemoveContainer" containerID="d5893bc45f96852735009e2e9c88f2d08c46545fa8f947e3b76263bf888f2100" Oct 13 18:31:02 crc kubenswrapper[4720]: E1013 18:31:02.053526 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5893bc45f96852735009e2e9c88f2d08c46545fa8f947e3b76263bf888f2100\": container with ID starting with d5893bc45f96852735009e2e9c88f2d08c46545fa8f947e3b76263bf888f2100 not found: ID does not exist" containerID="d5893bc45f96852735009e2e9c88f2d08c46545fa8f947e3b76263bf888f2100" Oct 13 18:31:02 crc kubenswrapper[4720]: I1013 18:31:02.053570 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5893bc45f96852735009e2e9c88f2d08c46545fa8f947e3b76263bf888f2100"} err="failed to get container status \"d5893bc45f96852735009e2e9c88f2d08c46545fa8f947e3b76263bf888f2100\": rpc error: code = NotFound desc = could not find container \"d5893bc45f96852735009e2e9c88f2d08c46545fa8f947e3b76263bf888f2100\": container with ID starting with d5893bc45f96852735009e2e9c88f2d08c46545fa8f947e3b76263bf888f2100 not found: ID does not exist" Oct 13 18:31:02 crc kubenswrapper[4720]: I1013 18:31:02.053596 4720 scope.go:117] 
"RemoveContainer" containerID="22c6b6a90117f1bbca3014aa08ac2cd0e916b6a155d760ae1c09700460f80e4f" Oct 13 18:31:02 crc kubenswrapper[4720]: E1013 18:31:02.054330 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22c6b6a90117f1bbca3014aa08ac2cd0e916b6a155d760ae1c09700460f80e4f\": container with ID starting with 22c6b6a90117f1bbca3014aa08ac2cd0e916b6a155d760ae1c09700460f80e4f not found: ID does not exist" containerID="22c6b6a90117f1bbca3014aa08ac2cd0e916b6a155d760ae1c09700460f80e4f" Oct 13 18:31:02 crc kubenswrapper[4720]: I1013 18:31:02.054379 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22c6b6a90117f1bbca3014aa08ac2cd0e916b6a155d760ae1c09700460f80e4f"} err="failed to get container status \"22c6b6a90117f1bbca3014aa08ac2cd0e916b6a155d760ae1c09700460f80e4f\": rpc error: code = NotFound desc = could not find container \"22c6b6a90117f1bbca3014aa08ac2cd0e916b6a155d760ae1c09700460f80e4f\": container with ID starting with 22c6b6a90117f1bbca3014aa08ac2cd0e916b6a155d760ae1c09700460f80e4f not found: ID does not exist" Oct 13 18:31:02 crc kubenswrapper[4720]: I1013 18:31:02.054411 4720 scope.go:117] "RemoveContainer" containerID="288aef87388ccbf0efc17cda5ca9be341ed55125ac5f115d33defe6941151d6d" Oct 13 18:31:02 crc kubenswrapper[4720]: E1013 18:31:02.056445 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"288aef87388ccbf0efc17cda5ca9be341ed55125ac5f115d33defe6941151d6d\": container with ID starting with 288aef87388ccbf0efc17cda5ca9be341ed55125ac5f115d33defe6941151d6d not found: ID does not exist" containerID="288aef87388ccbf0efc17cda5ca9be341ed55125ac5f115d33defe6941151d6d" Oct 13 18:31:02 crc kubenswrapper[4720]: I1013 18:31:02.056487 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"288aef87388ccbf0efc17cda5ca9be341ed55125ac5f115d33defe6941151d6d"} err="failed to get container status \"288aef87388ccbf0efc17cda5ca9be341ed55125ac5f115d33defe6941151d6d\": rpc error: code = NotFound desc = could not find container \"288aef87388ccbf0efc17cda5ca9be341ed55125ac5f115d33defe6941151d6d\": container with ID starting with 288aef87388ccbf0efc17cda5ca9be341ed55125ac5f115d33defe6941151d6d not found: ID does not exist" Oct 13 18:31:02 crc kubenswrapper[4720]: I1013 18:31:02.067731 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27f647fb-994a-4a9a-96a3-8fb1f285c006-utilities\") pod \"community-operators-9wrws\" (UID: \"27f647fb-994a-4a9a-96a3-8fb1f285c006\") " pod="openshift-marketplace/community-operators-9wrws" Oct 13 18:31:02 crc kubenswrapper[4720]: I1013 18:31:02.067799 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27f647fb-994a-4a9a-96a3-8fb1f285c006-catalog-content\") pod \"community-operators-9wrws\" (UID: \"27f647fb-994a-4a9a-96a3-8fb1f285c006\") " pod="openshift-marketplace/community-operators-9wrws" Oct 13 18:31:02 crc kubenswrapper[4720]: I1013 18:31:02.067967 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tk6zj\" (UniqueName: \"kubernetes.io/projected/27f647fb-994a-4a9a-96a3-8fb1f285c006-kube-api-access-tk6zj\") pod \"community-operators-9wrws\" (UID: 
\"27f647fb-994a-4a9a-96a3-8fb1f285c006\") " pod="openshift-marketplace/community-operators-9wrws" Oct 13 18:31:02 crc kubenswrapper[4720]: I1013 18:31:02.169643 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27f647fb-994a-4a9a-96a3-8fb1f285c006-utilities\") pod \"community-operators-9wrws\" (UID: \"27f647fb-994a-4a9a-96a3-8fb1f285c006\") " pod="openshift-marketplace/community-operators-9wrws" Oct 13 18:31:02 crc kubenswrapper[4720]: I1013 18:31:02.169736 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27f647fb-994a-4a9a-96a3-8fb1f285c006-catalog-content\") pod \"community-operators-9wrws\" (UID: \"27f647fb-994a-4a9a-96a3-8fb1f285c006\") " pod="openshift-marketplace/community-operators-9wrws" Oct 13 18:31:02 crc kubenswrapper[4720]: I1013 18:31:02.169848 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tk6zj\" (UniqueName: \"kubernetes.io/projected/27f647fb-994a-4a9a-96a3-8fb1f285c006-kube-api-access-tk6zj\") pod \"community-operators-9wrws\" (UID: \"27f647fb-994a-4a9a-96a3-8fb1f285c006\") " pod="openshift-marketplace/community-operators-9wrws" Oct 13 18:31:02 crc kubenswrapper[4720]: I1013 18:31:02.171068 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27f647fb-994a-4a9a-96a3-8fb1f285c006-utilities\") pod \"community-operators-9wrws\" (UID: \"27f647fb-994a-4a9a-96a3-8fb1f285c006\") " pod="openshift-marketplace/community-operators-9wrws" Oct 13 18:31:02 crc kubenswrapper[4720]: I1013 18:31:02.171370 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27f647fb-994a-4a9a-96a3-8fb1f285c006-catalog-content\") pod \"community-operators-9wrws\" (UID: \"27f647fb-994a-4a9a-96a3-8fb1f285c006\") " pod="openshift-marketplace/community-operators-9wrws" Oct 13 18:31:02 crc kubenswrapper[4720]: I1013 18:31:02.193085 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tk6zj\" (UniqueName: \"kubernetes.io/projected/27f647fb-994a-4a9a-96a3-8fb1f285c006-kube-api-access-tk6zj\") pod \"community-operators-9wrws\" (UID: \"27f647fb-994a-4a9a-96a3-8fb1f285c006\") " pod="openshift-marketplace/community-operators-9wrws" Oct 13 18:31:02 crc kubenswrapper[4720]: I1013 18:31:02.224322 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9wrws" Oct 13 18:31:02 crc kubenswrapper[4720]: I1013 18:31:02.792068 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9wrws"] Oct 13 18:31:03 crc kubenswrapper[4720]: I1013 18:31:03.185578 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd631f30-0a1b-4dde-833e-fac81aaa1368" path="/var/lib/kubelet/pods/dd631f30-0a1b-4dde-833e-fac81aaa1368/volumes" Oct 13 18:31:03 crc kubenswrapper[4720]: W1013 18:31:03.396950 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod27f647fb_994a_4a9a_96a3_8fb1f285c006.slice/crio-bb10c43ea26616f29b0c4fb392d5a0d72a1115e389e33d52563a411c7c54e944 WatchSource:0}: Error finding container bb10c43ea26616f29b0c4fb392d5a0d72a1115e389e33d52563a411c7c54e944: Status 404 returned error can't find the container with id bb10c43ea26616f29b0c4fb392d5a0d72a1115e389e33d52563a411c7c54e944 Oct 13 18:31:04 crc kubenswrapper[4720]: I1013 18:31:04.002521 4720 generic.go:334] "Generic (PLEG): container finished" podID="27f647fb-994a-4a9a-96a3-8fb1f285c006" containerID="f34b118265847fc567dfb3299b2e93554b2aa439b8e992830403934ef0e5130d" exitCode=0 Oct 13 18:31:04 crc kubenswrapper[4720]: I1013 18:31:04.002876 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9wrws" event={"ID":"27f647fb-994a-4a9a-96a3-8fb1f285c006","Type":"ContainerDied","Data":"f34b118265847fc567dfb3299b2e93554b2aa439b8e992830403934ef0e5130d"} Oct 13 18:31:04 crc kubenswrapper[4720]: I1013 18:31:04.002918 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9wrws" event={"ID":"27f647fb-994a-4a9a-96a3-8fb1f285c006","Type":"ContainerStarted","Data":"bb10c43ea26616f29b0c4fb392d5a0d72a1115e389e33d52563a411c7c54e944"} Oct 13 18:31:05 crc kubenswrapper[4720]: I1013 18:31:05.011351 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9wrws" event={"ID":"27f647fb-994a-4a9a-96a3-8fb1f285c006","Type":"ContainerStarted","Data":"3bd2ad5052e0c121c4ec275cd0136fba0ebdc2f06fc02dac563ec443bc9b1099"} Oct 13 18:31:06 crc kubenswrapper[4720]: I1013 18:31:06.028337 4720 generic.go:334] "Generic (PLEG): container finished" podID="27f647fb-994a-4a9a-96a3-8fb1f285c006" containerID="3bd2ad5052e0c121c4ec275cd0136fba0ebdc2f06fc02dac563ec443bc9b1099" exitCode=0 Oct 13 18:31:06 crc kubenswrapper[4720]: I1013 18:31:06.028469 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9wrws" event={"ID":"27f647fb-994a-4a9a-96a3-8fb1f285c006","Type":"ContainerDied","Data":"3bd2ad5052e0c121c4ec275cd0136fba0ebdc2f06fc02dac563ec443bc9b1099"} Oct 13 18:31:06 crc kubenswrapper[4720]: I1013 18:31:06.299428 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-j5f4h"] Oct 13 18:31:06 crc kubenswrapper[4720]: I1013 18:31:06.303168 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-j5f4h" Oct 13 18:31:06 crc kubenswrapper[4720]: I1013 18:31:06.315680 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j5f4h"] Oct 13 18:31:06 crc kubenswrapper[4720]: I1013 18:31:06.354474 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c99042a-9438-4469-9df9-7c91c96c7568-catalog-content\") pod \"redhat-operators-j5f4h\" (UID: \"8c99042a-9438-4469-9df9-7c91c96c7568\") " pod="openshift-marketplace/redhat-operators-j5f4h" Oct 13 18:31:06 crc kubenswrapper[4720]: I1013 18:31:06.354645 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c99042a-9438-4469-9df9-7c91c96c7568-utilities\") pod \"redhat-operators-j5f4h\" (UID: \"8c99042a-9438-4469-9df9-7c91c96c7568\") " pod="openshift-marketplace/redhat-operators-j5f4h" Oct 13 18:31:06 crc kubenswrapper[4720]: I1013 18:31:06.354695 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmq8t\" (UniqueName: \"kubernetes.io/projected/8c99042a-9438-4469-9df9-7c91c96c7568-kube-api-access-pmq8t\") pod \"redhat-operators-j5f4h\" (UID: \"8c99042a-9438-4469-9df9-7c91c96c7568\") " pod="openshift-marketplace/redhat-operators-j5f4h" Oct 13 18:31:06 crc kubenswrapper[4720]: I1013 18:31:06.456143 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c99042a-9438-4469-9df9-7c91c96c7568-utilities\") pod \"redhat-operators-j5f4h\" (UID: \"8c99042a-9438-4469-9df9-7c91c96c7568\") " pod="openshift-marketplace/redhat-operators-j5f4h" Oct 13 18:31:06 crc kubenswrapper[4720]: I1013 18:31:06.456216 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmq8t\" (UniqueName: \"kubernetes.io/projected/8c99042a-9438-4469-9df9-7c91c96c7568-kube-api-access-pmq8t\") pod \"redhat-operators-j5f4h\" (UID: \"8c99042a-9438-4469-9df9-7c91c96c7568\") " pod="openshift-marketplace/redhat-operators-j5f4h" Oct 13 18:31:06 crc kubenswrapper[4720]: I1013 18:31:06.456333 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c99042a-9438-4469-9df9-7c91c96c7568-catalog-content\") pod \"redhat-operators-j5f4h\" (UID: \"8c99042a-9438-4469-9df9-7c91c96c7568\") " pod="openshift-marketplace/redhat-operators-j5f4h" Oct 13 18:31:06 crc kubenswrapper[4720]: I1013 18:31:06.456763 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c99042a-9438-4469-9df9-7c91c96c7568-utilities\") pod \"redhat-operators-j5f4h\" (UID: \"8c99042a-9438-4469-9df9-7c91c96c7568\") " pod="openshift-marketplace/redhat-operators-j5f4h" Oct 13 18:31:06 crc kubenswrapper[4720]: I1013 18:31:06.456853 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c99042a-9438-4469-9df9-7c91c96c7568-catalog-content\") pod \"redhat-operators-j5f4h\" (UID: \"8c99042a-9438-4469-9df9-7c91c96c7568\") " pod="openshift-marketplace/redhat-operators-j5f4h" Oct 13 18:31:06 crc kubenswrapper[4720]: I1013 18:31:06.489340 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-pmq8t\" (UniqueName: \"kubernetes.io/projected/8c99042a-9438-4469-9df9-7c91c96c7568-kube-api-access-pmq8t\") pod \"redhat-operators-j5f4h\" (UID: \"8c99042a-9438-4469-9df9-7c91c96c7568\") " pod="openshift-marketplace/redhat-operators-j5f4h" Oct 13 18:31:06 crc kubenswrapper[4720]: I1013 18:31:06.674934 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j5f4h" Oct 13 18:31:07 crc kubenswrapper[4720]: I1013 18:31:07.037875 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9wrws" event={"ID":"27f647fb-994a-4a9a-96a3-8fb1f285c006","Type":"ContainerStarted","Data":"24cdfbc09740b3a2aae5aeed4e99d83696dff3fe436bbc2fd1adb5efc4dbd504"} Oct 13 18:31:07 crc kubenswrapper[4720]: I1013 18:31:07.054573 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9wrws" podStartSLOduration=3.601545533 podStartE2EDuration="6.054556492s" podCreationTimestamp="2025-10-13 18:31:01 +0000 UTC" firstStartedPulling="2025-10-13 18:31:04.01616295 +0000 UTC m=+4009.473413102" lastFinishedPulling="2025-10-13 18:31:06.469173919 +0000 UTC m=+4011.926424061" observedRunningTime="2025-10-13 18:31:07.05370321 +0000 UTC m=+4012.510953342" watchObservedRunningTime="2025-10-13 18:31:07.054556492 +0000 UTC m=+4012.511806624" Oct 13 18:31:07 crc kubenswrapper[4720]: I1013 18:31:07.145652 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j5f4h"] Oct 13 18:31:08 crc kubenswrapper[4720]: I1013 18:31:08.051278 4720 generic.go:334] "Generic (PLEG): container finished" podID="8c99042a-9438-4469-9df9-7c91c96c7568" containerID="c77fe40affe14dbeb7468c293266be64c79c0acad6b34ec12e0a7acfe3df7ad5" exitCode=0 Oct 13 18:31:08 crc kubenswrapper[4720]: I1013 18:31:08.053015 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j5f4h" event={"ID":"8c99042a-9438-4469-9df9-7c91c96c7568","Type":"ContainerDied","Data":"c77fe40affe14dbeb7468c293266be64c79c0acad6b34ec12e0a7acfe3df7ad5"} Oct 13 18:31:08 crc kubenswrapper[4720]: I1013 18:31:08.054347 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j5f4h" event={"ID":"8c99042a-9438-4469-9df9-7c91c96c7568","Type":"ContainerStarted","Data":"5ec6768ca5cf6297ef5ceeaf8ff1aabdc1420b167f0a36348c0a5db5222f64e8"} Oct 13 18:31:10 crc kubenswrapper[4720]: I1013 18:31:10.075570 4720 generic.go:334] "Generic (PLEG): container finished" podID="8c99042a-9438-4469-9df9-7c91c96c7568" containerID="ed56956c50cb7b520f46cb78919dd007cf3d14679bf42485156364676c184542" exitCode=0 Oct 13 18:31:10 crc kubenswrapper[4720]: I1013 18:31:10.075634 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j5f4h" event={"ID":"8c99042a-9438-4469-9df9-7c91c96c7568","Type":"ContainerDied","Data":"ed56956c50cb7b520f46cb78919dd007cf3d14679bf42485156364676c184542"} Oct 13 18:31:11 crc kubenswrapper[4720]: I1013 18:31:11.113336 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j5f4h" event={"ID":"8c99042a-9438-4469-9df9-7c91c96c7568","Type":"ContainerStarted","Data":"fe59d51728023961474d62fa3a2b92e0c0c6705e4434e4fd321022e5b3ecfe6d"} Oct 13 18:31:11 crc kubenswrapper[4720]: I1013 18:31:11.137899 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-j5f4h" 
Oct 13 18:31:11 crc kubenswrapper[4720]: I1013 18:31:11.137899 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-j5f4h" podStartSLOduration=2.385417891 podStartE2EDuration="5.137880636s" podCreationTimestamp="2025-10-13 18:31:06 +0000 UTC" firstStartedPulling="2025-10-13 18:31:08.055835945 +0000 UTC m=+4013.513086077" lastFinishedPulling="2025-10-13 18:31:10.80829869 +0000 UTC m=+4016.265548822" observedRunningTime="2025-10-13 18:31:11.135778932 +0000 UTC m=+4016.593029064" watchObservedRunningTime="2025-10-13 18:31:11.137880636 +0000 UTC m=+4016.595130778" Oct 13 18:31:12 crc kubenswrapper[4720]: I1013 18:31:12.224645 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9wrws" Oct 13 18:31:12 crc kubenswrapper[4720]: I1013 18:31:12.224989 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9wrws" Oct 13 18:31:12 crc kubenswrapper[4720]: I1013 18:31:12.283674 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9wrws" Oct 13 18:31:13 crc kubenswrapper[4720]: I1013 18:31:13.059095 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ss4c8_d5bc3e7b-d845-48f4-9387-94904ed3b983/util/0.log" Oct 13 18:31:13 crc kubenswrapper[4720]: I1013 18:31:13.977690 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9wrws" Oct 13 18:31:14 crc kubenswrapper[4720]: I1013 18:31:14.027719 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ss4c8_d5bc3e7b-d845-48f4-9387-94904ed3b983/util/0.log" Oct 13 18:31:14 crc kubenswrapper[4720]: I1013 18:31:14.099057 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ss4c8_d5bc3e7b-d845-48f4-9387-94904ed3b983/pull/0.log" Oct 13 18:31:14 crc kubenswrapper[4720]: I1013 18:31:14.158273 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ss4c8_d5bc3e7b-d845-48f4-9387-94904ed3b983/pull/0.log" Oct 13 18:31:14 crc kubenswrapper[4720]: I1013 18:31:14.310568 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ss4c8_d5bc3e7b-d845-48f4-9387-94904ed3b983/util/0.log" Oct 13 18:31:14 crc kubenswrapper[4720]: I1013 18:31:14.365152 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ss4c8_d5bc3e7b-d845-48f4-9387-94904ed3b983/pull/0.log" Oct 13 18:31:14 crc kubenswrapper[4720]: I1013 18:31:14.378836 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ss4c8_d5bc3e7b-d845-48f4-9387-94904ed3b983/extract/0.log" Oct 13 18:31:14 crc kubenswrapper[4720]: I1013 18:31:14.503609 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wc648_ea62eeb5-7f86-4555-ba88-fb04f9986df6/extract-utilities/0.log" Oct 13 18:31:14 crc kubenswrapper[4720]: I1013 18:31:14.691019 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wc648_ea62eeb5-7f86-4555-ba88-fb04f9986df6/extract-utilities/0.log" Oct 13 18:31:14 crc 
kubenswrapper[4720]: I1013 18:31:14.696957 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wc648_ea62eeb5-7f86-4555-ba88-fb04f9986df6/extract-content/0.log" Oct 13 18:31:14 crc kubenswrapper[4720]: I1013 18:31:14.699460 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wc648_ea62eeb5-7f86-4555-ba88-fb04f9986df6/extract-content/0.log" Oct 13 18:31:14 crc kubenswrapper[4720]: I1013 18:31:14.892172 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wc648_ea62eeb5-7f86-4555-ba88-fb04f9986df6/extract-utilities/0.log" Oct 13 18:31:15 crc kubenswrapper[4720]: I1013 18:31:15.002430 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wc648_ea62eeb5-7f86-4555-ba88-fb04f9986df6/extract-content/0.log" Oct 13 18:31:15 crc kubenswrapper[4720]: I1013 18:31:15.088311 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9wrws"] Oct 13 18:31:15 crc kubenswrapper[4720]: I1013 18:31:15.146948 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9wrws" podUID="27f647fb-994a-4a9a-96a3-8fb1f285c006" containerName="registry-server" containerID="cri-o://24cdfbc09740b3a2aae5aeed4e99d83696dff3fe436bbc2fd1adb5efc4dbd504" gracePeriod=2 Oct 13 18:31:15 crc kubenswrapper[4720]: I1013 18:31:15.177060 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9wrws_27f647fb-994a-4a9a-96a3-8fb1f285c006/extract-utilities/0.log" Oct 13 18:31:15 crc kubenswrapper[4720]: I1013 18:31:15.212234 4720 patch_prober.go:28] interesting pod/machine-config-daemon-htwnl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 18:31:15 crc kubenswrapper[4720]: I1013 18:31:15.212281 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 18:31:15 crc kubenswrapper[4720]: I1013 18:31:15.237688 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wc648_ea62eeb5-7f86-4555-ba88-fb04f9986df6/registry-server/0.log" Oct 13 18:31:15 crc kubenswrapper[4720]: I1013 18:31:15.408417 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9wrws_27f647fb-994a-4a9a-96a3-8fb1f285c006/extract-content/0.log" Oct 13 18:31:15 crc kubenswrapper[4720]: I1013 18:31:15.429745 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9wrws_27f647fb-994a-4a9a-96a3-8fb1f285c006/extract-utilities/0.log" Oct 13 18:31:15 crc kubenswrapper[4720]: I1013 18:31:15.447439 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9wrws_27f647fb-994a-4a9a-96a3-8fb1f285c006/extract-content/0.log" Oct 13 18:31:15 crc kubenswrapper[4720]: I1013 18:31:15.626329 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9wrws" Oct 13 18:31:15 crc kubenswrapper[4720]: I1013 18:31:15.664280 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9wrws_27f647fb-994a-4a9a-96a3-8fb1f285c006/extract-content/0.log" Oct 13 18:31:15 crc kubenswrapper[4720]: I1013 18:31:15.692724 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9wrws_27f647fb-994a-4a9a-96a3-8fb1f285c006/extract-utilities/0.log" Oct 13 18:31:15 crc kubenswrapper[4720]: I1013 18:31:15.724303 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9wrws_27f647fb-994a-4a9a-96a3-8fb1f285c006/registry-server/0.log" Oct 13 18:31:15 crc kubenswrapper[4720]: I1013 18:31:15.744176 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27f647fb-994a-4a9a-96a3-8fb1f285c006-utilities\") pod \"27f647fb-994a-4a9a-96a3-8fb1f285c006\" (UID: \"27f647fb-994a-4a9a-96a3-8fb1f285c006\") " Oct 13 18:31:15 crc kubenswrapper[4720]: I1013 18:31:15.744263 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk6zj\" (UniqueName: \"kubernetes.io/projected/27f647fb-994a-4a9a-96a3-8fb1f285c006-kube-api-access-tk6zj\") pod \"27f647fb-994a-4a9a-96a3-8fb1f285c006\" (UID: \"27f647fb-994a-4a9a-96a3-8fb1f285c006\") " Oct 13 18:31:15 crc kubenswrapper[4720]: I1013 18:31:15.744306 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27f647fb-994a-4a9a-96a3-8fb1f285c006-catalog-content\") pod \"27f647fb-994a-4a9a-96a3-8fb1f285c006\" (UID: \"27f647fb-994a-4a9a-96a3-8fb1f285c006\") " Oct 13 18:31:15 crc kubenswrapper[4720]: I1013 18:31:15.744996 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27f647fb-994a-4a9a-96a3-8fb1f285c006-utilities" (OuterVolumeSpecName: "utilities") pod "27f647fb-994a-4a9a-96a3-8fb1f285c006" (UID: "27f647fb-994a-4a9a-96a3-8fb1f285c006"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 18:31:15 crc kubenswrapper[4720]: I1013 18:31:15.757876 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27f647fb-994a-4a9a-96a3-8fb1f285c006-kube-api-access-tk6zj" (OuterVolumeSpecName: "kube-api-access-tk6zj") pod "27f647fb-994a-4a9a-96a3-8fb1f285c006" (UID: "27f647fb-994a-4a9a-96a3-8fb1f285c006"). InnerVolumeSpecName "kube-api-access-tk6zj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 18:31:15 crc kubenswrapper[4720]: I1013 18:31:15.791944 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27f647fb-994a-4a9a-96a3-8fb1f285c006-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "27f647fb-994a-4a9a-96a3-8fb1f285c006" (UID: "27f647fb-994a-4a9a-96a3-8fb1f285c006"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 18:31:15 crc kubenswrapper[4720]: I1013 18:31:15.846163 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27f647fb-994a-4a9a-96a3-8fb1f285c006-utilities\") on node \"crc\" DevicePath \"\"" Oct 13 18:31:15 crc kubenswrapper[4720]: I1013 18:31:15.846218 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk6zj\" (UniqueName: \"kubernetes.io/projected/27f647fb-994a-4a9a-96a3-8fb1f285c006-kube-api-access-tk6zj\") on node \"crc\" DevicePath \"\"" Oct 13 18:31:15 crc kubenswrapper[4720]: I1013 18:31:15.846230 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27f647fb-994a-4a9a-96a3-8fb1f285c006-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 13 18:31:15 crc kubenswrapper[4720]: I1013 18:31:15.886338 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rj2jd_7af8d26b-d10a-4be8-8773-03204f461fe3/extract-utilities/0.log" Oct 13 18:31:16 crc kubenswrapper[4720]: I1013 18:31:16.103912 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rj2jd_7af8d26b-d10a-4be8-8773-03204f461fe3/extract-content/0.log" Oct 13 18:31:16 crc kubenswrapper[4720]: I1013 18:31:16.105465 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rj2jd_7af8d26b-d10a-4be8-8773-03204f461fe3/extract-content/0.log" Oct 13 18:31:16 crc kubenswrapper[4720]: I1013 18:31:16.123951 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rj2jd_7af8d26b-d10a-4be8-8773-03204f461fe3/extract-utilities/0.log" Oct 13 18:31:16 crc kubenswrapper[4720]: I1013 18:31:16.159800 4720 generic.go:334] "Generic (PLEG): container finished" podID="27f647fb-994a-4a9a-96a3-8fb1f285c006" containerID="24cdfbc09740b3a2aae5aeed4e99d83696dff3fe436bbc2fd1adb5efc4dbd504" exitCode=0 Oct 13 18:31:16 crc kubenswrapper[4720]: I1013 18:31:16.159856 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9wrws" event={"ID":"27f647fb-994a-4a9a-96a3-8fb1f285c006","Type":"ContainerDied","Data":"24cdfbc09740b3a2aae5aeed4e99d83696dff3fe436bbc2fd1adb5efc4dbd504"} Oct 13 18:31:16 crc kubenswrapper[4720]: I1013 18:31:16.159884 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9wrws" event={"ID":"27f647fb-994a-4a9a-96a3-8fb1f285c006","Type":"ContainerDied","Data":"bb10c43ea26616f29b0c4fb392d5a0d72a1115e389e33d52563a411c7c54e944"} Oct 13 18:31:16 crc kubenswrapper[4720]: I1013 18:31:16.159902 4720 scope.go:117] "RemoveContainer" containerID="24cdfbc09740b3a2aae5aeed4e99d83696dff3fe436bbc2fd1adb5efc4dbd504" Oct 13 18:31:16 crc kubenswrapper[4720]: I1013 18:31:16.160063 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9wrws" Oct 13 18:31:16 crc kubenswrapper[4720]: I1013 18:31:16.180826 4720 scope.go:117] "RemoveContainer" containerID="3bd2ad5052e0c121c4ec275cd0136fba0ebdc2f06fc02dac563ec443bc9b1099" Oct 13 18:31:16 crc kubenswrapper[4720]: I1013 18:31:16.195441 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9wrws"] Oct 13 18:31:16 crc kubenswrapper[4720]: I1013 18:31:16.207651 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9wrws"] Oct 13 18:31:16 crc kubenswrapper[4720]: I1013 18:31:16.215972 4720 scope.go:117] "RemoveContainer" containerID="f34b118265847fc567dfb3299b2e93554b2aa439b8e992830403934ef0e5130d" Oct 13 18:31:16 crc kubenswrapper[4720]: I1013 18:31:16.242833 4720 scope.go:117] "RemoveContainer" containerID="24cdfbc09740b3a2aae5aeed4e99d83696dff3fe436bbc2fd1adb5efc4dbd504" Oct 13 18:31:16 crc kubenswrapper[4720]: E1013 18:31:16.243298 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24cdfbc09740b3a2aae5aeed4e99d83696dff3fe436bbc2fd1adb5efc4dbd504\": container with ID starting with 24cdfbc09740b3a2aae5aeed4e99d83696dff3fe436bbc2fd1adb5efc4dbd504 not found: ID does not exist" containerID="24cdfbc09740b3a2aae5aeed4e99d83696dff3fe436bbc2fd1adb5efc4dbd504" Oct 13 18:31:16 crc kubenswrapper[4720]: I1013 18:31:16.243327 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24cdfbc09740b3a2aae5aeed4e99d83696dff3fe436bbc2fd1adb5efc4dbd504"} err="failed to get container status \"24cdfbc09740b3a2aae5aeed4e99d83696dff3fe436bbc2fd1adb5efc4dbd504\": rpc error: code = NotFound desc = could not find container \"24cdfbc09740b3a2aae5aeed4e99d83696dff3fe436bbc2fd1adb5efc4dbd504\": container with ID starting with 24cdfbc09740b3a2aae5aeed4e99d83696dff3fe436bbc2fd1adb5efc4dbd504 not found: ID does not exist" Oct 13 18:31:16 crc kubenswrapper[4720]: I1013 18:31:16.243346 4720 scope.go:117] "RemoveContainer" containerID="3bd2ad5052e0c121c4ec275cd0136fba0ebdc2f06fc02dac563ec443bc9b1099" Oct 13 18:31:16 crc kubenswrapper[4720]: E1013 18:31:16.243668 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3bd2ad5052e0c121c4ec275cd0136fba0ebdc2f06fc02dac563ec443bc9b1099\": container with ID starting with 3bd2ad5052e0c121c4ec275cd0136fba0ebdc2f06fc02dac563ec443bc9b1099 not found: ID does not exist" containerID="3bd2ad5052e0c121c4ec275cd0136fba0ebdc2f06fc02dac563ec443bc9b1099" Oct 13 18:31:16 crc kubenswrapper[4720]: I1013 18:31:16.243700 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bd2ad5052e0c121c4ec275cd0136fba0ebdc2f06fc02dac563ec443bc9b1099"} err="failed to get container status \"3bd2ad5052e0c121c4ec275cd0136fba0ebdc2f06fc02dac563ec443bc9b1099\": rpc error: code = NotFound desc = could not find container \"3bd2ad5052e0c121c4ec275cd0136fba0ebdc2f06fc02dac563ec443bc9b1099\": container with ID starting with 3bd2ad5052e0c121c4ec275cd0136fba0ebdc2f06fc02dac563ec443bc9b1099 not found: ID does not exist" Oct 13 18:31:16 crc kubenswrapper[4720]: I1013 18:31:16.243713 4720 scope.go:117] "RemoveContainer" containerID="f34b118265847fc567dfb3299b2e93554b2aa439b8e992830403934ef0e5130d" Oct 13 18:31:16 crc kubenswrapper[4720]: E1013 18:31:16.243910 4720 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"f34b118265847fc567dfb3299b2e93554b2aa439b8e992830403934ef0e5130d\": container with ID starting with f34b118265847fc567dfb3299b2e93554b2aa439b8e992830403934ef0e5130d not found: ID does not exist" containerID="f34b118265847fc567dfb3299b2e93554b2aa439b8e992830403934ef0e5130d" Oct 13 18:31:16 crc kubenswrapper[4720]: I1013 18:31:16.243929 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f34b118265847fc567dfb3299b2e93554b2aa439b8e992830403934ef0e5130d"} err="failed to get container status \"f34b118265847fc567dfb3299b2e93554b2aa439b8e992830403934ef0e5130d\": rpc error: code = NotFound desc = could not find container \"f34b118265847fc567dfb3299b2e93554b2aa439b8e992830403934ef0e5130d\": container with ID starting with f34b118265847fc567dfb3299b2e93554b2aa439b8e992830403934ef0e5130d not found: ID does not exist" Oct 13 18:31:16 crc kubenswrapper[4720]: I1013 18:31:16.308690 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rj2jd_7af8d26b-d10a-4be8-8773-03204f461fe3/extract-utilities/0.log" Oct 13 18:31:16 crc kubenswrapper[4720]: I1013 18:31:16.323910 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rj2jd_7af8d26b-d10a-4be8-8773-03204f461fe3/extract-content/0.log" Oct 13 18:31:16 crc kubenswrapper[4720]: I1013 18:31:16.399645 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cfj2bw_5042638b-5850-4492-a98d-62479bd6624b/util/0.log" Oct 13 18:31:16 crc kubenswrapper[4720]: I1013 18:31:16.677516 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-j5f4h" Oct 13 18:31:16 crc kubenswrapper[4720]: I1013 18:31:16.677808 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-j5f4h" Oct 13 18:31:16 crc kubenswrapper[4720]: I1013 18:31:16.696313 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cfj2bw_5042638b-5850-4492-a98d-62479bd6624b/util/0.log" Oct 13 18:31:16 crc kubenswrapper[4720]: I1013 18:31:16.698429 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cfj2bw_5042638b-5850-4492-a98d-62479bd6624b/pull/0.log" Oct 13 18:31:16 crc kubenswrapper[4720]: I1013 18:31:16.710265 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cfj2bw_5042638b-5850-4492-a98d-62479bd6624b/pull/0.log" Oct 13 18:31:16 crc kubenswrapper[4720]: I1013 18:31:16.731564 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-j5f4h" Oct 13 18:31:16 crc kubenswrapper[4720]: I1013 18:31:16.853101 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rj2jd_7af8d26b-d10a-4be8-8773-03204f461fe3/registry-server/0.log" Oct 13 18:31:16 crc kubenswrapper[4720]: I1013 18:31:16.882103 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cfj2bw_5042638b-5850-4492-a98d-62479bd6624b/pull/0.log" Oct 13 18:31:16 crc 
Oct 13 18:31:16 crc kubenswrapper[4720]: I1013 18:31:16.897923 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cfj2bw_5042638b-5850-4492-a98d-62479bd6624b/util/0.log" Oct 13 18:31:16 crc kubenswrapper[4720]: I1013 18:31:16.904227 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cfj2bw_5042638b-5850-4492-a98d-62479bd6624b/extract/0.log" Oct 13 18:31:17 crc kubenswrapper[4720]: I1013 18:31:17.049091 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-wgslm_8524b73b-8e30-4e35-bc36-1b3c9e911ad0/marketplace-operator/0.log" Oct 13 18:31:17 crc kubenswrapper[4720]: I1013 18:31:17.090408 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-69hwq_dab7d1c1-75af-4ffa-a15b-2cc516acfabf/extract-utilities/0.log" Oct 13 18:31:17 crc kubenswrapper[4720]: I1013 18:31:17.178030 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27f647fb-994a-4a9a-96a3-8fb1f285c006" path="/var/lib/kubelet/pods/27f647fb-994a-4a9a-96a3-8fb1f285c006/volumes" Oct 13 18:31:17 crc kubenswrapper[4720]: I1013 18:31:17.247540 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-j5f4h" Oct 13 18:31:17 crc kubenswrapper[4720]: I1013 18:31:17.256379 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-69hwq_dab7d1c1-75af-4ffa-a15b-2cc516acfabf/extract-utilities/0.log" Oct 13 18:31:17 crc kubenswrapper[4720]: I1013 18:31:17.271618 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-69hwq_dab7d1c1-75af-4ffa-a15b-2cc516acfabf/extract-content/0.log" Oct 13 18:31:17 crc kubenswrapper[4720]: I1013 18:31:17.331942 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-69hwq_dab7d1c1-75af-4ffa-a15b-2cc516acfabf/extract-content/0.log" Oct 13 18:31:17 crc kubenswrapper[4720]: I1013 18:31:17.454149 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-69hwq_dab7d1c1-75af-4ffa-a15b-2cc516acfabf/extract-utilities/0.log" Oct 13 18:31:17 crc kubenswrapper[4720]: I1013 18:31:17.508635 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-69hwq_dab7d1c1-75af-4ffa-a15b-2cc516acfabf/extract-content/0.log" Oct 13 18:31:17 crc kubenswrapper[4720]: I1013 18:31:17.541131 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-j5f4h_8c99042a-9438-4469-9df9-7c91c96c7568/extract-utilities/0.log" Oct 13 18:31:17 crc kubenswrapper[4720]: I1013 18:31:17.682952 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-69hwq_dab7d1c1-75af-4ffa-a15b-2cc516acfabf/registry-server/0.log" Oct 13 18:31:17 crc kubenswrapper[4720]: I1013 18:31:17.737753 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-j5f4h_8c99042a-9438-4469-9df9-7c91c96c7568/extract-content/0.log" Oct 13 18:31:17 crc kubenswrapper[4720]: I1013 18:31:17.750944 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-j5f4h_8c99042a-9438-4469-9df9-7c91c96c7568/extract-utilities/0.log" Oct 13 
18:31:17 crc kubenswrapper[4720]: I1013 18:31:17.773760 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-j5f4h_8c99042a-9438-4469-9df9-7c91c96c7568/extract-content/0.log" Oct 13 18:31:17 crc kubenswrapper[4720]: I1013 18:31:17.935435 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-j5f4h_8c99042a-9438-4469-9df9-7c91c96c7568/extract-content/0.log" Oct 13 18:31:17 crc kubenswrapper[4720]: I1013 18:31:17.937569 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-j5f4h_8c99042a-9438-4469-9df9-7c91c96c7568/registry-server/0.log" Oct 13 18:31:17 crc kubenswrapper[4720]: I1013 18:31:17.957809 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-j5f4h_8c99042a-9438-4469-9df9-7c91c96c7568/extract-utilities/0.log" Oct 13 18:31:17 crc kubenswrapper[4720]: I1013 18:31:17.964455 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mdzhd_54894eb4-2aeb-4c93-b8d7-0e22213452f5/extract-utilities/0.log" Oct 13 18:31:18 crc kubenswrapper[4720]: I1013 18:31:18.081211 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mdzhd_54894eb4-2aeb-4c93-b8d7-0e22213452f5/extract-utilities/0.log" Oct 13 18:31:18 crc kubenswrapper[4720]: I1013 18:31:18.135620 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mdzhd_54894eb4-2aeb-4c93-b8d7-0e22213452f5/extract-content/0.log" Oct 13 18:31:18 crc kubenswrapper[4720]: I1013 18:31:18.141917 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mdzhd_54894eb4-2aeb-4c93-b8d7-0e22213452f5/extract-content/0.log" Oct 13 18:31:18 crc kubenswrapper[4720]: I1013 18:31:18.303140 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mdzhd_54894eb4-2aeb-4c93-b8d7-0e22213452f5/extract-content/0.log" Oct 13 18:31:18 crc kubenswrapper[4720]: I1013 18:31:18.305051 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mdzhd_54894eb4-2aeb-4c93-b8d7-0e22213452f5/extract-utilities/0.log" Oct 13 18:31:18 crc kubenswrapper[4720]: E1013 18:31:18.356700 4720 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod27f647fb_994a_4a9a_96a3_8fb1f285c006.slice/crio-24cdfbc09740b3a2aae5aeed4e99d83696dff3fe436bbc2fd1adb5efc4dbd504.scope\": RecentStats: unable to find data in memory cache]" Oct 13 18:31:18 crc kubenswrapper[4720]: I1013 18:31:18.836166 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mdzhd_54894eb4-2aeb-4c93-b8d7-0e22213452f5/registry-server/0.log" Oct 13 18:31:19 crc kubenswrapper[4720]: I1013 18:31:19.090018 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-j5f4h"] Oct 13 18:31:19 crc kubenswrapper[4720]: I1013 18:31:19.189262 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-j5f4h" podUID="8c99042a-9438-4469-9df9-7c91c96c7568" containerName="registry-server" containerID="cri-o://fe59d51728023961474d62fa3a2b92e0c0c6705e4434e4fd321022e5b3ecfe6d" gracePeriod=2 Oct 13 18:31:19 crc kubenswrapper[4720]: I1013 
18:31:19.656699 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j5f4h" Oct 13 18:31:19 crc kubenswrapper[4720]: I1013 18:31:19.711764 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmq8t\" (UniqueName: \"kubernetes.io/projected/8c99042a-9438-4469-9df9-7c91c96c7568-kube-api-access-pmq8t\") pod \"8c99042a-9438-4469-9df9-7c91c96c7568\" (UID: \"8c99042a-9438-4469-9df9-7c91c96c7568\") " Oct 13 18:31:19 crc kubenswrapper[4720]: I1013 18:31:19.711818 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c99042a-9438-4469-9df9-7c91c96c7568-utilities\") pod \"8c99042a-9438-4469-9df9-7c91c96c7568\" (UID: \"8c99042a-9438-4469-9df9-7c91c96c7568\") " Oct 13 18:31:19 crc kubenswrapper[4720]: I1013 18:31:19.711984 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c99042a-9438-4469-9df9-7c91c96c7568-catalog-content\") pod \"8c99042a-9438-4469-9df9-7c91c96c7568\" (UID: \"8c99042a-9438-4469-9df9-7c91c96c7568\") " Oct 13 18:31:19 crc kubenswrapper[4720]: I1013 18:31:19.712456 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c99042a-9438-4469-9df9-7c91c96c7568-utilities" (OuterVolumeSpecName: "utilities") pod "8c99042a-9438-4469-9df9-7c91c96c7568" (UID: "8c99042a-9438-4469-9df9-7c91c96c7568"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 18:31:19 crc kubenswrapper[4720]: I1013 18:31:19.722917 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c99042a-9438-4469-9df9-7c91c96c7568-kube-api-access-pmq8t" (OuterVolumeSpecName: "kube-api-access-pmq8t") pod "8c99042a-9438-4469-9df9-7c91c96c7568" (UID: "8c99042a-9438-4469-9df9-7c91c96c7568"). InnerVolumeSpecName "kube-api-access-pmq8t". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 18:31:19 crc kubenswrapper[4720]: I1013 18:31:19.798238 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c99042a-9438-4469-9df9-7c91c96c7568-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8c99042a-9438-4469-9df9-7c91c96c7568" (UID: "8c99042a-9438-4469-9df9-7c91c96c7568"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 18:31:19 crc kubenswrapper[4720]: I1013 18:31:19.814105 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c99042a-9438-4469-9df9-7c91c96c7568-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 13 18:31:19 crc kubenswrapper[4720]: I1013 18:31:19.814139 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmq8t\" (UniqueName: \"kubernetes.io/projected/8c99042a-9438-4469-9df9-7c91c96c7568-kube-api-access-pmq8t\") on node \"crc\" DevicePath \"\"" Oct 13 18:31:19 crc kubenswrapper[4720]: I1013 18:31:19.814151 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c99042a-9438-4469-9df9-7c91c96c7568-utilities\") on node \"crc\" DevicePath \"\"" Oct 13 18:31:20 crc kubenswrapper[4720]: I1013 18:31:20.202063 4720 generic.go:334] "Generic (PLEG): container finished" podID="8c99042a-9438-4469-9df9-7c91c96c7568" containerID="fe59d51728023961474d62fa3a2b92e0c0c6705e4434e4fd321022e5b3ecfe6d" exitCode=0 Oct 13 18:31:20 crc kubenswrapper[4720]: I1013 18:31:20.202107 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j5f4h" event={"ID":"8c99042a-9438-4469-9df9-7c91c96c7568","Type":"ContainerDied","Data":"fe59d51728023961474d62fa3a2b92e0c0c6705e4434e4fd321022e5b3ecfe6d"} Oct 13 18:31:20 crc kubenswrapper[4720]: I1013 18:31:20.202136 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j5f4h" event={"ID":"8c99042a-9438-4469-9df9-7c91c96c7568","Type":"ContainerDied","Data":"5ec6768ca5cf6297ef5ceeaf8ff1aabdc1420b167f0a36348c0a5db5222f64e8"} Oct 13 18:31:20 crc kubenswrapper[4720]: I1013 18:31:20.202156 4720 scope.go:117] "RemoveContainer" containerID="fe59d51728023961474d62fa3a2b92e0c0c6705e4434e4fd321022e5b3ecfe6d" Oct 13 18:31:20 crc kubenswrapper[4720]: I1013 18:31:20.202158 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-j5f4h" Oct 13 18:31:20 crc kubenswrapper[4720]: I1013 18:31:20.232085 4720 scope.go:117] "RemoveContainer" containerID="ed56956c50cb7b520f46cb78919dd007cf3d14679bf42485156364676c184542" Oct 13 18:31:20 crc kubenswrapper[4720]: I1013 18:31:20.247655 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-j5f4h"] Oct 13 18:31:20 crc kubenswrapper[4720]: I1013 18:31:20.261662 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-j5f4h"] Oct 13 18:31:20 crc kubenswrapper[4720]: I1013 18:31:20.267247 4720 scope.go:117] "RemoveContainer" containerID="c77fe40affe14dbeb7468c293266be64c79c0acad6b34ec12e0a7acfe3df7ad5" Oct 13 18:31:20 crc kubenswrapper[4720]: I1013 18:31:20.335421 4720 scope.go:117] "RemoveContainer" containerID="fe59d51728023961474d62fa3a2b92e0c0c6705e4434e4fd321022e5b3ecfe6d" Oct 13 18:31:20 crc kubenswrapper[4720]: E1013 18:31:20.335851 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe59d51728023961474d62fa3a2b92e0c0c6705e4434e4fd321022e5b3ecfe6d\": container with ID starting with fe59d51728023961474d62fa3a2b92e0c0c6705e4434e4fd321022e5b3ecfe6d not found: ID does not exist" containerID="fe59d51728023961474d62fa3a2b92e0c0c6705e4434e4fd321022e5b3ecfe6d" Oct 13 18:31:20 crc kubenswrapper[4720]: I1013 18:31:20.335879 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe59d51728023961474d62fa3a2b92e0c0c6705e4434e4fd321022e5b3ecfe6d"} err="failed to get container status \"fe59d51728023961474d62fa3a2b92e0c0c6705e4434e4fd321022e5b3ecfe6d\": rpc error: code = NotFound desc = could not find container \"fe59d51728023961474d62fa3a2b92e0c0c6705e4434e4fd321022e5b3ecfe6d\": container with ID starting with fe59d51728023961474d62fa3a2b92e0c0c6705e4434e4fd321022e5b3ecfe6d not found: ID does not exist" Oct 13 18:31:20 crc kubenswrapper[4720]: I1013 18:31:20.335899 4720 scope.go:117] "RemoveContainer" containerID="ed56956c50cb7b520f46cb78919dd007cf3d14679bf42485156364676c184542" Oct 13 18:31:20 crc kubenswrapper[4720]: E1013 18:31:20.336239 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed56956c50cb7b520f46cb78919dd007cf3d14679bf42485156364676c184542\": container with ID starting with ed56956c50cb7b520f46cb78919dd007cf3d14679bf42485156364676c184542 not found: ID does not exist" containerID="ed56956c50cb7b520f46cb78919dd007cf3d14679bf42485156364676c184542" Oct 13 18:31:20 crc kubenswrapper[4720]: I1013 18:31:20.336262 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed56956c50cb7b520f46cb78919dd007cf3d14679bf42485156364676c184542"} err="failed to get container status \"ed56956c50cb7b520f46cb78919dd007cf3d14679bf42485156364676c184542\": rpc error: code = NotFound desc = could not find container \"ed56956c50cb7b520f46cb78919dd007cf3d14679bf42485156364676c184542\": container with ID starting with ed56956c50cb7b520f46cb78919dd007cf3d14679bf42485156364676c184542 not found: ID does not exist" Oct 13 18:31:20 crc kubenswrapper[4720]: I1013 18:31:20.336275 4720 scope.go:117] "RemoveContainer" containerID="c77fe40affe14dbeb7468c293266be64c79c0acad6b34ec12e0a7acfe3df7ad5" Oct 13 18:31:20 crc kubenswrapper[4720]: E1013 18:31:20.336704 4720 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"c77fe40affe14dbeb7468c293266be64c79c0acad6b34ec12e0a7acfe3df7ad5\": container with ID starting with c77fe40affe14dbeb7468c293266be64c79c0acad6b34ec12e0a7acfe3df7ad5 not found: ID does not exist" containerID="c77fe40affe14dbeb7468c293266be64c79c0acad6b34ec12e0a7acfe3df7ad5" Oct 13 18:31:20 crc kubenswrapper[4720]: I1013 18:31:20.336727 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c77fe40affe14dbeb7468c293266be64c79c0acad6b34ec12e0a7acfe3df7ad5"} err="failed to get container status \"c77fe40affe14dbeb7468c293266be64c79c0acad6b34ec12e0a7acfe3df7ad5\": rpc error: code = NotFound desc = could not find container \"c77fe40affe14dbeb7468c293266be64c79c0acad6b34ec12e0a7acfe3df7ad5\": container with ID starting with c77fe40affe14dbeb7468c293266be64c79c0acad6b34ec12e0a7acfe3df7ad5 not found: ID does not exist" Oct 13 18:31:21 crc kubenswrapper[4720]: I1013 18:31:21.186570 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c99042a-9438-4469-9df9-7c91c96c7568" path="/var/lib/kubelet/pods/8c99042a-9438-4469-9df9-7c91c96c7568/volumes" Oct 13 18:31:28 crc kubenswrapper[4720]: E1013 18:31:28.650354 4720 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod27f647fb_994a_4a9a_96a3_8fb1f285c006.slice/crio-24cdfbc09740b3a2aae5aeed4e99d83696dff3fe436bbc2fd1adb5efc4dbd504.scope\": RecentStats: unable to find data in memory cache]" Oct 13 18:31:38 crc kubenswrapper[4720]: E1013 18:31:38.946425 4720 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod27f647fb_994a_4a9a_96a3_8fb1f285c006.slice/crio-24cdfbc09740b3a2aae5aeed4e99d83696dff3fe436bbc2fd1adb5efc4dbd504.scope\": RecentStats: unable to find data in memory cache]" Oct 13 18:31:45 crc kubenswrapper[4720]: I1013 18:31:45.213050 4720 patch_prober.go:28] interesting pod/machine-config-daemon-htwnl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 18:31:45 crc kubenswrapper[4720]: I1013 18:31:45.213701 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 18:31:45 crc kubenswrapper[4720]: I1013 18:31:45.213752 4720 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" Oct 13 18:31:45 crc kubenswrapper[4720]: I1013 18:31:45.214552 4720 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"daa911e13591966ffbf77524066a5e02ca5e52e1c46be154956ad33bdd15eac1"} pod="openshift-machine-config-operator/machine-config-daemon-htwnl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 13 18:31:45 crc kubenswrapper[4720]: I1013 18:31:45.214619 4720 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" containerName="machine-config-daemon" containerID="cri-o://daa911e13591966ffbf77524066a5e02ca5e52e1c46be154956ad33bdd15eac1" gracePeriod=600 Oct 13 18:31:45 crc kubenswrapper[4720]: I1013 18:31:45.458828 4720 generic.go:334] "Generic (PLEG): container finished" podID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" containerID="daa911e13591966ffbf77524066a5e02ca5e52e1c46be154956ad33bdd15eac1" exitCode=0 Oct 13 18:31:45 crc kubenswrapper[4720]: I1013 18:31:45.458956 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" event={"ID":"ce442c80-fcde-4b79-b6f9-f8f25771dfd4","Type":"ContainerDied","Data":"daa911e13591966ffbf77524066a5e02ca5e52e1c46be154956ad33bdd15eac1"} Oct 13 18:31:45 crc kubenswrapper[4720]: I1013 18:31:45.459158 4720 scope.go:117] "RemoveContainer" containerID="79ca5c733b5eaab4a6d8e4dab434c0461ff00748dd7441ce45a3602ece00a48e" Oct 13 18:31:46 crc kubenswrapper[4720]: I1013 18:31:46.471562 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" event={"ID":"ce442c80-fcde-4b79-b6f9-f8f25771dfd4","Type":"ContainerStarted","Data":"2b7bf512d55af1f6e87e3d107365e78aebc383d4ecbb5a0b274805c3690958bc"} Oct 13 18:31:49 crc kubenswrapper[4720]: E1013 18:31:49.210300 4720 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod27f647fb_994a_4a9a_96a3_8fb1f285c006.slice/crio-24cdfbc09740b3a2aae5aeed4e99d83696dff3fe436bbc2fd1adb5efc4dbd504.scope\": RecentStats: unable to find data in memory cache]" Oct 13 18:31:59 crc kubenswrapper[4720]: E1013 18:31:59.442968 4720 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod27f647fb_994a_4a9a_96a3_8fb1f285c006.slice/crio-24cdfbc09740b3a2aae5aeed4e99d83696dff3fe436bbc2fd1adb5efc4dbd504.scope\": RecentStats: unable to find data in memory cache]" Oct 13 18:32:09 crc kubenswrapper[4720]: E1013 18:32:09.715777 4720 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod27f647fb_994a_4a9a_96a3_8fb1f285c006.slice/crio-24cdfbc09740b3a2aae5aeed4e99d83696dff3fe436bbc2fd1adb5efc4dbd504.scope\": RecentStats: unable to find data in memory cache]" Oct 13 18:32:15 crc kubenswrapper[4720]: E1013 18:32:15.182793 4720 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/1b3a30525c481c325489114f53e1a84f3df22c7b6ff406d8fda454531dda306d/diff" to get inode usage: stat /var/lib/containers/storage/overlay/1b3a30525c481c325489114f53e1a84f3df22c7b6ff406d8fda454531dda306d/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/openshift-marketplace_community-operators-9wrws_27f647fb-994a-4a9a-96a3-8fb1f285c006/registry-server/0.log" to get inode usage: stat /var/log/pods/openshift-marketplace_community-operators-9wrws_27f647fb-994a-4a9a-96a3-8fb1f285c006/registry-server/0.log: no such file or directory Oct 13 18:32:55 crc kubenswrapper[4720]: I1013 18:32:55.276532 4720 generic.go:334] "Generic (PLEG): container finished" podID="05df5caf-006a-4071-b2f2-25fc6b06d156" 
containerID="fb9ff718e9905527a223a83d9386147be4fe82b9d70f96b2fc36327f3038da9a" exitCode=0 Oct 13 18:32:55 crc kubenswrapper[4720]: I1013 18:32:55.276667 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qqx25/must-gather-t7pst" event={"ID":"05df5caf-006a-4071-b2f2-25fc6b06d156","Type":"ContainerDied","Data":"fb9ff718e9905527a223a83d9386147be4fe82b9d70f96b2fc36327f3038da9a"} Oct 13 18:32:55 crc kubenswrapper[4720]: I1013 18:32:55.278182 4720 scope.go:117] "RemoveContainer" containerID="fb9ff718e9905527a223a83d9386147be4fe82b9d70f96b2fc36327f3038da9a" Oct 13 18:32:55 crc kubenswrapper[4720]: I1013 18:32:55.541355 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-qqx25_must-gather-t7pst_05df5caf-006a-4071-b2f2-25fc6b06d156/gather/0.log" Oct 13 18:33:05 crc kubenswrapper[4720]: I1013 18:33:05.410914 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-qqx25/must-gather-t7pst"] Oct 13 18:33:05 crc kubenswrapper[4720]: I1013 18:33:05.411944 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-qqx25/must-gather-t7pst" podUID="05df5caf-006a-4071-b2f2-25fc6b06d156" containerName="copy" containerID="cri-o://193bcc46713c60adca84947ababbfb5a3668b55da58c8889c624a0e9eee19582" gracePeriod=2 Oct 13 18:33:05 crc kubenswrapper[4720]: I1013 18:33:05.422887 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-qqx25/must-gather-t7pst"] Oct 13 18:33:05 crc kubenswrapper[4720]: I1013 18:33:05.922709 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-qqx25_must-gather-t7pst_05df5caf-006a-4071-b2f2-25fc6b06d156/copy/0.log" Oct 13 18:33:05 crc kubenswrapper[4720]: I1013 18:33:05.923821 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qqx25/must-gather-t7pst" Oct 13 18:33:05 crc kubenswrapper[4720]: I1013 18:33:05.999205 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/05df5caf-006a-4071-b2f2-25fc6b06d156-must-gather-output\") pod \"05df5caf-006a-4071-b2f2-25fc6b06d156\" (UID: \"05df5caf-006a-4071-b2f2-25fc6b06d156\") " Oct 13 18:33:05 crc kubenswrapper[4720]: I1013 18:33:05.999445 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dg4st\" (UniqueName: \"kubernetes.io/projected/05df5caf-006a-4071-b2f2-25fc6b06d156-kube-api-access-dg4st\") pod \"05df5caf-006a-4071-b2f2-25fc6b06d156\" (UID: \"05df5caf-006a-4071-b2f2-25fc6b06d156\") " Oct 13 18:33:06 crc kubenswrapper[4720]: I1013 18:33:06.007806 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05df5caf-006a-4071-b2f2-25fc6b06d156-kube-api-access-dg4st" (OuterVolumeSpecName: "kube-api-access-dg4st") pod "05df5caf-006a-4071-b2f2-25fc6b06d156" (UID: "05df5caf-006a-4071-b2f2-25fc6b06d156"). InnerVolumeSpecName "kube-api-access-dg4st". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 13 18:33:06 crc kubenswrapper[4720]: I1013 18:33:06.101785 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dg4st\" (UniqueName: \"kubernetes.io/projected/05df5caf-006a-4071-b2f2-25fc6b06d156-kube-api-access-dg4st\") on node \"crc\" DevicePath \"\"" Oct 13 18:33:06 crc kubenswrapper[4720]: I1013 18:33:06.142212 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05df5caf-006a-4071-b2f2-25fc6b06d156-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "05df5caf-006a-4071-b2f2-25fc6b06d156" (UID: "05df5caf-006a-4071-b2f2-25fc6b06d156"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 13 18:33:06 crc kubenswrapper[4720]: I1013 18:33:06.203684 4720 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/05df5caf-006a-4071-b2f2-25fc6b06d156-must-gather-output\") on node \"crc\" DevicePath \"\"" Oct 13 18:33:06 crc kubenswrapper[4720]: I1013 18:33:06.382635 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-qqx25_must-gather-t7pst_05df5caf-006a-4071-b2f2-25fc6b06d156/copy/0.log" Oct 13 18:33:06 crc kubenswrapper[4720]: I1013 18:33:06.382973 4720 generic.go:334] "Generic (PLEG): container finished" podID="05df5caf-006a-4071-b2f2-25fc6b06d156" containerID="193bcc46713c60adca84947ababbfb5a3668b55da58c8889c624a0e9eee19582" exitCode=143 Oct 13 18:33:06 crc kubenswrapper[4720]: I1013 18:33:06.383011 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qqx25/must-gather-t7pst" Oct 13 18:33:06 crc kubenswrapper[4720]: I1013 18:33:06.383027 4720 scope.go:117] "RemoveContainer" containerID="193bcc46713c60adca84947ababbfb5a3668b55da58c8889c624a0e9eee19582" Oct 13 18:33:06 crc kubenswrapper[4720]: I1013 18:33:06.411345 4720 scope.go:117] "RemoveContainer" containerID="fb9ff718e9905527a223a83d9386147be4fe82b9d70f96b2fc36327f3038da9a" Oct 13 18:33:06 crc kubenswrapper[4720]: I1013 18:33:06.481496 4720 scope.go:117] "RemoveContainer" containerID="193bcc46713c60adca84947ababbfb5a3668b55da58c8889c624a0e9eee19582" Oct 13 18:33:06 crc kubenswrapper[4720]: E1013 18:33:06.481876 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"193bcc46713c60adca84947ababbfb5a3668b55da58c8889c624a0e9eee19582\": container with ID starting with 193bcc46713c60adca84947ababbfb5a3668b55da58c8889c624a0e9eee19582 not found: ID does not exist" containerID="193bcc46713c60adca84947ababbfb5a3668b55da58c8889c624a0e9eee19582" Oct 13 18:33:06 crc kubenswrapper[4720]: I1013 18:33:06.481901 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"193bcc46713c60adca84947ababbfb5a3668b55da58c8889c624a0e9eee19582"} err="failed to get container status \"193bcc46713c60adca84947ababbfb5a3668b55da58c8889c624a0e9eee19582\": rpc error: code = NotFound desc = could not find container \"193bcc46713c60adca84947ababbfb5a3668b55da58c8889c624a0e9eee19582\": container with ID starting with 193bcc46713c60adca84947ababbfb5a3668b55da58c8889c624a0e9eee19582 not found: ID does not exist" Oct 13 18:33:06 crc kubenswrapper[4720]: I1013 18:33:06.481923 4720 scope.go:117] "RemoveContainer" containerID="fb9ff718e9905527a223a83d9386147be4fe82b9d70f96b2fc36327f3038da9a" Oct 13 18:33:06 crc 
kubenswrapper[4720]: E1013 18:33:06.482652 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb9ff718e9905527a223a83d9386147be4fe82b9d70f96b2fc36327f3038da9a\": container with ID starting with fb9ff718e9905527a223a83d9386147be4fe82b9d70f96b2fc36327f3038da9a not found: ID does not exist" containerID="fb9ff718e9905527a223a83d9386147be4fe82b9d70f96b2fc36327f3038da9a" Oct 13 18:33:06 crc kubenswrapper[4720]: I1013 18:33:06.482960 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb9ff718e9905527a223a83d9386147be4fe82b9d70f96b2fc36327f3038da9a"} err="failed to get container status \"fb9ff718e9905527a223a83d9386147be4fe82b9d70f96b2fc36327f3038da9a\": rpc error: code = NotFound desc = could not find container \"fb9ff718e9905527a223a83d9386147be4fe82b9d70f96b2fc36327f3038da9a\": container with ID starting with fb9ff718e9905527a223a83d9386147be4fe82b9d70f96b2fc36327f3038da9a not found: ID does not exist" Oct 13 18:33:07 crc kubenswrapper[4720]: I1013 18:33:07.182678 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05df5caf-006a-4071-b2f2-25fc6b06d156" path="/var/lib/kubelet/pods/05df5caf-006a-4071-b2f2-25fc6b06d156/volumes" Oct 13 18:33:45 crc kubenswrapper[4720]: I1013 18:33:45.212411 4720 patch_prober.go:28] interesting pod/machine-config-daemon-htwnl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 18:33:45 crc kubenswrapper[4720]: I1013 18:33:45.213185 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 18:33:47 crc kubenswrapper[4720]: I1013 18:33:47.796388 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="f8338f95-b766-4ce8-b60e-020957cdee12" containerName="galera" probeResult="failure" output="command timed out" Oct 13 18:34:15 crc kubenswrapper[4720]: I1013 18:34:15.213644 4720 patch_prober.go:28] interesting pod/machine-config-daemon-htwnl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 18:34:15 crc kubenswrapper[4720]: I1013 18:34:15.214166 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 18:34:21 crc kubenswrapper[4720]: I1013 18:34:21.399002 4720 scope.go:117] "RemoveContainer" containerID="b312ffd4ca60ad21e868ef21d0dcd18b3627853b804b4b056bef82e2063baa2e" Oct 13 18:34:45 crc kubenswrapper[4720]: I1013 18:34:45.092303 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tg9hm"] Oct 13 18:34:45 crc kubenswrapper[4720]: E1013 18:34:45.095295 4720 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="05df5caf-006a-4071-b2f2-25fc6b06d156" containerName="copy" Oct 13 18:34:45 crc kubenswrapper[4720]: I1013 18:34:45.095331 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="05df5caf-006a-4071-b2f2-25fc6b06d156" containerName="copy" Oct 13 18:34:45 crc kubenswrapper[4720]: E1013 18:34:45.095346 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27f647fb-994a-4a9a-96a3-8fb1f285c006" containerName="extract-utilities" Oct 13 18:34:45 crc kubenswrapper[4720]: I1013 18:34:45.095355 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="27f647fb-994a-4a9a-96a3-8fb1f285c006" containerName="extract-utilities" Oct 13 18:34:45 crc kubenswrapper[4720]: E1013 18:34:45.095370 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c99042a-9438-4469-9df9-7c91c96c7568" containerName="extract-utilities" Oct 13 18:34:45 crc kubenswrapper[4720]: I1013 18:34:45.095379 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c99042a-9438-4469-9df9-7c91c96c7568" containerName="extract-utilities" Oct 13 18:34:45 crc kubenswrapper[4720]: E1013 18:34:45.095409 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c99042a-9438-4469-9df9-7c91c96c7568" containerName="extract-content" Oct 13 18:34:45 crc kubenswrapper[4720]: I1013 18:34:45.095417 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c99042a-9438-4469-9df9-7c91c96c7568" containerName="extract-content" Oct 13 18:34:45 crc kubenswrapper[4720]: E1013 18:34:45.095433 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27f647fb-994a-4a9a-96a3-8fb1f285c006" containerName="registry-server" Oct 13 18:34:45 crc kubenswrapper[4720]: I1013 18:34:45.095440 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="27f647fb-994a-4a9a-96a3-8fb1f285c006" containerName="registry-server" Oct 13 18:34:45 crc kubenswrapper[4720]: E1013 18:34:45.095471 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27f647fb-994a-4a9a-96a3-8fb1f285c006" containerName="extract-content" Oct 13 18:34:45 crc kubenswrapper[4720]: I1013 18:34:45.095479 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="27f647fb-994a-4a9a-96a3-8fb1f285c006" containerName="extract-content" Oct 13 18:34:45 crc kubenswrapper[4720]: E1013 18:34:45.095495 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c99042a-9438-4469-9df9-7c91c96c7568" containerName="registry-server" Oct 13 18:34:45 crc kubenswrapper[4720]: I1013 18:34:45.095503 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c99042a-9438-4469-9df9-7c91c96c7568" containerName="registry-server" Oct 13 18:34:45 crc kubenswrapper[4720]: E1013 18:34:45.095521 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05df5caf-006a-4071-b2f2-25fc6b06d156" containerName="gather" Oct 13 18:34:45 crc kubenswrapper[4720]: I1013 18:34:45.095528 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="05df5caf-006a-4071-b2f2-25fc6b06d156" containerName="gather" Oct 13 18:34:45 crc kubenswrapper[4720]: I1013 18:34:45.095697 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="05df5caf-006a-4071-b2f2-25fc6b06d156" containerName="copy" Oct 13 18:34:45 crc kubenswrapper[4720]: I1013 18:34:45.095710 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c99042a-9438-4469-9df9-7c91c96c7568" containerName="registry-server" Oct 13 18:34:45 crc kubenswrapper[4720]: I1013 18:34:45.095734 4720 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="05df5caf-006a-4071-b2f2-25fc6b06d156" containerName="gather" Oct 13 18:34:45 crc kubenswrapper[4720]: I1013 18:34:45.095744 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="27f647fb-994a-4a9a-96a3-8fb1f285c006" containerName="registry-server" Oct 13 18:34:45 crc kubenswrapper[4720]: I1013 18:34:45.097288 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tg9hm" Oct 13 18:34:45 crc kubenswrapper[4720]: I1013 18:34:45.113869 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tg9hm"] Oct 13 18:34:45 crc kubenswrapper[4720]: I1013 18:34:45.180892 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzmfm\" (UniqueName: \"kubernetes.io/projected/2d226f5d-5e5e-4241-9008-e8f28927578c-kube-api-access-kzmfm\") pod \"certified-operators-tg9hm\" (UID: \"2d226f5d-5e5e-4241-9008-e8f28927578c\") " pod="openshift-marketplace/certified-operators-tg9hm" Oct 13 18:34:45 crc kubenswrapper[4720]: I1013 18:34:45.181294 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d226f5d-5e5e-4241-9008-e8f28927578c-utilities\") pod \"certified-operators-tg9hm\" (UID: \"2d226f5d-5e5e-4241-9008-e8f28927578c\") " pod="openshift-marketplace/certified-operators-tg9hm" Oct 13 18:34:45 crc kubenswrapper[4720]: I1013 18:34:45.181324 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d226f5d-5e5e-4241-9008-e8f28927578c-catalog-content\") pod \"certified-operators-tg9hm\" (UID: \"2d226f5d-5e5e-4241-9008-e8f28927578c\") " pod="openshift-marketplace/certified-operators-tg9hm" Oct 13 18:34:45 crc kubenswrapper[4720]: I1013 18:34:45.212784 4720 patch_prober.go:28] interesting pod/machine-config-daemon-htwnl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 13 18:34:45 crc kubenswrapper[4720]: I1013 18:34:45.212869 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 13 18:34:45 crc kubenswrapper[4720]: I1013 18:34:45.212939 4720 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" Oct 13 18:34:45 crc kubenswrapper[4720]: I1013 18:34:45.214000 4720 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2b7bf512d55af1f6e87e3d107365e78aebc383d4ecbb5a0b274805c3690958bc"} pod="openshift-machine-config-operator/machine-config-daemon-htwnl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 13 18:34:45 crc kubenswrapper[4720]: I1013 18:34:45.214108 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" 
containerName="machine-config-daemon" containerID="cri-o://2b7bf512d55af1f6e87e3d107365e78aebc383d4ecbb5a0b274805c3690958bc" gracePeriod=600 Oct 13 18:34:45 crc kubenswrapper[4720]: I1013 18:34:45.283017 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzmfm\" (UniqueName: \"kubernetes.io/projected/2d226f5d-5e5e-4241-9008-e8f28927578c-kube-api-access-kzmfm\") pod \"certified-operators-tg9hm\" (UID: \"2d226f5d-5e5e-4241-9008-e8f28927578c\") " pod="openshift-marketplace/certified-operators-tg9hm" Oct 13 18:34:45 crc kubenswrapper[4720]: I1013 18:34:45.283278 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d226f5d-5e5e-4241-9008-e8f28927578c-utilities\") pod \"certified-operators-tg9hm\" (UID: \"2d226f5d-5e5e-4241-9008-e8f28927578c\") " pod="openshift-marketplace/certified-operators-tg9hm" Oct 13 18:34:45 crc kubenswrapper[4720]: I1013 18:34:45.283307 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d226f5d-5e5e-4241-9008-e8f28927578c-catalog-content\") pod \"certified-operators-tg9hm\" (UID: \"2d226f5d-5e5e-4241-9008-e8f28927578c\") " pod="openshift-marketplace/certified-operators-tg9hm" Oct 13 18:34:45 crc kubenswrapper[4720]: I1013 18:34:45.283806 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d226f5d-5e5e-4241-9008-e8f28927578c-catalog-content\") pod \"certified-operators-tg9hm\" (UID: \"2d226f5d-5e5e-4241-9008-e8f28927578c\") " pod="openshift-marketplace/certified-operators-tg9hm" Oct 13 18:34:45 crc kubenswrapper[4720]: I1013 18:34:45.284101 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d226f5d-5e5e-4241-9008-e8f28927578c-utilities\") pod \"certified-operators-tg9hm\" (UID: \"2d226f5d-5e5e-4241-9008-e8f28927578c\") " pod="openshift-marketplace/certified-operators-tg9hm" Oct 13 18:34:45 crc kubenswrapper[4720]: I1013 18:34:45.301754 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzmfm\" (UniqueName: \"kubernetes.io/projected/2d226f5d-5e5e-4241-9008-e8f28927578c-kube-api-access-kzmfm\") pod \"certified-operators-tg9hm\" (UID: \"2d226f5d-5e5e-4241-9008-e8f28927578c\") " pod="openshift-marketplace/certified-operators-tg9hm" Oct 13 18:34:45 crc kubenswrapper[4720]: E1013 18:34:45.335935 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" Oct 13 18:34:45 crc kubenswrapper[4720]: I1013 18:34:45.429568 4720 util.go:30] "No sandbox for pod can be found. 
Oct 13 18:34:45 crc kubenswrapper[4720]: I1013 18:34:45.548143 4720 generic.go:334] "Generic (PLEG): container finished" podID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4" containerID="2b7bf512d55af1f6e87e3d107365e78aebc383d4ecbb5a0b274805c3690958bc" exitCode=0
Oct 13 18:34:45 crc kubenswrapper[4720]: I1013 18:34:45.548521 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" event={"ID":"ce442c80-fcde-4b79-b6f9-f8f25771dfd4","Type":"ContainerDied","Data":"2b7bf512d55af1f6e87e3d107365e78aebc383d4ecbb5a0b274805c3690958bc"}
Oct 13 18:34:45 crc kubenswrapper[4720]: I1013 18:34:45.548560 4720 scope.go:117] "RemoveContainer" containerID="daa911e13591966ffbf77524066a5e02ca5e52e1c46be154956ad33bdd15eac1"
Oct 13 18:34:45 crc kubenswrapper[4720]: I1013 18:34:45.549481 4720 scope.go:117] "RemoveContainer" containerID="2b7bf512d55af1f6e87e3d107365e78aebc383d4ecbb5a0b274805c3690958bc"
Oct 13 18:34:45 crc kubenswrapper[4720]: E1013 18:34:45.549760 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4"
Oct 13 18:34:45 crc kubenswrapper[4720]: I1013 18:34:45.966701 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tg9hm"]
Oct 13 18:34:46 crc kubenswrapper[4720]: I1013 18:34:46.562912 4720 generic.go:334] "Generic (PLEG): container finished" podID="2d226f5d-5e5e-4241-9008-e8f28927578c" containerID="2745e13a740ddae7a06f84c85339de7d0910feae9508d0679f0d3eaac8503043" exitCode=0
Oct 13 18:34:46 crc kubenswrapper[4720]: I1013 18:34:46.563045 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tg9hm" event={"ID":"2d226f5d-5e5e-4241-9008-e8f28927578c","Type":"ContainerDied","Data":"2745e13a740ddae7a06f84c85339de7d0910feae9508d0679f0d3eaac8503043"}
Oct 13 18:34:46 crc kubenswrapper[4720]: I1013 18:34:46.563251 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tg9hm" event={"ID":"2d226f5d-5e5e-4241-9008-e8f28927578c","Type":"ContainerStarted","Data":"c96c918a06b37809f7f91080a6be8115cd4b3f4cbe3ddc879cdf39726994e69e"}
Oct 13 18:34:47 crc kubenswrapper[4720]: I1013 18:34:47.573573 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tg9hm" event={"ID":"2d226f5d-5e5e-4241-9008-e8f28927578c","Type":"ContainerStarted","Data":"9544fe4c01027f1de64afaee13aca195b7e1e1f639f289347024db0221ccda9c"}
Oct 13 18:34:48 crc kubenswrapper[4720]: I1013 18:34:48.586402 4720 generic.go:334] "Generic (PLEG): container finished" podID="2d226f5d-5e5e-4241-9008-e8f28927578c" containerID="9544fe4c01027f1de64afaee13aca195b7e1e1f639f289347024db0221ccda9c" exitCode=0
Oct 13 18:34:48 crc kubenswrapper[4720]: I1013 18:34:48.586527 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tg9hm" event={"ID":"2d226f5d-5e5e-4241-9008-e8f28927578c","Type":"ContainerDied","Data":"9544fe4c01027f1de64afaee13aca195b7e1e1f639f289347024db0221ccda9c"}
Oct 13 18:34:49 crc kubenswrapper[4720]: I1013 18:34:49.599264 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tg9hm" event={"ID":"2d226f5d-5e5e-4241-9008-e8f28927578c","Type":"ContainerStarted","Data":"ff932f192e6086bbda2ba7a8ecb8e7c479d0385a03c49a4637f76871d59a4c82"}
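
The SyncLoop (PLEG) lines serialize pod lifecycle events whose JSON shape is visible in the log: an ID (pod UID), a Type, and a Data payload carrying a container or sandbox ID. A minimal struct matching that logged shape (field types are assumptions; the kubelet's own type lives under pkg/kubelet/pleg):

    package plegnotes

    // PodLifecycleEvent mirrors the event={"ID":...,"Type":...,"Data":...}
    // payloads above: ContainerStarted/ContainerDied for the extract and
    // serve containers of certified-operators-tg9hm.
    type PodLifecycleEvent struct {
    	ID   string      // pod UID
    	Type string      // e.g. "ContainerStarted", "ContainerDied"
    	Data interface{} // container ID (or sandbox ID) the event refers to
    }
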
Oct 13 18:34:49 crc kubenswrapper[4720]: I1013 18:34:49.650519 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tg9hm" podStartSLOduration=2.223761311 podStartE2EDuration="4.650496s" podCreationTimestamp="2025-10-13 18:34:45 +0000 UTC" firstStartedPulling="2025-10-13 18:34:46.56579989 +0000 UTC m=+4232.023050032" lastFinishedPulling="2025-10-13 18:34:48.992534549 +0000 UTC m=+4234.449784721" observedRunningTime="2025-10-13 18:34:49.635388599 +0000 UTC m=+4235.092638791" watchObservedRunningTime="2025-10-13 18:34:49.650496 +0000 UTC m=+4235.107746142"
Oct 13 18:34:55 crc kubenswrapper[4720]: I1013 18:34:55.430483 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tg9hm"
Oct 13 18:34:55 crc kubenswrapper[4720]: I1013 18:34:55.433015 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tg9hm"
Oct 13 18:34:55 crc kubenswrapper[4720]: I1013 18:34:55.490306 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tg9hm"
Oct 13 18:34:55 crc kubenswrapper[4720]: I1013 18:34:55.736786 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tg9hm"
Oct 13 18:34:55 crc kubenswrapper[4720]: I1013 18:34:55.795789 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tg9hm"]
Oct 13 18:34:57 crc kubenswrapper[4720]: I1013 18:34:57.193550 4720 scope.go:117] "RemoveContainer" containerID="2b7bf512d55af1f6e87e3d107365e78aebc383d4ecbb5a0b274805c3690958bc"
Oct 13 18:34:57 crc kubenswrapper[4720]: E1013 18:34:57.194499 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-htwnl_openshift-machine-config-operator(ce442c80-fcde-4b79-b6f9-f8f25771dfd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-htwnl" podUID="ce442c80-fcde-4b79-b6f9-f8f25771dfd4"
Oct 13 18:34:57 crc kubenswrapper[4720]: I1013 18:34:57.689925 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tg9hm" podUID="2d226f5d-5e5e-4241-9008-e8f28927578c" containerName="registry-server" containerID="cri-o://ff932f192e6086bbda2ba7a8ecb8e7c479d0385a03c49a4637f76871d59a4c82" gracePeriod=2
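
The startup-latency entry above is internally consistent: podStartSLOduration is the end-to-end duration minus the image-pull window. A quick check in Go, using the timestamps exactly as logged (the small residual against the logged 2.223761311 is rounding, since the kubelet computes from the monotonic m=+ offsets rather than the truncated E2E figure):

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
    	firstPull, _ := time.Parse(layout, "2025-10-13 18:34:46.56579989 +0000 UTC")
    	lastPull, _ := time.Parse(layout, "2025-10-13 18:34:48.992534549 +0000 UTC")
    	e2e := 4650496 * time.Microsecond // podStartE2EDuration="4.650496s"

    	// SLO duration excludes the image-pull window from the E2E duration.
    	slo := e2e - lastPull.Sub(firstPull)
    	fmt.Println(slo) // ≈ 2.223761341s vs logged podStartSLOduration=2.223761311
    }
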